Python from Wise Guy's Viewpoint

THE GOOD:

1. pickle

2. simplicity and uniformity

3. big library (bigger would be even better)

THE BAD:

1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell).
90% of the code is function applications. Why not make it convenient?

2. Statements vs. expressions business is very dumb. Try writing
a = if x:
        y
    else:
        z

3. no multimethods (why? Guido did not know Lisp, so he did not know
about them.) You now have to suffer from visitor patterns, etc., like
lowly Java monkeys.

4. splintering of the language: you have the inefficient main language,
and you have a different dialect being developed that needs type
declarations. Why not allow type declarations in the main language
instead, as an option (Lisp does it)?

5. Why do you need "def"? In Haskell, you'd write
square x = x * x
(see the sketch after this list)

6. Requiring "return" is also dumb (see #5)

7. Syntax and semantics of "lambda" should be identical to
function definitions (for simplicity and uniformity)

8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)

9. Syntax for arrays is also bad [a (b c d) e f] would be better
than [a, b(c,d), e, f]
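
A tiny Haskell sketch of points 5-7 (my illustration, not part of the
original post): definitions need no "def" and no "return", and a lambda
is just an anonymous spelling of the same definition.

square :: Int -> Int
square x = x * x

square' :: Int -> Int
square' = \x -> x * x  -- same semantics as square, written as a lambda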


P.S. If someone can forward this to python-dev, you can probably save some
people a lot of soul-searching
Jul 18 '05
pr***********@comcast.net wrote:

I offered my `black hole' program and got these responses:

Remi Vanicat <va*************@labri.fr>
`I don't see how this function can be useful...'

Jesse Tov <to*@eecs.harvREMOVEard.edu>
`we don't need functions with those types.'

Dirk Thierbach <dt********@gmx.de>
`recursive types are very often a result of a real typing error.'

"Marshall Spight" <ms*****@dnai.com>
`I don't believe the feature this function illustrates could be
useful.'

Will this be the response for any program that does, in fact, fool a
number of static type checkers?


You missed one important point here. It is, in fact, trivial to extend
Hindley-Milner type inference in such a way that it can deal with your
function. That's what OCaml does when given the -rectypes option.
However, it is a conscious, deliberate decision not to do so, at least
not by default!
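
For what it's worth, the construction stays expressible even without
equirecursive inference. A minimal Haskell sketch (names are mine): make
the recursion explicit in a newtype, and ordinary Hindley-Milner
inference suffices.

{-# LANGUAGE RankNTypes #-}
newtype BlackHole = BH (forall a. a -> BlackHole)

blackHole :: BlackHole
blackHole = BH (\_ -> blackHole)

absorb :: BlackHole -> a -> BlackHole
absorb (BH f) x = f x  -- absorb (absorb blackHole 1) "two" ... and so on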

Why is that? Well, Dirk's quote gives the reason. But let me elaborate.

The design of a type system is by no means canonical. In fact, it is
based on set of pragmatic decisions and trade-offs. Idealized, you start
with the trivial type system, which has only one type. Then you refine
it incrementally by distinguishing certain classes of values through
introduction of new types and typing rules. Introduction of typing rules
is based on the following criteria:

- Do they catch a lot of errors?
- Are these errors serious?
- Are they hard to localize otherwise?
- Does the refinement rule out useful constructions?
- Are such constructions common?
- Are they expressible by other means?
- Are the rules intuitive?
- Do they interact nicely with other rules?

And probably more. There are never any definite answers to any of these
questions. The trade-off depends on many factors, such as the problem
domain the language is used for. Usually the line is drawn based on
experience with other languages and known problem domains. In the case
of arbitrary recursive types, experience with languages that allowed
them has clearly shown that it caused much more grief than joy.

BTW, almost the same criteria as above apply when you as a programmer
use the type system as a tool and program it by introducing your own
types. It can be tremendously useful if you make the right strategic
decisions. OTOH, especially if you are inexperienced, type abstractions
might also turn out to be counterproductive.

- Andreas

--
Andreas Rossberg, ro******@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.

Jul 18 '05 #351
Matthew Danish <md*****@andrew.cmu.edu> writes:
1 is an integer. Simple type inference.
Okay, I'm not familiar enough with ML, and I didn't know that it won't
let you use 1 to represent 1.0 or 1/1 or whatever.
Lisp gets exact rational arithmetic right, why don't ML or Haskell?
Could you point to a case where they don't? I don't understand your
criticism at all. Is the ability to do modulo arithmetic "wrong"?

- fun fact 0 = 1 | fact n = n * fact (n - 1);
val fact = fn : int -> int
- fact 10;
val it = 3628800 : int
- fact 15;
val it = 1307674368000 : int (* ideally *)
Here's Haskell as provided by GHCi:

Prelude> let fact n = if n == 0 then 1 else n * fact (n-1)
Prelude> fact 10
3628800
Prelude> :i it
-- it is a variable
it :: Integer
Prelude> fact 15
1307674368000

So '10' defaults to Integer, which is infinite precision.
You can of course also do:

Prelude> fact 10.0
3628800.0
- 1 * 1.0;
val it = 1.0 : float (* ideally *)
Prelude> 1*1.0
1.0

The 1 is treated as a Double, since that is what * requires
- 2 / 4;
val it = 1/2 : ratio (* ideally *)
Prelude> 2/4
0.5
Prelude> (2/4 :: Rational)
1 % 2

The default is Double, but Rational is there if that's what you want
- 2 truncate 4;
val it = 0 : int


I assume this means:

Prelude> 2 `div` 4
0
Prelude> :i it
-- it is a variable
it :: Integer

If I understood you correctly, Haskell doesn't get it all that wrong?

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
Jul 18 '05 #352
pr***********@comcast.net wrote:

I offered my `black hole' program and got these responses:


It can be written in Haskell using laziness. I am quite sure you will
object in some way ;)

blackHole :: a
blackHole = error "black-hole"

*BH> :t blackHole 1 2 3 'a' "ho" (blackHole, 1.2)
blackHole 1 2 3 'a' "ho" (blackHole, 1.2) :: forall t. t

*BH> blackHole 1 2 3 'a' "ho" (blackHole, 1.2)
*** Exception: black-hole

*BH> let _ = blackHole 1 2 3 'a' "ho" (blackHole, 1.2) in "abcdef"
"abcdef"

Best regards,
Tom

--
.signature: Too many levels of symbolic links
Jul 18 '05 #353
Matthew Danish wrote:

Here is a very trivial example, in SML:

20 * 30

Multiplication, as well as literals, are overloaded. Depending on
whether you type this expression as Int8.int (8-bit integers) or
IntInf.int (infinite precision integer) the result is either 600 or an
overflow exception.
May I point out that the correct answer is 600, not overflow?


No, it's not. If I choose type Int8 then I do so precisely for the
reason that I want to be signalled if some computation overflows the
intended value domain. Otherwise I had chosen IntInf or something. You
see, this is exactly the reason why I say that the type system gives you
expressiveness.
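
To make this concrete, a Haskell sketch of my own (note that Haskell's
sized integers wrap silently where SML's Int8 raises Overflow):

import Data.Int (Int8)

atInt8 :: Int8
atInt8 = 20 * 30     -- wraps around to 88

atInteger :: Integer
atInteger = 20 * 30  -- 600, arbitrary precision

The type chosen for the identical source expression decides its meaning.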
Something that annoys me about many statically-typed languages is the
insistence that arithmetic operations should return the same type as the
operands.
I should note in this context that static types usually express
different things than dynamic ones, especially when it comes to number
types. In Lisp, the runtime tag of a number will usually describe the
representation of the number. This may well change between operations.
But static typing, at least in high-level languages, is about semantics.
If I choose a certain integer type I do so because it has the desired
domain, which I want to have checked - I'm not at all interested in its
representation. In fact, values of IntInf are likely to have multiple
representations depending on their size, but the type is invariant,
abstracting away from such low-level representation details.

Actually, I think dynamic typing should abstract from this as well, but
unfortunately this does not seem to happen.
2 / 4 is 1/2, not 0.
Integer division is not real division (and should not use the same name).
Arithmetically, 1 * 1.0 is
well-defined, so why can I not write this in an SML program?
Because the designers decided (rightly so, IMHO) that it is best to
avoid implicit conversions, since they might introduce subtle bugs and
do not coexist well with type inference. But anyway, this has nothing
to do with static vs dynamic typing - C allows the above, and there
might well be dynamic languages that raise a runtime error if you try it.
I do believe Haskell does it right, though, with its numeric tower
derived from Lisp.


Yes, in Haskell you can write the above, but for slightly different
reasons (integer literals are overloaded for floating types).

- Andreas

--
Andreas Rossberg, ro******@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.

Jul 18 '05 #354
You are right about Haskell, as I wrote in my initial post on this, it
has a numeric tower derived from Lisp (though I think it is not enabled
by default, or something?). Mostly I was annoyed that 20 * 30 being
possibly an overflow was brought up as an example of static typing being
expressive.

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #355
Matthew Danish wrote:
On Mon, Oct 27, 2003 at 10:08:15PM +0100, Joachim Durchholz wrote:
Matthew Danish wrote:
On Mon, Oct 27, 2003 at 07:00:01PM +0100, Andreas Rossberg wrote:

Pascal Costanza wrote:

>Can you show me an example of a program that does't make sense anymore
>when you strip off the static type information?

Here is a very trivial example, in SML:

20 * 30

Multiplication, as well as literals, are overloaded. Depending on
whether you type this expression as Int8.int (8-bit integers) or
IntInf.int (infinite precision integer) the result is either 600 or an
overflow exception.

May I point out that the correct answer is 600, not overflow?


No, the correct answer isn't 600 in all cases.
If it's infinite-precision arithmetic, the correct answer is indeed 600.
If it's 8-bit arithmetic with overflow, there is no correct answer.
If it's 8-bit arithmetic with wrap-around, the correct answer is 88.
If it's 8-bit arithmetic with saturation, the correct answer is 255.
(The result happens to be independent of whether the arithmetic is
signed or unsigned.)


What is this stuff? I am talking about integers here! You know, the
infinite set with the same cardinality as natural numbers. Why can't
the implementation figure out how to represent them most efficiently?


Hey, that 20*30 expression given above didn't say what type of integer
arithmetic was meant, or did it?
Of course, this doesn't say much about the distinction between static
and dynamic typing, actually the issues and unlucky fringe cases seem
very similar to me. (In particular, overloading and type inference don't
tell us whether the multiplication should be done in 8-bit, 16-bit,
machine-precision, infinite-precision, wrap-around, or saturated
arithmetic, and run-time types don't give us any useful answer either.)


Lisp gets exact rational arithmetic right, why don't ML or Haskell?


They do as well - infinite-precision integers are standard issue in
these languages.

It's just that Andreas's answer to the challenge of presenting a
type-dependent expression was simpler than anybody would have expected.
Of course, if your best response is that this all is ill-defined when it
isn't - then, I fear, you'll simply have to stay on your side of the fence.

Regards,
Jo

Jul 18 '05 #356
pr***********@comcast.net wrote:
"Marshall Spight" <ms*****@dnai.com> writes:
It would be really interesting to see a small but useful example
of a program that will not pass a statically typed language.
It seems to me that how easy it is to generate such programs
will be an interesting metric.
Would this count?

(defun noisy-apply (f arglist)
(format t "I am now about to apply ~s to ~s" f arglist)
(apply f arglist))


Moscow ML version 2.00 (June 2000)
Enter `quit();' to quit.
- fun noisyApply f x =
    (print "I am about to apply "; printVal f;
     print " to "; printVal x; print "\n";
     f x);
> val ('a, 'b) noisyApply = fn : ('a -> 'b) -> 'a -> 'b
- noisyApply Math.sin Math.pi;
I am about to apply fn to 3.14159265359
> val it = 1.22460635382E~16 : real

In this example, printVal requires some runtime type information. But
that does in no way preclude static type checking.

--
Andreas Rossberg, ro******@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.

Jul 18 '05 #357
Pascal Costanza wrote:
As an example of the kind of "overloading" (or type dispatch, if you
want)
you cannot express in dynamically typed languages: in Haskell, you can
naturally define functions which are overloaded on their return type,
i.e.
you don't need any value at all at runtime to do the "dispatch". For
example, consider an overloaded function fromString, that produces
values of
potentially arbitrary types from strings.
Wrong.


Not sure how the "wrong" relates to the quoted paragraph, but I guess
you mean my conjecture that you cannot dispatch on return types with
dynamic typing.
(defmethod from-string-expansion (to-type string)
(if (or (subtypep to-type 'sequence)
(subtypep to-type 'character)
(eq to-type t))
`(coerce ,string ',to-type)
`(coerce (read-from-string ,string) ',to-type)))

(defmacro set-from-string (x string &environment env)
(multiple-value-bind
(bound localp declarations)
(variable-information x env)
(declare (ignore bound localp))
(let ((type (or (cdr (assoc 'type declarations)) t)))
`(setf ,x ,(from-string-expansion type string)))))


Interesting example, thanks for showing it. I'm not fluent enough in
Lisp to understand how this actually works but it does not seem to be
extensible in a compositional way (you have to insert all type cases by
hand). And does the resolution work transitively? I.e. if I write some
complex function f using fromString somewhere, performing arbitrary
calculations on its return value of type t, and returning something of a
type containing t, is all this code parametric in t such that I can call
f expecting arbitrary result types? All this would be automatic in Haskell.

Also note that your transcript shows that your approach indeed requires
*explicit* type annotations, while you would rarely need them when doing
the same thing in a language like Haskell.

Anyway, your code is complicated enough to make my point that the static
type system gives you similar expressiveness with less fuss.
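
For reference, a sketch of the Haskell side of this comparison (the
class and instances are my own, not from a standard library):

class FromString a where
  fromString :: String -> a

instance FromString Int    where fromString = read
instance FromString Double where fromString = read

n = fromString "42" :: Int     -- instance chosen from the result type
x = fromString "0.5" :: Double -- no runtime value drives the dispatch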

- Andreas

--
Andreas Rossberg, ro******@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.

Jul 18 '05 #358
Andreas Rossberg wrote:
Pascal Costanza wrote:
As an example of the kind of "overloading" (or type dispatch, if you
want)
you cannot express in dynamically typed languages: in Haskell, you can
naturally define functions which are overloaded on their return type,
i.e.
you don't need any value at all at runtime to do the "dispatch". For
example, consider an overloaded function fromString, that produces
values of
potentially arbitrary types from strings.

Wrong.

Not sure how the "wrong" relates to the quoted paragraph, but I guess
you mean my conjecture that you cannot dispatch on return types with
dynamic typing.
(defmethod from-string-expansion (to-type string)
(if (or (subtypep to-type 'sequence)
(subtypep to-type 'character)
(eq to-type t))
`(coerce ,string ',to-type)
`(coerce (read-from-string ,string) ',to-type)))

(defmacro set-from-string (x string &environment env)
(multiple-value-bind
(bound localp declarations)
(variable-information x env)
(declare (ignore bound localp))
(let ((type (or (cdr (assoc 'type declarations)) t)))
`(setf ,x ,(from-string-expansion type string)))))

Interesting example, thanks for showing it. I'm not fluent enough in
Lisp to understand how this actually works but it does not seem to be
extensible in a compositional way (you have to insert all type cases by
hand).


No, at least not in the default method for from-string-expansion I have
shown above. That method only covers the default cases, and needs to
distinguish the different ways in which Common Lisp handles pre-defined types.

What you would need to do for your own types is to write your own
specialized versions of from-string-expansion:

(defmethod from-string-expansion ((to-type (eql 'foo)) string)
`(somehow-convert-string-to-foo ,string))

You don't need to modify the method given above. (You need to write
conversion routines for your own types in any language.)
And does the resolution work transitively? I.e. if I write some
complex function f using fromString somewhere, performing arbitrary
calculations on its return value of type t, and returning something of a
type containing t, is all this code parametric in t such that I can call
f expecting arbitrary result types? All this would be automatic in Haskell.
It should be possible in principle. The macro shown above is expanded at
compile-time (this is not 100% correct, but sufficiently correct for the
sake of our discussion). The &environment keyword captures the lexical
environment that is current at macro expansion time.
VARIABLE-INFORMATION looks up type information about a variable in that
environment, before the code is actually translated into its final form.

The original idea for such environment objects was that you could not
only look up standardized information about entities of the compilation
environment, but that you can also stuff in your own information. So in
principle, it should be possible to use this as a basis for a type
inferencing mechanism.

The downside of all this is that ANSI Common Lisp doesn't define the
needed functions to do all this. It defines such environment objects
only in very rudimentary ways that are actually not powerful enough.
CLtL2 had this stuff, but apparently it had some glitches that are hard
to get right, and it was decided to leave this stuff out of the ANSI
standard.

There are Common Lisp implementations that implement this functionality
to a certain degree, and apparently Duane Rettig of Franz Inc. is
currently working on an implementation that has sorted out the edge
cases. They seem to consider making this available as open source.

It's probably possible to do similar things with the ANSI-defined
DEFINE-COMPILER-MACRO, or with proprietary extensions like DEFTRANSFORM
in LispWorks.
Also note that your transcript shows that your approach indeed requires
*explicit* type annotations, while you would rarely need them when doing
the same thing in a language like Haskell.
I think it should be possible in principle to add type inferencing to
Common Lisp as a sophisticated macro package, even without proper
environment objects.

It would probably take serious effort to implement such a beast, and it
would be difficult to solve interferences with "third-party" Lisp code,
but at least you would not need to change the base compiler to add this.
Anyway, your code is complicated enough to make my point that the static
type system gives you similar expressiveness with less fuss.


Yes, you're right. If you would want/need static type checking by
default, it wouldn't make much sense to do this in a dynamically typed
language that treats static type checking as an exceptional case. And
vice versa.
Pascal

--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jul 18 '05 #359
Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> writes:
Python and Ruby get it wrong: the result is an integer (truncated
division), very different from the result of / on the same numbers
represented in a different type.


It's being fixed since python >= 2.2 (sort of):

>>> from __future__ import division
>>> 1/3
0.33333333333333331
>>> 5//2
2

'as
Jul 18 '05 #360
Matthew Danish <md*****@andrew.cmu.edu> writes:
On Mon, Oct 27, 2003 at 10:08:15PM +0100, Joachim Durchholz wrote:
Matthew Danish wrote:
On Mon, Oct 27, 2003 at 07:00:01PM +0100, Andreas Rossberg wrote:

>Pascal Costanza wrote:
>
>>Can you show me an example of a program that does't make sense anymore
>>when you strip off the static type information?
>
>Here is a very trivial example, in SML:
>
> 20 * 30
>
>Multiplication, as well as literals, are overloaded. Depending on
>whether you type this expression as Int8.int (8-bit integers) or
>IntInf.int (infinite precision integer) the result is either 600 or an
>overflow exception.

May I point out that the correct answer is 600, not overflow?
No, the correct answer isn't 600 in all cases.
If it's infinite-precision arithmetic, the correct answer is indeed 600.
If it's 8-bit arithmetic with overflow, there is no correct answer.
If it's 8-bit arithmetic with wrap-around, the correct answer is 88.
If it's 8-bit arithmetic with saturation, the correct answer is 255.
(The result happens to be independent of whether the arithmetic is
signed or unsigned.)


What is this stuff? I am talking about integers here! You know, the
infinite set with the same cardinality as natural numbers.


I didn't see you say that. If you write 20 * 30 in SML, then you are
*not* talking about the infinite set of integers but rather about a set
[-2^N..2^N-1] where N is something like 31 or 32. If you had written

20 * 30 : IntInf.int

or something like that, then you would have talked about the infinite
set of integers, and you would have gotten your "correct" result of
600. [I still have no idea why that is more "correct" than, say, 18.
That's the result in Z_{97}.]
Lisp gets exact rational arithmetic right, why don't ML or Haskell?
Clever compilers like CMUCL can even emit efficient code when it can
figure out the specific types. Surely a statically typed language would
find this even easier!


Sure, it is definitely not any harder than it is in Lisp. The problem
is that for many algorithms people want to be sure that the compiler
represents their values in machine words. Infinite precision is
needed sometimes, but in the majority of cases it is overkill. If you
need infinite precision, specify the type (IntInf.int in SML's case).
A clever compiler might optimize that like a Lisp compiler does. In
most other cases, why take any chances?

Matthias
Jul 18 '05 #361
Matthew Danish <md*****@andrew.cmu.edu> writes:
On Tue, Oct 28, 2003 at 11:20:45AM +0100, ke********@ii.uib.no wrote:
Matthew Danish <md*****@andrew.cmu.edu> writes:
What is this stuff? I am talking about integers here!
But the SML program isn't. Or it may be, and maybe not. So it's
ambigous without type information.
Why can't the implementation figure out how to represent them most
efficiently?


Because it needs a type annotation or inference to decide that the
numbers are indeed integers, and not a different set with different
arithmetic properties.


1 is an integer. Simple type inference. In Lisp, it's also a fixnum,
it's also an (unsigned-byte 23), it's also an (integer 1 (2)), etc.
Lisp gets exact rational arithmetic right, why don't ML or Haskell?


Could you point to a case where they don't? I don't understand your
criticism at all. Is the ability to do modulo arithmetic "wrong"?


- fun fact 0 = 1 | fact n = n * fact (n - 1);
val fact = fn : int -> int
- fact 10;
val it = 3628800 : int
- fact 15;
val it = 1307674368000 : int (* ideally *)


$ sml
Standard ML of New Jersey v110.43.3 [FLINT v1.5], September 26, 2003
- fun fact 0 = 1 : IntInf.int
= | fact n = n * fact (n - 1);
[autoloading]
[autoloading done]
val fact = fn : IntInf.int -> IntInf.int
- fact 15;
val it = 1307674368000 : IntInf.int
-
- 1 * 1.0;
val it = 1.0 : float (* ideally *)


That's not "ideal" at all, to me. I find the automatic conversions in
C a pain in the b*tt because it is not obvious at all where they
happen in a given expression. How hard is it to write

real 1 * 1.0

thereby making things explicit, unambiguous, and non-surprising?

Matthias
Jul 18 '05 #362
Matthew Danish <md*****@andrew.cmu.edu> wrote:
On Sat, Oct 25, 2003 at 12:37:38AM +0000, John Atwood wrote:
Pascal Costanza <co******@web.de> wrote:
>- when a test case gives me an exception, I can inspect the runtime
>environment and analyze how far the test case got, what it already
>successfully did, what is missing, and maybe even why it is missing.
>With a statically typed language, I wouldn't be able to get that far.
>
>Furthermore, when I am still in the exceptional situation, I can change
>variable settings, define a function on the fly, return some value from
>a yet undefined method by hand to see if it can make the rest of the
>code work, and so on.
That's because you're in an interpreted environment, not because you're
using a dynamically typed language. Interpreters for statically typed
languages allow the same.


Wrong on all counts.

* Most Common Lisp environments compile to native code, even when
working interactively.
SBCL, for example, has no interpreter whatsoever. The interpreter is
simulated by calling the compiler and evaluating the resulting
function immediately.


If the code is executed in the environment, and one can execute
arbitrary snippets of code, it's an interpreted environment,
regardless of whether the code executed is native, bytecode,
or other.
* There exists statically typed language implementations which do the
same (SML/NJ)
Yes, these are among those I have in mind when I say "Interpreters for
statically typed languages allow the same."
* The behavior of redefinition in a statically typed environment
is far different from the behavior in a dynamically typed environment.
For one thing, generativity of names kicks in, which makes it
basically impossible to redefine types and functions without
recompiling all uses (and thus restarting your program), in a static
environment.


Yes, and that's a good thing. It prevents the program from getting into an
unreachable/inconsistent state, and secondly, in an FP, especially a pure
FP, with explicit state, one need not run the program from the start to
test whatever code is of interest at the moment, because the state can be
created via test cases.
John
Jul 18 '05 #363
Marcin 'Qrczak' Kowalczyk <qr****@knm.org.pl> writes:
You make some array of doubles, set a[i] = exp(i) (a double) and it works.
Then you set a[i] = 2*i (an integer) and it still works, because the
integer has been converted to double during assignment. An integer can be
used in place of a double with the same value.
Surely in a "proper" staticly typed language, you can't assign an integer
to a variable declared (floating point) double. Shouldn't that be a type
error? (Unless you do an explicit cast, of course, but then the programmer
made the decision, not the language.)
Now in a dynamically typed language the analogous code would set some
array elements to integers without conversion. If division on them means
a different thing, an integer can no longer be used in place of a floating
point number with the same value. And division is the only operation which
breaks this.
Why would division on integers mean something different than division on
(floating point) doubles?
Dynamically typed languages shouldn't use / for both real and truncating
division. For statically typed languages it's only a matter of taste (I
prefer to use different symbols), but for dynamically typed languages it's
important to prevent bugs.


In Lisp, "/" always means the same thing. It's never a truncating operation.

This doesn't seem to be related to static vs. dynamic typing.
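
A small Haskell sketch of the separation Marcin advocates, with
different names for the two operations:

realDiv :: Double
realDiv = 2 / 4      -- (/) requires a Fractional type: 0.5

intDiv :: Integer
intDiv = 2 `div` 4   -- div requires an Integral type: 0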

-- Don
________________________________________________________________________________
Don Geddis http://don.geddis.org/ do*@geddis.org
Football combines two of the worst things about American life.
It is violence punctuated by committee meetings.
-- George Will
Jul 18 '05 #364
Andreas Rossberg <ro******@ps.uni-sb.de> writes:
I should note in this context that static types usually express different
things than dynamic ones, especially when it comes to number types. In Lisp,
the runtime tag of a number will usually describe the representation of the
number. This may well change between operations. But static typing, at least
in high-level languages, is about semantics. If I choose a certain integer
type I do so because it has the desired domain, which I want to have checked
- I'm not at all interested in its representation.
Types don't have to be disjoint. In Lisp, if a data object is a FIXNUM,
at the same time it's also a NUMBER. And perhaps an (INTEGER 0 16) too.

Yes, at least one of the types defines the representation. But there are
semantic types as well.

As to "change between operations": it doesn't matter what your type system is.
Any function call has the potential to "change types". It would be a silly
system that requires the (type) domain and range of every function to always
be identical.
In fact, values of IntInf are likely to have multiple representations
depending on their size, but the type is invariant, abstracting away from
such low-level representation details.
And Lisp's NUMBER type also has multiple representations. And the SQRT
function takes a NUMBER and returns a NUMBER. But also, at the same time,
it takes INTEGERs and returns FLOATs, and takes negative INTEGERs and returns
COMPLEX NUMBERs. Semantics and representation, all at the same time!
Actually, I think dynamic typing should abstract from this as well, but
unfortunately this does not seem to happen.
But it does.
Because the designers decided (rightly so, IMHO) that it is best to avoid
implicit conversions, since they might introduce subtle bugs and does not
coexist well with type inference.


Sounds like a tradeoff. Perhaps, for some programmers, the freedom you get
with implicit type conversions is more valuable than the benefits of type
inference?

-- Don
________________________________________________________________________________
Don Geddis http://don.geddis.org/ do*@geddis.org
Jul 18 '05 #365
On Tue, Oct 28, 2003 at 06:52:08PM +0000, John Atwood wrote:
Matthew Danish <md*****@andrew.cmu.edu> wrote:
* Most Common Lisp environments compile to native code, even when
working interactively.
SBCL, for example, has no interpreter whatsoever. The interpreter is
simulated by calling the compiler and evaluating the resulting
function immediately.


If the code is executed in the environment, and one can execute
arbitrary snippets of code, it's an interpreted environment,
regardless of whether the code executed is native, bytecode,
or other.


So long as you are clear on the meaning. For most people, calling
something an interpreted environment implies the lack of a compiler.
* The behavior of redefinition in a statically typed environment
is far different from the behavior in a dynamically typed environment.
For one thing, generativity of names kicks in, which makes it
basically impossible to redefine types and functions without
recompiling all uses (and thus restarting your program), in a static
environment.


Yes, and that's a good thing. It prevents the program from getting into an
unreachable/inconsistent state, and secondly, in an FP, especially a pure
FP, with explicit state, one need not run the program from the start to
test whatever code is of interest at the moment, because the state can be
created via test cases.


And it also gets in the way of the flexibility traditionally associated
with dynamically typed languages. It gets in the way of development and
debugging as well. And as for pure FP, can you recreate network
connection state at will? So that the other end doesn't even know
something went wrong (besides the delay)?

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #366
On Tue, 28 Oct 2003 08:57:34 -0800, Don Geddis wrote:
Why would division on integers mean something different than division on
(floating point) doubles?


Because C did it this way and many languages are copying its conventions.
And C did it because it used / for integer division before it had floating
point types.

I was responding to "Something that annoys me about many statically-typed
languages is the insistence that arithmetic operations should return the
same type as the operands. 2 / 4 is 1/2, not 0."

While it is true that many statically typed languages do that, it's not
a consequence of static typing (others don't). The only fault of static
typing here is that it makes this choice relatively harmless, compared to
dynamically typed languages where it's a serious problem if this choice
is made.

I was going to respond "making 2/4 equal to 0 is unrelated to static or
dynamic typing", but it *is* related, only in a subtle way.

--
__("< Marcin Kowalczyk
\__/ qr****@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Jul 18 '05 #367
Matthias Blume wrote:
Pascal Costanza <co******@web.de> writes:

Can you give a better example of a program that would be rendered
meaningless without type annotations?

fun f A = "A"
| f B = "B"


I don't find this convincing. This is similar to the 20 * 30 example.

The resolution in both cases would be to define a default meaning if no
explicit type annotation exists. Done.

These examples don't allow me to write one single meaningful program
more than in a dynamically typed language.

My claim that dynamically typed languages have more expressive power
means that they allow you to write more programs that show useful
behavior at runtime. The strongest example I have are programs that
allow arbitrary changes to the definitions that are embedded inside of
them. That's the essence of metacircularity (runtime metaobject
protocols, etc). You cannot statically type check such programs by
definition.

I think that Neelakantan has better examples for programs that are
possible to write with a statically typed language, but not with
dynamically typed ones. (Not 100% sure yet, though.)
Pascal

Jul 18 '05 #368
Stephen J. Bevan wrote:
Pascal Costanza <co******@web.de> writes:
Fergus Henderson wrote:

Dynamic typing works better with XP than static typing because with
dynamic typing you can write unit tests without having the need to
immediately write appropriate target code.

That one seems to have been pretty thoroughly debunked by other
responses
in this thread. A static type system won't stop you writing unit tests.
And if you want to actually run the unit tests, then you are going to
need appropriate target code, regardless of whether the system is
statically or dynamically typed.


Not if I only want to check whether the first ten tests work, and
don't care about the remaining ones.

Perhaps I'm just a low tech kind of guy but if I just want to run the
first ten then I comment out the rest. Even without a fancy IDE that
only take a few key presses.


...and it requires you to go to all the places where they are defined.

Yes, I know the answer: "But they should be all in one place." No, they
shouldn't need to be all in one place. For example, I might want to
place test code close to the definitions that they test. Or I might want
to organize them according to some other criteria.

No, it's not hard to find them all, then. I can use grep or my IDE to
find them. But that's still more work than just commenting them out. If
I seldom need to find all test cases, I can trade locality of all test
cases for some other possible advantages.

Ah, another example: What if my test code is actually produced by some
macro, or some other code generation facility?
Pascal

Jul 18 '05 #369
Pascal Costanza <co******@web.de> writes:
Matthias Blume wrote:
Pascal Costanza <co******@web.de> writes:
Can you give a better example of a program that would be rendered
meaningless without type annotations?

fun f A = "A"
  | f B = "B"


I don't find this convincing. This is similar to the 20 * 30 example.

The resolution in both cases would be to define a default meaning if
no explicit type annotation exists. Done.


Of course, you can always define a default meaning for the case that
no explicit (type) information is available. But what you are doing
is essentially providing such information: no explicit type annotation
amounts to having an implicit annotation.

By the way, this is how, e.g., SML does it anyway: If you write just
20*30 and nothing else is known, then the type is resolved to be
Int.int.

This does not invalidate the claim that you know the semantics of the
phrase only if you know the type. It just so happens that you always
know the type.
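
Haskell resolves the same ambiguity through its defaulting rule; a
minimal sketch:

-- 20 * 30 has the ambiguous type (Num a => a); the standard
-- default (Integer, Double) declaration resolves it to Integer.
main :: IO ()
main = print (20 * 30)  -- prints 600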
I think that Neelakantan has better examples for programs that are
possible to write with a statically typed language, but not with
dynamically typed ones. (Not 100% sure yet, though.)


There are no such programs, obviously. You can always translate a
statically typed program into a dynamically typed one (and vice
versa).

The advantage (as far as I am concerned) of the statically typed
program is in the guarantees that it provides: If I write

fun foo x = ...

and x is declared or inferred to be of type t, then I never again have
to worry about what happens should someone pass a non-t to foo
because that simply cannot happen. This all by itself is useful, but
it gets even more useful if t is an abstract type so that I have full
control over how and where t values are generated.

This sort of thing is most useful when designing libraries because in
this case you don't know yet (who will call foo and how (and you might
in fact never know). But you do know that whoever it is and however he
does it, he must pass a value of type t.
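
A Haskell sketch of that guarantee using an abstract type (module and
names are mine):

module Nat (Nat, mkNat, natValue) where

newtype Nat = Nat Integer  -- data constructor is not exported

mkNat :: Integer -> Maybe Nat  -- the only way to build a Nat
mkNat n | n >= 0    = Just (Nat n)
        | otherwise = Nothing

natValue :: Nat -> Integer
natValue (Nat n) = n

-- Every function taking a Nat may rely on n >= 0 without rechecking.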

Matthias
Jul 18 '05 #370
Andreas Rossberg wrote:
The design of a type system is by no means canonical. In fact, it is
based on set of pragmatic decisions and trade-offs. Idealized, you start
with the trivial type system, which has only one type. Then you refine
it incrementally by distinguishing certain classes of values through
introduction of new types and typing rules. Introduction of typing rules
is based on the following criteria:

- Do they catch a lot of errors?
- Are these errors serious?
- Are they hard to localize otherwise?
- Does the refinement rule out useful constructions?
- Are such constructions common?
- Are they expressible by other means?
- Are the rules intuitive?
- Do they interact nicely with other rules?

And probably more. There are never any definite answers to any of these
questions. The trade-off depends on many factors, such as the problem
domain the language is used for. Usually the line is drawn based on
experience with other languages and known problem domains. In the case
of arbitrary recursive types, experience with languages that allowed
them has clearly shown that it caused much more grief than joy.


Hmm, could a kind of "meta type system protocol" be feasible? I.e., a
language environment in which you could tweak the type system to your
concrete needs, without having to change the language completely?
Pascal

Jul 18 '05 #371
Matthias Blume wrote:
The problem
is that for many algorithms people want to be sure that the compiler
represents their values in machine words. Infinite precision is
needed sometimes, but in the majority of cases it is overkill. If you
need infinite precision, specify the type (IntInf.int in SML's case).
A clever compiler might optimize that like a Lisp compiler does. In
most other cases, why take any chances?


I disagree strongly here. I am convinced that in most algorithms,
machine words don't matter at all. Have you ever seen in books on
algorithms that they actually _need_ to restrict them to values that are
representable in machine word sizes?

"Here is a proof for the correctness of the Quicksort algorithm, but
only for arrays with a maximum length of 65535." Ha! Nonsense! ;-)
Computers are fast enough and have enough memory nowadays. You are
talking about micro efficiency. That's not interesting anymore.
(but I have no problems if we agree to disagree here. we both don't have
the necessary empirical data to back our claims)

Pascal

Jul 18 '05 #372
John Atwood wrote:
* The behavior of redefinition in a statically typed environment
is far different from the behavior in a dynamically typed environment.
For one thing, generativity of names kicks in, which makes it
basically impossible to redefine types and functions without
recompiling all uses (and thus restarting your program), in a static
environment.

Yes, and that's a good thing. It prevents the program from getting into an
unreachable/inconsistent state,


Oh dear, that argument again.

No, to repeat this for the nth time, that's not _generally_ a good
thing. It also prevents the program from getting in a certain class of
states that would still be reachable/consistent. So, in some situations
it might be a _bad_ thing.
Pascal

Jul 18 '05 #373
Pascal Costanza <co******@web.de> writes:
Computers are fast enough and have enough memory nowadays. You are
talking about micro efficiency. That's not interesting anymore.
I have worked on projects where people worried about *every cycle*.
(Most of the time I agree with you, though. Still, using infinite
precision by default is, IMO, a mistake. Having it around and at your
fingertips, though, is nice. That's why I added the long-missing
compiler support for IntInf to SML/NJ recently.)
(but I have no problems if we agree to disagree here.
Good.
we both don't have the necessary empirical data to back our claims)


Speak for yourself.

Matthias
Jul 18 '05 #374
Matthias Blume wrote:
Pascal Costanza <co******@web.de> writes:
I think that Neelakantan has better examples for programs that are
possible to write with a statically typed language, but not with
dynamically typed ones. (Not 100% sure yet, though.)


There are no such programs, obviously. You can always translate a
statically typed program into a dynamically typed one (and vice
versa).


No, for christ's sake! There are dynamically typed programs that you
cannot translate into statically typed ones!

As soon as you add full-fledged runtime metaprogramming to a language,
and write a program that uses it, you cannot statically type check this
anymore, by definition, because you cannot statically determine anymore
in what ways such a program would change during its lifetime!

And don't tell me that "in 99% of all cases, you don't need this". This
just isn't true. And even if it were true, it wouldn't matter!

If there were only _one_ useful program on earth that someone cared
about that made use of runtime metaprogramming, this would make my
statement true that static typing decreases expressive power in possibly
serious ways. And there are definitely lots of them out there!
The advantage (as far as I am concerned) of the statically typed
program is in the guarantees that it provides: If I write

fun foo x = ...

and x is declared or inferred to be of type t, then I never again have
to worry about what happens should someone pass a non-t to foo
because that simply cannot happen. This all by itself is useful, but
it gets even more useful if t is an abstract type so that I have full
control over how and where t values are generated.

This sort of thing is most useful when designing libraries because in
this case you don't know yet (who will call foo and how (and you might
in fact never know). But you do know that whoever it is and however he
does it, he must pass a value of type t.


Yes, these things are all obvious. But these are not examples for an
increase of expressive power! These are only examples of _restricting_
the set of potentially useful programs! How hard is this to understand?

You _want_ this restriction. Why don't you _admit_ that it is a restriction?
Pascal

Jul 18 '05 #375
Matthias Blume wrote:
we both don't have the necessary empirical data to back our claims)

Speak for yourself.


You too.
This:
> I have worked on projects where people worried about *every cycle*.
  ^^^^^^^^
> (Most of the time I agree with you, though. Still, using infinite
> precision by default is, IMO, a mistake. Having it around and at your
  ^^^^^^
> fingertips, though, is nice. That's why I added the long-missing
> compiler support for IntInf to SML/NJ recently.)

is by any scientific standard not enough evidence to back this:
The problem
is that for many algorithms people want to be sure that the compiler
represents their values in machine words. Infinite precision is
needed sometimes, but in the majority of cases it is overkill. If you
need infinite precision, specify the type (IntInf.int in SML's case).
A clever compiler might optimize that like a Lisp compiler does. In
most other cases, why take any chances?

Pascal

Jul 18 '05 #376
Pascal Costanza <co******@web.de> writes:
Matthias Blume wrote:
Pascal Costanza <co******@web.de> writes:
I think that Neelakantan has better examples for programs that are
possible to write with a statically typed language, but not with
dynamically typed ones. (Not 100% sure yet, though.)

There are no such programs, obviously. You can always translate a

statically typed program into a dynamically typed one (and vice
versa).


No, for christ's sake! There are dynamically typed programs that you
cannot translate into statically typed ones!


Yes you can. (In the worst case scenario you lose all the benefits of
static typing. But a translation is *always* possible. After all,
dynamically typed programs are already statically typed in the trivial
"one type fits all" sense.)
Yes, these things are all obvious. But these are not examples for an
increase of expressive power! These are only examples of _restricting_
the set of potentially useful programs! How hard is this to
understand?

You _want_ this restriction. Why don't you _admit_ that it is a restriction?


Who said that I don't admit that I want restrictions? That's the
whole point. Static typing increases my expressive power because I
can now *express restrictions* which I couldn't express before.
That's the whole point. Being able to express more programs is not
the issue, being able to express more restrictions and being told
early about their violations is!

[This whole discussion is entirely due to a mismatch of our notions of
what constitutes expressive power.]

Matthias
Jul 18 '05 #377

Pascal Costanza <co******@web.de> writes:
Matthias Blume wrote:
The problem
is that for many algorithms people want to be sure that the compiler
represents their values in machine words. Infinite precision is
needed sometimes, but in the majority of cases it is overkill. If you
need infinite precision, specify the type (IntInf.int in SML's case).
A clever compiler might optimize that like a Lisp compiler does. In
most other cases, why take any chances?


I disagree strongly here. I am convinced that in most algorithms,
machine words don't matter at all. Have you ever seen in books on
algorithms that they actually _need_ to restrict them to values that
are representable in machine word sizes?


Hmmm.. let's see... AES and MD5, and most if not all DSP/signal-processing
algorithms. There are quite a few.
Jul 18 '05 #378
Matthias Blume wrote:
Pascal Costanza <co******@web.de> writes:

Matthias Blume wrote:

Pascal Costanza <co******@web.de> writes:
I think that Neelakantan has better examples for programs that are
possible to write with a statically typed language, but not with
dynamically typed ones. (Not 100% sure yet, though.)

There are no such programs, obviously. You can always translate a

statically typed program into a dynamically typed one (and vice
versa).


No, for christ's sake! There are dynamically typed programs that you
cannot translate into statically typed ones!

Yes you can. (In the worst case scenario you lose all the benefits of
static typing. But a translation is *always* possible. After all,
dynamically typed programs are already statically typed in the trival
"one type fits all" sense.)


No, that's not all you need to do. Essentially you would need to write a
complete interpreter/compiler for the dynamically typed language on top
of the statically typed one, _if_ you want runtime metaprogramming.
That's not what I would call a straightforward translation.

But this is really getting tiring.
[This whole discussion is entirely due to a mismatch of our notions of
what constitutes expressive power.]


No, it's not. There's a class of programs that exhibit a certain
behavior at runtime that you cannot write in a statically typed language
_directly in the language itself_. There's no program that exhibits a
certain behavior at runtime that you can only write in a statically
typed language. [1]

That's a fact that you simply don't want to admit. But you're
objectively wrong here.

However, the horse is beaten to death by now.
Good bye.

Pascal

[1] Except perhaps for a class of programs that would change their
runtime and/or space complexity, provided they would need lots of
dynamic type checks. But I haven't sorted out yet whether this class
really exists.

Jul 18 '05 #379
Daniel C. Wang wrote:
Pascal Costanza <co******@web.de> writes:
I disagree strongly here. I am convinced that in most algorithms,
machine words don't matter at all. Have you ever seen in books on
algorithms that they actually _need_ to restrict them to values that
are representable in machine word sizes?

Hmmm.. let's see... AES and MD5, and most if not all DSP/signal-processing
algorithms. There are quite a few.


It's true that they are specified such that particularly efficient
implementations exist on machines with the proper word size. This
holds for many cryptographic algorithms.

This doesn't rule out other implementations though.

<Shameless self plug
  <http://www.scheme.dk/md5/md5.html>
/>
--
Jens Axel Søgaard

Jul 18 '05 #380
Pascal Costanza <co******@web.de> writes:
Perhaps I'm just a low tech kind of guy but if I just want to run the
first ten then I comment out the rest. Even without a fancy IDE that
only take a few key presses.
...and it requires you to go to all the places where they are defined.

Yes, I know the answer: "But they should be all in one place." No,
they shouldn't need to be all in one place.


As I wrote, I'm a low tech guy, I put all the tests for a particular
feature in the same file. If I only want to run some of the tests in
the file then I comment out those tests. If I only want to run the
tests in some file rather than others then I comment out the names of
the files containing the tests I don't want to run. I can see how
things can get more complicated if you use other approaches, which is
one of the reasons I don't use those approaches. YMMV.

Ah, another example: What if my test code is actually produced by some
macro, or some other code generation facility?


Er, comment out either the definition of the macro and the calls to it,
or the code generation facility.
Jul 18 '05 #381
Pascal Costanza <co******@web.de> writes:
is that for many algorithms people want to be sure that the compiler
represents their values in machine words. Infinite precision is
needed sometimes, but in the majority of cases it is overkill. If you
need infinite precision, specify the type (IntInf.int in SML's case).
A clever compiler might optimize that like a Lisp compiler does. In
most other cases, why take any chances?


I disagree strongly here. I am convinced that in most algorithms,
machine words don't matter at all. Have you ever seen in books on
algorithms that they actually _need_ to restrict them to values that
are representable in machine word sizes?
[snip]
Computers are fast enough and have enough memory nowadays. You are
talking about micro efficiency. That's not interesting anymore.


Implement something like MD5, SHA-1, AES, ... etc. in your favourite
language and use the fastest compiler available to you to calculate
how many MB/s it can process. If it can get say with a factor of 2 of
C code then IMHO you'll have proved your point. If not, then either
your point stands but your favourite language doesn't have
sufficiently good compilers available yet, or exact sizes are required
in order to get good performance in this case.
Jul 18 '05 #382
Pascal Costanza <co******@web.de> writes:
Matthias Blume wrote:
we both don't have the necessary empirical data to back our claims)

Speak for yourself.


You too.


I am.


This:
> I have worked on projects where people worried about *every cycle*.
  ^^^^^^^^
> (Most of the time I agree with you, though. Still, using infinite
> precision by default is, IMO, a mistake. Having it around and at your
  ^^^^^^
> fingertips, though, is nice. That's why I added the long-missing
> compiler support for IntInf to SML/NJ recently.)


is by any scientifical standard not enough evidence to back this:


It is more than enough evidence that there are programs which require
cycle-level efficiency and therefore cannot afford to use infinite
precision arithmetic.

It is, in my opinion, ridiculous to claim that most programs should
use infinite precision integers in almost all places (i.e., by
default). Infinite precision is rarely necessary and almost always
wasteful overkill.

And in any case, it is also completely beside the point in the
discussion of static checking vs. everything dynamic. (It just came
up as a side discussion after Andreas gave an example of ML code which
exhibits different semantics in different typing contexts -- a
discussion which seems to circle around the extremely questionable
idea that the only correct result of 20 * 30 has to be 600.)

Matthias
Jul 18 '05 #383
Matthias Blume <fi**@my.address.elsewhere> wrote in message news:<m1************@tti5.uchicago.edu>...

Pascal Costanza <co******@web.de> writes:

No, for christ's sake! There are dynamically typed programs that you
cannot translate into statically typed ones!


Yes you can. (In the worst case scenario you lose all the benefits of
static typing. But a translation is *always* possible. After all,
dynamically typed programs are already statically typed in the trivial
"one type fits all" sense.)


This is sophistry at its worst. If you "translate" a dynamically typed
program into a statically typed language by eliminating all the static
type checking, then WTF is the point of the static type checking?

It's also possible to "translate" any program into a turing machine
tape, so we should all start coding that way!

Introducing TuringTape(TM), the ultimate bondage and discipline
language!
Jul 18 '05 #384
Matthias Blume <fi**@my.address.elsewhere> wrote in message news:<m1************@tti5.uchicago.edu>...
[This whole discussion is entirely due to a mismatch of our notions of
what constitutes expressive power.]


No, it is due to your desire to be type constrained inappropriately
early in the development process. Lispers know that early on, you
don't care about type constraints because you haven't settled on your
final data representations yet. So why waste time placating a
type-checking compiler before you have to?

With lisp, you only add as much type checking as you need, *when* you
need it.

With a statically typed language, you have to do the compiler type
check dance from the very first minute, even though you don't need to
solve the type constraint problems yet.
Jul 18 '05 #385
Pascal Costanza <co******@web.de> writes:

[ on my claim that every dynamically typed program has a statically typed
translation: ]
No, that's not all you need to do. Essentially you would need to write
a complete interpreter/compiler for the dynamically typed language on
top of the statically typed one, _if_ you want runtime
metaprogramming. That's not what I would call a straightforward
translation.
Actually, I didn't say what I need, so how can you contradict me on
this point? Anyway, having to have a compiler/interpreter around is
only true if the original program contained a complete
compiler/interpreter, in which case I'd say this is just par for the
course.
[This whole discussion is entirely due to a mismatch of our notions of
what constitutes expressive power.]


No, it's not. There's a class of programs that exhibit a certain
behavior at runtime that you cannot write in a statically typed
language _directly in the language itself_.


This is simply not true. See above.
There's no program that
exhibits a certain behavior at runtime that you can only write in a
statically typed language. [1]
I do not dispute this fact.
[1] Except perhaps for a class of programs that would change their
runtime and/or space complexity, provided they would need lots of
dynamic type checks.
This comment makes me /very/ uneasy. Namely, the funny thing here
is that you seem to question a statement that you make /yourself/ even
though it is undoubtedly true. *Of course* you can write everything
that can be written in a statically typed language in a dynamically
typed language in such a way that runtime behavior is the same
(including complexity). Translating a well-typed ML program into,
say, Lisp is completely straightforward. (Early ML implementations
were done that way.)

The proof of type-safety for the original ML program also shows that
no additional runtime checks are needed for the result of the
translation. You don't even need the runtime checks that Lisp does on
its own because it doesn't know any better. (This is unless you turn
these checks off explicitly -- which in general would make the code
unsafe but in this specific case does not.)

So rest assured that the answer to your question:
But I haven't sorted out yet whether this class
really exists.


is "no". But that was not my point anyway.

Matthias

PS: You don't need to try and convince me of the virtues of dynamic
typing. I *come* from this world -- having implemented at least three
(if not four -- depending on how you count) interpreters/compilers for
dynamically typed languages. Because of this I also already know all
the arguments that you are undoubtedly thinking of bringing forward,
having either used them myself or once marveled at them when I heard
them from other, then like-minded folks. But the long-term effect of
working with and on such systems was that I now prefer to avoid them.
"Run-time metaprogrammming" you say? The emperor has no clothes.
Jul 18 '05 #386
Raffael Cavallaro wrote:
Matthias Blume <fi**@my.address.elsewhere> wrote in message
news:<m1************@tti5.uchicago.edu>...

> > Pascal Costanza <co******@web.de> writes:
> No, for christ's sake! There are dynamically typed programs that you
> cannot translate into statically typed ones!


Yes you can. (In the worst case scenario you lose all the benefits of
static typing. But a translation is *always* possible. After all,
dynamically typed programs are already statically typed in the trivial
"one type fits all" sense.)


This is sophistry at its worst. If you "translate" a dynamically typed
program into a statically typed language by eliminating all the static
type checking, then WTF is the point of the static type checking?


Read what he wrote... in particular the words *WORST CASE SCENARIO*
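
The worst case Matthias mentions is just the standard universal-datatype
embedding. A minimal Haskell sketch (constructor names invented here):
every dynamically typed value lands in one sum type, and every check the
dynamic language performed at runtime reappears as a pattern match:

data Dyn = DInt Int
         | DStr String
         | DFun (Dyn -> Dyn)

apply :: Dyn -> Dyn -> Dyn
apply (DFun f) x = f x
apply _ _ = error "runtime type error: not a function"

The program is then statically typed in exactly the trivial "one type
fits all" sense he describes.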

Regards
--
Adrian Hey
Jul 18 '05 #387
Pascal Costanza <co******@web.de> writes:
Fergus Henderson wrote:
Pascal Costanza <co******@web.de> writes:
What I want is:

testxyz obj = (concretemethod obj == 42)

Does the code compile as long as concretemethod doesn't exist?


No, because the likelihood of that being a typo (e.g. for `concrete_method')
is too high.


This proves that a static type system requires you to write more code
than strictly necessary.


True in the sense that "there exists a static type system that requires ...".
That is true for several static type systems that I know --
but the amount of extra code required is very very small.
E.g. in Haskell it is usually sufficient to just write

concretemethod = error "stub"
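
Spelled out as a runnable sketch (Obj is a placeholder type invented
here; the point is only that the caller typechecks while the stub fails
at runtime if it is ever reached):

data Obj = Obj

concretemethod :: Obj -> Int
concretemethod = error "concretemethod: not yet implemented"

testxyz :: Obj -> Bool
testxyz obj = concretemethod obj == 42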

However, your statement would be false if you tried to generalize it to
all languages and language implementations that have static type systems.
As I said, it would be easy to modify a statically typed system to
optionally allow references to undefined functions, and indeed there
are some systems which do that. (For example I think ObjectCenter,
an interpretive environment for C/C++ programs, did that.)

If a couple of potential users of Mercury were to ask for it, I would
go ahead and add support for this to the Mercury implementation. But
so far, to the best of my knowledge, no Mercury user has ever asked for it.
It would be fairly straightforward to also add support for allowing
code like that to compile even if there was no declaration, but that
does not seem like a good idea to me - it would make it easier for
typos to go unnoticed, with insufficient compensating benefit.


A good development environment gives you immediate feedback on such
kinds of typos. A good compiler for a dynamically type language issues a
warning. So these typos don't go unnoticed.


My experience is that compiler warnings are too easily ignored.

As for error highlighting in IDEs, well... Firstly, as you yourself
mentioned, not everyone wants to use a complicated IDE.

Secondly, even in such an IDE, I think errors could still slip through
unnoticed. For example, consider the following scenario. You might
write a call to a new function, which will get highlighted. But you
expect that, since the function is not yet defined so you ignore the
highlighting. Then you write another call to the function, which also
gets highlighted, and again you ignore it, since you expected that.
Finally you write the definition of the function, and run the program.
The compiler reports a few dozen warnings, which you ignore, since they
all seem to be referring to some undefined stubs in part of the program
that one of your colleagues is responsible for. Then you run a few tests,
which all work, so you check your change in to the shared repository and
go home for the weekend.

Your colleague, who is working all weekend to get things ready for an
important demo, does a cvs update and incorporates your change. But when
he runs _his_ test case, it now crashes with an error about
"undefined function 'tingamajig_handler'"! He greps the source
for "tingamajig", but the only occurrence is the one single place
where it is called, which happens to have absolutely no comments about
what it is supposed to do. In desperation, he tries calling you,
but your mobile phone's battery has run flat. He tries to implement
"tingamajig_handler" himself, but has no idea of what invariants it
is supposed to enforce, and eventually gives up. The demo on Monday
morning is a complete failure.

On Monday afternoon, he finally catches up with you, and tells you of his
woes. You see immediately that the problem was just a simple typo --
the function was named "thingamajig_handler", not "tingamajig_handler".
A far-fetched hypothetical? Possibly. But if you tell us that
"typos don't go unnoticed", please forgive me if I am a little skeptical ;-)

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #388
Matthias Blume <fi**@my.address.elsewhere> writes:
- 1 * 1.0;
val it = 1.0 : float (* ideally *)
That's not "ideal" at all, to me. I find the automatic conversions in
C a pain in the b*tt


You're making the mistake of comparing to C :-) And it's much worse
in C++, when conversions in conjunction with overloading can impact
which function gets called in non-obvious ways.

Anyway, I think Haskell makes this work better. Still no automatic
conversions, but more flexibility in the use of numeric literals.
Occasionally, you need to sprinkle "fromIntegral"s around in
expressions, which I agree aren't terribly pretty.
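
For instance (a small sketch): Haskell's numeric literals are
overloaded, so constant expressions like the one above typecheck at any
numeric type, but a variable of a fixed integral type needs an explicit
conversion:

x :: Double
x = 1 * 1.0                -- fine: the literal 1 is read at type Double

n :: Int
n = 3

y :: Double
y = fromIntegral n * 1.0   -- n itself must be converted explicitly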

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
Jul 18 '05 #389
On Wed, Oct 29, 2003 at 04:20:01AM +0000, Matthias Blume wrote:
It is more than enough evidence that there are programs which require
cycle-level efficiency and therefore cannot afford to use infinite
precision arithmetic.
Said programs will then use specialized data types.
It is, in my opinion, ridiculous to claim that most programs should
use infinite precision integers in almost all places (i.e., by
default). Infinite precision is rarely necessary and almost always
wasteful overkill.
How is it wasteful? Numbers can still be represented by the most
efficient representation available, while retaining infinite precision
when operated upon. Do you realize that Common Lisp implementations
represent integers in the range of a fixnum as machine words? And
automatically convert them to bignum objects when they overflow?
Declarations can take this further, such that a compiler as smart as
CMUCL can manipulate raw (unsigned-byte 32) values, for example.

Are the vast majority of your programs the type which behave properly
within machine-word integers? Do they take into account the possibility
of overflow? Should someone have to worry about issues like overflow
when they just want to write a simple arithmetic program, in a high
level language?
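
In Haskell terms (a small sketch): Int is the machine-word type, Integer
the exact one, and only the latter spares the programmer that worry:

factInt :: Int -> Int              -- machine words: silently wraps
factInt n = product [1..n]

factInteger :: Integer -> Integer  -- arbitrary precision: always exact
factInteger n = product [1..n]

-- factInt 25 overflows a machine-word Int and wraps;
-- factInteger 25 is 15511210043330985984000000.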
idea that the only correct result of 20 * 30 has to be 600.)


(20 * 30) mod 256 is, of course, a completely different expression.

--
; Matthew Danish <md*****@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
Jul 18 '05 #390
Stephen J. Bevan wrote:
Pascal Costanza <co******@web.de> writes:
Perhaps I'm just a low tech kind of guy but if I just want to run the
first ten then I comment out the rest. Even without a fancy IDE that
only take a few key presses.


...and it requires you to go to all the places where they are defined.

Yes, I know the answer: "But they should be all in one place." No,
they shouldn't need to be all in one place.

As I wrote, I'm a low tech guy, I put all the tests for a particular
feature in the same file. If I only want to run some of the tests in
the file then I comment out those tests. If I only want to run the
tests in some file rather than others then I comment out the names of
the files containing the tests I don't want to run. I can see how
things can get more complicated if you use other approaches, which is
one of the reasons I don't use those approaches. YMMV.
Ah, another example: What if my test code is actually produced by some
macro, or some other code generation facility?

Er, comment out either the definition of the macro and the calls to it,
or the code generation facility.


These are both all-or-nothing solutions.

+ "all the tests for a particular feature in one place" - maybe that's
not what I want (and you have ignored my arguments in this regard)

and:
+ what if I want to run _some_ of the tests that my macro produces but
not _all_ of them?
Actually, that seems to be the typical reaction of static typing fans.
This reminds me of a joke.

Imagine yourself back in the 1980's. A Soviet general says: "We can
travel anywhere we want." Question: "What about
Great Britain, Italy, France, the US?" "We don't want to travel there."
Pascal

Jul 18 '05 #391
Pascal Costanza <co******@web.de> writes:
You can't have metacircularity in a statically type language.


Could you explain exactly what you mean by "metacircularity"?

Anyway, I'm skeptical of this claim. At very least, it should be possible
to have a language which is mostly statically typed (i.e. statically
typed by default), even if on some occasions you have to fall back to
dynamic typing.

Whether or not any existing statically typed language implementations
support this sort of thing is another question...

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #392
Stephen J. Bevan wrote:
Pascal Costanza <co******@web.de> writes:
is that for many algorithms people want to be sure that the compiler
represents their values in machine words. Infinite precision is
needed sometimes, but in the majority of cases it is overkill. If you
need infinite precision, specify the type (IntInf.int in SML's case).
A clever compiler might optimize that like a Lisp compiler does. In
most other cases, why take any chances?


I disagree strongly here. I am convinced that in most algorithms,
machine words don't matter at all. Have you ever seen in books on
algorithms that they actually _need_ to restrict them to values that
are representable in machine word sizes?
[snip]
Computers are fast enough and have enough memory nowadays. You are
talking about micro efficiency. That's not interesting anymore.

Implement something like MD5, SHA-1, AES, ... etc. in your favourite
language and use the fastest compiler available to you to calculate
how many MB/s it can process. If it can get, say, within a factor of 2
of C code then IMHO you'll have proved your point. If not, then either
your point stands but your favourite language doesn't have
sufficiently good compilers available yet, or exact sizes are required
in order to get good performance in this case.


Are these algorithms reason enough to have machine word sized numerical
data types as the default for a _general purpose_ language?
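
Arguably they only justify having such types available, not making them
the default. A Haskell sketch of an invented SHA-style mixing step (not
any real cipher's round function): the algorithm opts into Word32, and
all of its arithmetic is modulo 2^32 by construction, while exact
integers remain the safe default elsewhere:

import Data.Bits (rotateL, xor)
import Data.Word (Word32)

mix :: Word32 -> Word32 -> Word32
mix a b = (rotateL a 5 + b) `xor` a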
Pascal

Jul 18 '05 #393
Pascal Costanza <co******@web.de> writes:
Can you show me an example of a program that does't make sense anymore
when you strip off the static type information?


Here's a C++ example:

x << y

Depending on the types of x and y, this might mean left shift, I/O,
or something entirely different.

Here's another C++ example:

#include "foo.h"
int main() {
auto Foo x; // constructor has side-effects
}

If you strip away the static type information here, i.e. change "auto Foo x;"
to just "auto x;", then there's no way to know which constructor to call!

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #394
Pascal Costanza <co******@web.de> writes:
Fergus Henderson wrote:
Pascal Costanza <co******@web.de> writes:
Well, the research that ultimately lead to the HotSpot Virtual Machine
originated in virtual machines for Smalltalk and for Self. Especially
Self is an "extremely" dynamic language, but they still managed to make
it execute reasonably fast.


Please correct me if I'm wrong, but as I understand it, iterating over a
collection of values is still going to require keeping some representation
of the type of each element around at runtime, and testing the type for
each element accessed, in case it is not the expected type. AFAIK neither
HotSpot nor the Self compiler do the kind of optimizations which would
be needed to avoid that.


You don't need to check the type on each access. If you only copy a
value from one place to another, and both places are untyped, you don't
need any check at all.


Great. I feel so much better now. Now my type errors are free to
propagate throughout my program's data structures, so that when they
are finally detected, it may be far from the true source of the problem.

But the example that I was thinking of did actually want to access the
value, not just copy it.
Furthermore, if I remember correctly, dynamically compiled systems use
type inferencing at runtime to reduce the number of type checks.


In cases such as the one described above, they may reduce the number of
times that the type of the _collection_ is checked, but they won't be
able to avoid checking the element type at every element access.
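
To make that concrete, here is a sketch using Haskell's Data.Dynamic:
with heterogeneous elements every access pays a runtime type test, while
the statically typed list needs neither tags nor tests:

import Data.Dynamic (Dynamic, fromDynamic, toDyn)
import Data.Maybe (mapMaybe)

dynList :: [Dynamic]          -- element types known only at runtime
dynList = [toDyn (1 :: Int), toDyn "two", toDyn (3 :: Int)]

sumDyn :: [Dynamic] -> Int    -- fromDynamic tests each element's type
sumDyn = sum . mapMaybe fromDynamic

intList :: [Int]              -- no tags, no tests
intList = [1, 2, 3]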

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #395
Pascal Costanza <co******@web.de> writes:
Hmm, could a kind of "meta type system protocol" be feasible? I.e., a
language environment in which you could tweak the type system to your
concrete needs, without having to change the language completely?


Yes. But that is still a research issue at this stage.
For some work in this area, see the following references:

[1] Martin Sulzmann, "A General Type Inference Framework for
Hindley/Milner Style Systems." In 5th International Symposium
on Functional and Logic Programming (FLOPS), Tokyo, Japan,
March 2001.

[2] Sandra Alves and Mario Florido, "Type Inference using
Constraint Handling Rules", Electronic Notes in Theoretical
Computer Science volume 64, 2002.

[3] Kevin Glynn, Martin Sulzmann, Peter J. Stuckey,
"Type Classes and Constraint Handling Rules",
University of Melbourne Department of Computer Science
and Software Engineering Technical Report 2000/7, June 2000.

Also the work on dependent type systems, e.g. the language Cayenne,
could be considered to fit in this category.

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #396
Pascal Costanza wrote:

Hmm, could a kind of "meta type system protocol" be feasible? I.e., a
language environment in which you could tweak the type system to your
concrete needs, without having to change the language completely?


Well, such "protocols" usually come in the form of compiler switches or
pragmas. ;-)
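
For instance, GHC exposes knobs of both kinds (a sketch; the flag shown
comes from GHC versions later than this thread):

{-# LANGUAGE ExtendedDefaultRules #-}  -- per-module pragma relaxing the defaulting rules

-- or globally:  ghc -fdefer-type-errors Main.hs
-- which demotes static type errors to runtime errors, expression by expression

main :: IO ()
main = print ()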

- Andreas

--
Andreas Rossberg, ro******@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.

Jul 18 '05 #397
Raffael Cavallaro wrote:
Matthias Blume <fi**@my.address.elsewhere> wrote in message news:<m1************@tti5.uchicago.edu>...
[This whole discussion is entirely due to a mismatch of our notions of
what constitutes expressive power.]
No, it is due to your desire to be type constrained inappropriately
early in the development process.


Oh my. How is this related?

I think Matthias is absolutely right. The mismatch here is that some
fail to understand that - obviously, one should hope - the ability to
express restrictions is an ability to express something, i.e. expressive
power. Otherwise assertions, pre/post conditions, probably exceptions
and similar stuff wouldn't carry any expressive power either. Which of
course is nonsense.
Lispers know that early on, you
don't care about type constraints because you haven't settled on your
final data representations yet.
Another repeated misunderstanding. When I use types in early coding
phases in ML for example, these types are mostly abstract. They don't
say anything about representations. All checking takes place against
abstract types, whose representation is fully exchangeable.
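
For example, here is the idea sketched in Haskell (invented names; in ML
it would be a signature with an abstract type): clients typecheck
against the interface only, so the representation behind it can change
at any time:

module Account (Account, empty, deposit, balance) where

newtype Account = Account Integer   -- today's representation: a bare balance

empty :: Account
empty = Account 0

deposit :: Integer -> Account -> Account
deposit n (Account b) = Account (b + n)

balance :: Account -> Integer
balance (Account b) = b

-- Swapping Integer for, say, a transaction log changes nothing for callers.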
With lisp, you only add as much type checking as you need, *when* you
need it.


Yes, and you also lose most of the benefits of typing...

--
Andreas Rossberg, ro******@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.

Jul 18 '05 #398
Matthew Danish <md*****@andrew.cmu.edu> writes:
Declarations can take this further, such that a compiler as smart as
CMUCL can manipulate raw (unsigned-byte 32) values, for example.
Oh, so you're saying you want static declarations, checked and
enforced by the compiler? Hmm, I've read this point of view in this
thread somewhere.
Are the vast majority of your programs the type which behave properly
within machine-word integers?

idea that the only correct result of 20 * 30 has to be 600.)


(20 * 30) mod 256 is, of course, a completely different expression.


Undoubtedly, it is a different expression. But it might mean the
same, given a correspondingly chosen domain for 20 and 30, together
with a certain * operation.
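
In Haskell, for instance, choosing the domain Word8 gives the literals
and the * exactly that reading (a one-line sketch):

import Data.Word (Word8)

n :: Word8
n = 20 * 30   -- 88, i.e. 600 mod 256: same expression, different domain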

Matthias
Jul 18 '05 #399
Matthias Blume wrote:
No, it's not. There's a class of programs that exhibit a certain
behavior at runtime that you cannot write in a statically typed
language _directly in the language itself_.
This is simply not true. See above.


OK, let's try to distill this to some simple questions.

Assume you have a compiler ML->CL that translates an arbitrary ML
program with a main function into Common Lisp. The main function is a
distinguished function that starts the program (similar to main in C).
The result is a Common Lisp program that behaves exactly like its ML
counterpart, including the fact that it doesn't throw any type errors at
runtime.

Assume furthermore that ML->CL retains the explicit type annotations in
the result of the translation in the form of comments, so that another
compiler CL->ML can fully reconstruct the original ML program without
manual help.

Now we can modify the result of ML->CL for any ML program as follows. We
add a new function that is defined as follows:

(defun new-main ()
  ;; a bare read-eval-print loop, built from what the language offers by default
  (loop (print (eval (read)))))

(We assume that NEW-MAIN is a name that isn't defined in the rest of the
original program. Otherwise, it's easy to automatically generate a
different unique name.)

Note that we haven't written an interpreter/compiler by ourselves here,
we just use what the language offers by default.

Furthermore, we add the following to the program: We write a function
RUN (again a unique name) that spawns two threads. The first thread
starts the original main function, the second thread opens a console
window and starts NEW-MAIN.

Now, RUN is a function that executes the original ML program (as
translated by ML->CL, with the same semantics, including the fact that
it doesn't throw any runtime type errors in its form as generated by
ML->CL), but furthermore executes a read-eval-print-loop that allows
modification of the internals of that original program in arbitrary
ways. For example, the console allows you to use DEFUN to redefine an
arbitrary function of the original program that runs in the first
thread, so that the original definition is not visible anymore and all
calls to the original definition within the first thread use the new
definition after the redefinition is completed. [1]
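
(To see what CL->ML is up against, consider a minimal Haskell sketch,
with invented names, of the machinery a statically typed rendering
needs: every redefinable function compiles to a mutable cell that calls
are routed through, and every redefinition must still fit the statically
fixed type of that cell, or everything collapses into one universal
type.)

import Data.IORef (newIORef, readIORef, writeIORef)

main :: IO ()
main = do
  f <- newIORef ((\x -> x + 1) :: Int -> Int)   -- the mutable "function cell"
  g1 <- readIORef f
  print (g1 10)                  -- 11
  writeIORef f (\x -> x * 2)     -- a DEFUN issued at runtime
  g2 <- readIORef f
  print (g2 10)                  -- 20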

Now here come the questions.

Is it possible to modify CL->ML in a way that any program originally
written in ML, translated with ML->CL, and then modified as sketched
above (including NEW-MAIN and RUN) can be translated back to ML? For the
sake of simplicity we can assume an implementation of ML that already
offers multithreading. Again, for the sake of simplicity, it's
acceptable that the result of CL->ML accepts ML as an input language for
the read-eval-print-loop in RUN instead of Common Lisp. The important
thing here is that redefinitions issued in the second thread should
affect the internals of the program running in the first thread, as
described above.

To ask the question in more detail:

a) Is it possible to write CL->ML in a way that the result is both still
statically type checkable and not considerably larger than the original
program that was given to ML->CL? Especially, is it possible to do this
without implementing a new interpreter/compiler on top of ML and then
letting the program run in that language, but keep the program
executable in ML itself?

(So, ideally, it should be roughly only as many lines longer as the
modified CL version is compared to the unmodified CL version.)

b) Otherwise, is there a way to extend ML's type system in a way that it
is still statically checkable and can still handle such programs?

c) If you respond with yes to either a or b, what does your sketch of an
informal proof in your head look like that convincingly shows that this
can actually work?

d) If you respond with no to both a or b, would you still disagree with
the assessment that there is a class of programs that can be implemented with
dynamically typed languages but without statically typed ones? If so, why?
[1] Except perhaps for a class of programs that would change their
runtime and/or space complexity, provided they would need lots of
dynamic type checks.

This comment makes me /very/ uneasy. Namely, the funny thing here
is that you seem to question a statement that you make /yourself/ even
though it is undoubtedly true. *Of course* you can write everything
that can be written in a statically typed language in a dynamically
typed language in such a way that runtime behavior is the same
(including complexity). Translating a well-typed ML program into,
say, Lisp is completely straightforward. (Early ML implementations
were done that way.)


Well, nothing in this thread makes me uneasy at all. I am not trying to
defend dynamic typing out of pure personal preference. I want to find
out if we can objectively classify the programs that are expressible in
either statically typed languages or dynamically typed languages. Until
recently, I have thought that the class of programs you can write with a
dynamically typed language is a strict superset of the programs you can
write with a statically typed language. It is especially the class of
programs that allows full-fledged runtime metaprogramming, as sketched
above, that statically typed languages cannot implement by definition
without resorting to implementing a full interpreter/compiler for a
dynamically typed language.

If it turns out that statically typed languages can indeed express a
class of programs that exhibit a certain behavior that you cannot write
in a dynamically typed language without implementing a full
interpreter/compiler for a statically typed language on top, this
wouldn't make me feel "uneasy". To the contrary, I would be happy
because I would finally understand what all the fuss about static typing
is about.

If it is your only concern that you defend your pet programming style,
well, that's not my problem. I am interested in insights.
PS: You don't need to try and convince me of the virtues of dynamic
typing. I *come* from this world -- having implemented at least three
(if not four -- depending on how you count) interpreters/compilers for
dynamically typed languages. Because of this I also already know all
the arguments that you are undoubtedly thinking of bringing forward,
having either used them myself or once marveled at them when I heard
them from other, then like-minded folks. But the long-term effect of
working with and on such systems was that I now prefer to avoid them.
"Run-time metaprogrammming" you say? The emperor has no clothes.


I don't care whether you have made bad experiences in this regard or
not, to more or less the same degree that you probably don't care
whether I have made bad experiences with static type systems. (And
that's fine.)

I am only asking questions that we can objectively answer. Can static
typing and runtime metaprogramming be reconciled, yes or no?
Pascal

[1] Yes, with all the dangers that this may incur! There are ways to
handle the potential dangers of such an approach. That's what dynamic
metaobject protocols are designed for. However, this doesn't matter for
the sake of this discussion.

--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jul 18 '05 #400
