What is Expressiveness in a Computer Language

In March, I posted an essay, “What is Expressiveness in a Computer
Language”, archived at:
http://xahlee.org/perl-python/what_i...esiveness.html

I was informed then that there is an academic paper written on this
subject.

On the Expressive Power of Programming Languages, by Matthias
Felleisen, 1990.
http://www.ccs.neu.edu/home/cobbe/pl...ive-slides.pdf

Has anyone read this paper? And, would anyone be interested in giving a
summary?

thanks.

Xah
xa*@xahlee.org
http://xahlee.org/

Jun 9 '06
669 replies, 25338 views
Darren New wrote:
Marshall wrote:
Also: has subtyping polymorphism or not, has parametric polymorphism or
not.


And covariant or contravariant.


That's actually not a design choice - if you wish to have a sound type
system, all input parameters *must* be contravariant, all output
parameters *must* be covariant, and all in/out parameters must be
invariant. (Eiffel got this one wrong in almost all cases.)
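
To make this concrete: Haskell has no subtype polymorphism, but the same variance law governs its function arrow, which is contravariant in the argument and covariant in the result. A minimal sketch, assuming the Contravariant class from recent versions of base (or the contravariant package); Pred is a stand-in type defined locally, not a library type:

import Data.Functor.Contravariant (Contravariant (contramap))

-- The function arrow is covariant in its output: given b -> b', we can
-- turn (a -> b) into (a -> b') by post-composition...
mapOutput :: (b -> b') -> (a -> b) -> (a -> b')
mapOutput f g = f . g

-- ...and contravariant in its input: given a' -> a, we can turn
-- (a -> b) into (a' -> b) by pre-composition.
newtype Pred a = Pred { runPred :: a -> Bool }

instance Contravariant Pred where
  contramap f (Pred p) = Pred (p . f)

-- A predicate on Int reused on String, going *against* the direction
-- of the supplied function (length :: String -> Int).
nonEmpty :: Pred String
nonEmpty = contramap length (Pred (> 0))

main :: IO ()
main = print (runPred nonEmpty "hello")  -- True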

Regards,
Jo
Jun 26 '06 #451
Anton van Straaten wrote:
Joachim Durchholz wrote:
Anton van Straaten wrote:
There's a close connection between latent types in the sense I've
described, and the "tagged values" present at runtime. However, as
type theorists will tell you, the tags used to tag values at runtime,
as e.g. a number or a string or a FooBar object, are not the same
thing as the sort of types which statically-typed languages have.
Would that be a profound difference, or is it just that annotating a
value with a full type expression would cause just too much runtime
overhead?


It's a profound difference. The issue is that it's not just the values
that need to be annotated with types, it's also other program terms.


Yes - but isn't that essentially just auxiliary data from and for the
data-flow analysis that tracks what values with what types might reach
which functions?
In
addition, during a single run of a program, all it can ever normally do
is record the types seen on the path followed during that run, which
doesn't get you to static types of terms. To figure out the static
types, you really need to do static analysis.


Agreed.

Regards,
Jo
Jun 26 '06 #452
I wrote:
These informal systems, which may not prove what they claim to prove
are my concept of a "type system".
Chris Smith <cd*****@twu.net> replied:

Okay, that works. I'm not sure where it gets us, though....


Ok, we'll get there. First, we need to step back in time, to when there
was roughly algol, cobol, fortran, and lisp. Back then, at least as I
understood things, there were only a few types that people generally
understood: integer, real, and (perhaps) pointer. Now, with algol or
fortran, things were generally only of the first two types and
variables were statically declared to be one or the other. With lisp,
any cell could hold any one of the 3, and some clever implementor
added "tag bits" to distinguish which the cell held. As I understood
it, the tag bits weren't originally for type correctness so much as
to facilitate garbage collection. The garbage collector didn't
know what type of data you put in a cell and had to determine which
cells contained pointers, so that the mark-and-sweep algorithms could
sweep by following the pointers (and not following the integers that
looked like pointers). Still, these tag bits did preserve the
"dynamic" type, in the sense that we knew types from the other
languages. As I remember it, sophistication with types really didn't
occur as a phenomenon for the general programmer until the introduction
of Pascal (and to a lesser extent PL/I). Moreover, as I recall it
(perhaps because I didn't learn the appropriate dialect), lisp
dialects kept to the more general notion of type (lists and tuples)
and eschewed low-level bit-fiddling where we might want to talk about
a 4 bit integer representing 0 .. 15 or -8 .. 7.

The important thing is the dynamism of lisp allowed one to write
polymorphic programs, before most of us knew the term. However, we
still had a notion of type: integers and floating point numbers were
still different, and one could not portably use integer operations on
floating point values or vice versa. However, one could check the
tag and "do the right thing", and many lisp primitives did just that,
making them polymorphic.

The way most of us conceived (or were taught to conceive) of the
situation was that there still were types; they were just associated
with values, and the type laws we all knew and loved still worked,
just dynamically, upon the types of the operand values that were
presented at the time.

Can this be made rigorous? Perhaps.

Instead of viewing the text of the program statically, let us view it
dynamically, that is, by introducing a time (or execution) dimension.
This is easier if you take an imperative view of the dynamic program
and imagine things having an operational semantics, which is why we
stepped back in time in the first place, so that we are in a world
where imperative programming is the default model for most
programmers.

Thus, as we traverse a list, the first element might be an integer,
the second a floating point value, the third a sub-list, the fourth
and fifth, two more integers, and so on. If you look statically at
the head of the list, we have a very wide union of types going by.
However, perhaps there is a mapping going on that can be discerned.
For example, perhaps the list has 3 distinct types of elements
(integers, floating points, and sub-lists) and it circles through the
types in the order, first having one of each type, then two of each
type, then four of each type, then eight, and so on. The world is now
patterned.

However, I don't know how to write a static type annotation that
describes exactly that type. That may just be my lack of experience
in writing annotations. However, it's well within my grasp to imagine
the appropriate type structure, even though **I** can't describe it
formally. More importantly, I can easily write something which checks
the tags and sees whether the data corresponds to my model.
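
For illustration, here is a sketch of such a tag check, with the lisp tag bits modelled as a Haskell sum type. The Tagged type, checkModel, and the exact block layout (n integers, then n floats, then n sub-lists, with n doubling each round) are my reading of the pattern above, not anything from an actual lisp:

-- Runtime-tagged values: the constructor plays the role of the tag bits.
data Tagged = I Integer | F Double | L [Tagged]

-- Check the pattern described above: n integers, then n floats, then
-- n sub-lists, with n doubling on each round (1, 2, 4, 8, ...).
checkModel :: [Tagged] -> Bool
checkModel = go 1
  where
    go _ [] = True
    go n xs =
      let (block, rest) = splitAt (3 * n) xs
          (is, fl)      = splitAt n block
          (fs, ls)      = splitAt n fl
      in  length block == 3 * n
          && all isI is && all isF fs && all isL ls
          && go (2 * n) rest
    isI (I _) = True; isI _ = False
    isF (F _) = True; isF _ = False
    isL (L _) = True; isL _ = False

main :: IO ()
main = print (checkModel
  [I 1, F 2.0, L [], I 3, I 4, F 5.0, F 6.0, L [], L []])  -- True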

And, this brings us to the point: if the data that my program
encounters doesn't match the model I have formulated, then something
is of the wrong "type". Equally importantly, the dynamic tags have
helped me discover that type error.

Now, the example may have seemed arbitrary to you, and it was in some
sense arbitrary. However, if one imagines a complete binary tree with
three kinds of elements stored in memory as rows per depth, one can get
exactly the structure I described. And, if one were rewriting that
unusual representation to a more normal one, one might want exactly
the "type check" I proposed to validate that the input binary tree was
actually well formed.

Does this help explain dynamic typing as a form of typing?

-Chris
Jun 26 '06 #453
Dirk Thierbach wrote:
David Hopwood <da******************@blueyonder.co.uk> wrote:
Marshall wrote:
David Hopwood wrote:

A type system that required an annotation on all subprograms that
do not provably terminate, OTOH, would not impact expressiveness
at all, and would be very useful.

Interesting. I have always imagined doing this by allowing an
annotation on all subprograms that *do* provably terminate.
Maybe the paper "Linear types and non-size-increasing polynomial time
computation" by Martin Hofmann is interesting in this respect.
<http://citeseer.ist.psu.edu/hofmann98linear.html>
From the abstract:

We propose a linear type system with recursion operators for inductive
datatypes which ensures that all definable functions are polynomial
time computable. The system improves upon previous such systems in
that recursive definitions can be arbitrarily nested; in particular,
no predicativity or modality restrictions are made.

It does not only ensure termination, but termination in polynomial time,
so you can use those types to carry information about this as well.


That's interesting, but linear typing imposes some quite severe
restrictions on programming style. From the example of 'h' on page 2,
it's clear that the reason for the linearity restriction is just to
ensure polynomial-time termination, and is not needed if we only want
to prove termination.
If the annotation marks not-provably-terminating subprograms, then it
calls attention to those subprograms. This is what we want, since it is
less safe/correct to use a nonterminating subprogram than a terminating
one (in some contexts).


That would be certainly nice, but it may be almost impossible to do in
practice. It's already hard enough to guarantee termination with the
extra information present in the type annotation. If this information
is not present, then the language probably has to be restricted so
severely to ensure termination that it is more or less useless.


I think you're overestimating the difficulty of the problem. The fact
that *in general* it can be arbitrarily hard to prove termination, can
obscure the fact that for large parts of a typical program, proving
termination is usually trivial.

I just did a manual termination analysis for an 11,000-line multithreaded
C program; it's part of a control system for some printer hardware. This
program has:

- 212 functions that trivially terminate. By this I mean that the function
only contains loops that have easily recognized integer loop variants,
and only calls other functions that are known to trivially terminate,
without use of recursion. A compiler could easily prove these functions
terminating without need for any annotations.

- 8 functions that implement non-terminating event loops.

- 23 functions that intentionally block. All of these should terminate
(because of timeouts, or because another thread should do something that
releases the block), but this cannot be easily proven with the kind of
analysis we're considering here.

- 3 functions that read from a stdio stream into a fixed-length buffer. This
is guaranteed to terminate when reading a normal file, but not when reading
from a pipe. In fact the code never reads from a pipe, but that is not
locally provable.

- 2 functions that iterate over a linked list of network adapters returned
by the Win32 GetAdaptersInfo API. (This structure could be recognized as
finite if we were calling a standard library abstraction over the OS API,
but we are relying on Windows not to return a cyclic list.)

- 1 function that iterates over files in a directory. (Ditto.)

- 2 functions that iterate over a queue, implemented as a linked list. The queue
is finite, but this is not locally provable. Possibly an artifact of the lack
of abstraction from using C, where the loop is not easily recognizable as an
iteration over a finite data structure.

- 1 function with a loop that terminates for a non-trivial reason that involves
some integer arithmetic, and is probably not expressible in a type system.
The code already had a comment informally justifying why the loop terminates,
and noting a constraint needed to avoid an arithmetic overflow.
(Bignum arithmetic would avoid the need for that constraint.)

- 2 functions implementing a simple parsing loop, which terminates because the
position of the current token is guaranteed to advance -- but this probably
would not be provable given how the loop is currently written. A straightforward
change (to make the 'next token' function return the number of characters to
advance, asserted to be > 0, instead of a pointer to the next token) would make
the code slightly clearer, and trivially terminating. (A sketch follows this list.)
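
A hypothetical sketch of that change, in Haskell rather than the original C, with a comma-separated tokenizer standing in for the real one: nextToken reports how many characters it consumed, guaranteed to be at least 1, so the length of the remaining input is an obvious, decreasing loop variant.

-- nextToken returns the token and the number of characters consumed;
-- the 'max 1' plays the role of the "asserted to be > 0" above.
nextToken :: String -> (String, Int)
nextToken s =
  let tok = takeWhile (/= ',') s
  in  (tok, max 1 (length tok + 1))  -- +1 for the delimiter; never 0

parse :: String -> [String]
parse [] = []
parse s  = let (tok, n) = nextToken s
           in  tok : parse (drop n s)  -- n >= 1: trivially terminating

main :: IO ()
main = print (parse "ab,c,,d")  -- ["ab","c","","d"]
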
So for this program, out of 254 functions:

- 83.5% are trivially terminating already.

- 12.2% would need annotations saying that they are intended to block or not
to terminate. This is largely an artifact -- we could remove the need for
*all* of these annotations by adopting an event-loop concurrency model, and
only attempting to prove termination of functions within a "turn" (one
iteration of each top-level loop).

- 2.4% would need annotations saying that termination is too hard to prove --
but only because they interact with the operating system.
This is obviously not a type system problem; fixing it requires careful
design of the language's standard library.

- 0.8% would need annotations only in C; in a language with standard libraries
that provide data structures that are guaranteed to be finite, they probably
would not.

- 1.2% would need changes to the code, or are genuinely too difficult to prove
to be terminating using a type system.
This is obviously not a large program (although I wouldn't want to do a manual
analysis like this for a much larger one), and it may be atypical in that it has
almost no use of recursion (or maybe that *is* typical for C programs). It would
be interesting to do the same kind of analysis for a functional program, or one
that does a more "algorithmically intensive" task.

--
David Hopwood <da******************@blueyonder.co.uk>
Jun 26 '06 #454
Pascal Costanza wrote:
David Hopwood wrote:
Pascal Costanza wrote:
Chris Smith wrote:

While this effort to salvage the term "type error" in dynamic
languages is interesting, I fear it will fail. Either we'll all have
to admit that "type" in the dynamic sense is a psychological concept
with no precise technical definition (as was at least hinted by
Anton's post earlier, whether intentionally or not) or someone is
going to have to propose a technical meaning that makes sense,
independently of what is meant by "type" in a static system.

What about this: You get a type error when the program attempts to
invoke an operation on values that are not appropriate for this
operation.

Examples: adding numbers to strings; determining the string-length of a
number; applying a function on the wrong number of parameters; applying
a non-function; accessing an array with out-of-bound indexes; etc.
This makes essentially all run-time errors (including assertion failures,
etc.) "type errors". It is neither consistent with, nor any improvement
on, the existing vaguely defined usage.


Nope. This is again a matter of getting the levels right.

Consider division by zero: appropriate arguments for division are
numbers, including the zero. The dynamic type check will typically not
check whether the second argument is zero, but will count on the fact
that the processor will raise an exception one level deeper.


That is an implementation detail. I don't see its relevance to the above
argument at all.
This is maybe better understandable in user-level code. Consider the
following class definition:

class Person {
  String name;
  int age;

  void buyPorn() {
    if (this.age < 18) throw new AgeRestrictionException();
    ...
  }
}

The message p.buyPorn() is a perfectly valid message send and will pass
both static and dynamic type tests (given p is of type Person in the
static case). However, you will still get a runtime error.


Your definition above was "You get a type error when the program attempts
to invoke an operation on values that are not appropriate for this operation."

this.age, when less than 18, is a value that is inappropriate for the buyPorn()
operation, and so this satisfies your definition of a type error. My point was
that it's difficult to see any kind of program error that would *not* satisfy
this definition.

--
David Hopwood <da******************@blueyonder.co.uk>
Jun 26 '06 #455
Joachim Durchholz wrote:
That's actually not a design choice


It's certainly a choice you can get wrong, as you say. ;-)

I mean, if "without runtime safety" is a choice, I expect picking the
wrong choice here can be. :-)

--
Darren New / San Diego, CA, USA (PST)
Native Americans used every part
of the buffalo, including the wings.
Jun 26 '06 #456
Chris Smith wrote:
Pascal Costanza <pc@p-cos.net> wrote:
Chris Smith wrote:
Of course zero is not appropriate as a second argument to the division
operator! I can't possibly see how you could claim that it is. The
only reasonable statement worth making is that there doesn't exist a
type system in widespread use that is capable of checking this.

...and this is typically not even checked at the stage where type tags
are checked in dynamically-typed languages. Hence, it is not a type
error. (A type error is always what you define to be a type error within
a given type system, right?)


Sure, it's not a type error for that language.
Note, this example was in response to David's reply that my definition
turns every runtime error into a type error. That's not the case, and
that's all I want to say.


But your definition does do that. Your definition of a type error was
when a program attempts to invoke an operation on values that are not
appropriate for this operation.


In my definition, I didn't say who or what decides what values are
appropriate for operations.
Clearly, in this example, the program
is invoking an operation (division) on values that are not appropriate
(zero for the second argument). Hence, if your definition really is a
definition, then this must qualify.


Here, you are asking more from dynamic type systems than any existing
type system provides. The definition of what is considered to be a type
error and what not is always part of the specification of a type system.
However, it sounds as
if you're claiming that it wouldn't be possible for the type system to
do this?

No. I only need an example where a certain error is not a type error in
_some_ language. I don't need to make a universal claim here.


Definitions are understood to be statements of the form "if and only
if". They assert that /everything/ that fits the definition is
describable by the word, and /everything/ that doesn't is not
describable by the word. If that's not what you meant, then we are
still in search of a definition.

If you want to make a statement instead of the sort you've implied
above... namely that a type error is *any* error that's raised by a type
system, then that's fine. It certainly brings us closer to static
types. Now, though, the task is to define a type system without making
a circular reference to types. You've already rejected the statement
that all runtime errors are type errors, because you specifically reject
the division by zero case as a type error for most languages. What is
that distinction?


I don't need such a distinction. I can just define what is covered by a
type system and what is not. All type systems work that way.
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 26 '06 #457
Chris F Clark <cf*@shell01.TheWorld.com> wrote:
Ok, we'll get there. First, we need to step back in time, to when there
was roughly algol, cobol, fortran, and lisp. Back then, at least as I
understood things, there were only a few types that generally people
understood integer, real, and (perhaps) pointer.
In order to continue my tradition of trying to be as precise as
possible, let me point out that of course there were more "candidate
types", as it were, and that people understood them just fine. They
just didn't have a type system in place to make them into real types, so
they didn't think to call them types.
The important thing is the dynamism of lisp allowed one to write
polymorphic programs, before most of us knew the term.
Sure. In exchange for giving up the proofs of the type checker, you
could write all kinds of programs. To this day, we continue to write
programs in languages with dynamic checking features because we don't
have powerful enough type systems to express the corresponding type
system.
Thus, as we traverse a list, the first element might be an integer,
the second a floating point value, the third a sub-list, the fourth
and fifth, two more integers, and so on. If you look statically at
the head of the list, we have a very wide union of types going by.
However, perhaps there is a mapping going on that can be discerned.
For example, perhaps the list has 3 distinct types of elements
(integers, floating points, and sub-lists) and it circles through the
types in the order, first having one of each type, then two of each
type, then four of each type, then eight, and so on. The world is now
patterned.

However, I don't know how to write a static type annotation that
describes exactly that type.
I believe that, in fact, it would be trivial to imagine a type system
which is capable of expressing that type. Okay, not trivial, since it
appears to be necessary to conceive of an infinite family of integer
types with only one value each, along with range types, and
relationships between them; but it's probably not completely beyond the
ability of a very bright 12-year-old who has someone to describe the
problem thoroughly and help her with the notation.

The more interesting problem is whether this type system could be made:
(a) unobtrusive enough, probably via type inference or something along
those lines, that it would be usable in practice; and (b) general enough
that you could define a basic type operator that applies to a wide range
of problems rather than revising your type operator the first time you
need a complete 3-tree of similar form.
That may just be my lack of experience
in writing annotations.
You would, of course, need a suitable type system first. For example,
it appears to me that there is simply no possible way of expressing what
you've described in Java, even with the new generics features. Perhaps
it's possible in ML or Haskell (I don't know). My point is that if you
were allowed to design a type system to meet your needs, I bet you could
do it.
And, this brings us to the point, if the data that my program
encounters doesn't match the model I have formulated, then something
is of the wrong "type". Equally importantly, the dynamic tags, have
helped me discover that type error.
Sure. The important question, then, is whether there exists any program
bug that can't be formulated as a type error. And if so, what good does
it do us to talk about type errors when we already have the perfectly
good word "bug" to describe the same concept?
Now, the example may have seemed arbitrary to you, and it was in some
sense arbitrary.


Arbitrary is good. One of the pesky things that keeps getting in the
way here is the intuition attached to the word "type" that makes
everyone think "string or integer".

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 26 '06 #458
Pascal Costanza <pc@p-cos.net> wrote:
Clearly, in this example, the program
is invoking an operation (division) on values that are not appropriate
(zero for the second argument). Hence, if your definition really is a
definition, then this must qualify.


Here, you are asking more from dynamic type systems than any existing
type system provides. The definition of what is considered to be a type
error and what not is always part of the specification of a type system.


No, I'm not. Were we to follow this path of definition, it would not
require that dynamic type systems catch ALL type errors; only that they
catch some. Not that it matters, though. I am analyzing the logical
consequences of your own definition. If those logical consequences were
impossible to fulfill, that would be an even more convincing reason to
find a new definition. Concepts of fairness don't enter into the
equation.
If you want to make a statement instead of the sort you've implied
above... namely that a type error is *any* error that's raised by a type
system, then that's fine. It certainly brings us closer to static
types. Now, though, the task is to define a type system without making
a circular reference to types. You've already rejected the statement
that all runtime errors are type errors, because you specifically reject
the division by zero case as a type error for most languages. What is
that distinction?


I don't need such a distinction. I can just define what is covered by a
type system and what is not. All type systems work that way.


You've got to define something... or, I suppose, just go on using words
without definitions. You claimed to give a definition, though.

I have been led by others to believe that the right way to go in the
dynamic type sense is to first define type errors, and then just call
anything that finds type errors a type system. That would work, but it
would require a definition of a type error. You seem to want to push that work off of
the word "type error" and back onto "type system", but that requires
that you define what a type system is. It doesn't work for you to say
BOTH that a type system is anything that catches type errors, AND that a
type error is anything that's caught by a type system. That's not a
definition; it's just circular reasoning. If I can plug in: type error
= fly, type system = frog, and get a valid instance of your definitions,
then you aren't giving much of a definition. (Unless you think that
frogs are perfectly good examples of type systems.)

Incidentally, in the case of static type systems, we define the system
(for example, using the definition given from Pierce many times this
thread), and then infer the definition of types and type errors from
there. Because a solid definition has been given first for a type
system without invoking the concepts of types or type errors, this
avoids being circular.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 26 '06 #459
Chris F Clark wrote:
Thus, as we traverse a list, the first element might be an integer,
the second a floating point value, the third a sub-list, the fourth
and fifth, two more integers, and so on. If you look statically at
the head of the list, we have a very wide union of types going by.
However, perhaps there is a mapping going on that can be discerned.
For example, perhaps the list has 3 distinct types of elements
(integers, floating points, and sub-lists) and it circles through the
types in the order, first having one of each type, then two of each
type, then four of each type, then eight, and so on.


Sounds like an interesting problem. Although not the exact type
specified above, here's something pretty similar that I could think of
implementing in Haskell. (I don't know what a "sub-list" is, for
example). Maybe some Haskell wizard could get rid of the tuples?

data Clark a b c = Nil | Cons a (Clark b c (a,a)) deriving Show

clark = (Cons 42 (Cons 3.14 (Cons "abc"
        (Cons (1,2) (Cons (1.2,3.4) (Cons ("foo","bar")
        (Cons ((9,8),(7,6)) (Cons ((0.1,0.2),(0.3,0.4)) Nil))))))))

main = print clark

Jun 27 '06 #460
Pascal Costanza wrote:

Consider division by zero: appropriate arguments for division are
numbers, including the zero.
A bold assertion!

The general question is, what do we do about partial functions?
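
For contrast, one standard static answer is to make the partiality explicit in the result type, so the caller must decide what a zero divisor means. A minimal Haskell sketch:

-- Division that refuses to be partial: the Maybe in the result forces
-- callers to handle the zero case before they can use the quotient.
safeDiv :: Integer -> Integer -> Maybe Integer
safeDiv _ 0 = Nothing
safeDiv n d = Just (n `div` d)

main :: IO ()
main = mapM_ print [safeDiv 10 2, safeDiv 10 0]  -- Just 5, then Nothing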

The dynamic type check will typically not
check whether the second argument is zero, but will count on the fact
that the processor will raise an exception one level deeper.


This is an implementation artifact, and hence not relevant to our
understanding of the issue.
Marshall

Jun 27 '06 #461
George Neuner wrote:

I can know that my conversion of floating point to integer is going to
produce a value within a certain range ... but, in general, the
compiler can't determine what that range will be. All it knows is
that a floating point value is being truncated and the stupid
programmer wants to stick the result into some type too narrow to
represent the range of possible values.

Like I said to Ben, I haven't seen any _practical_ static type system
that can deal with things like this. Writing a generic function is
impossible under the circumstances, and writing a separate function
for each narrow type is ridiculous and a maintenance nightmare even if
they can share the bulk of the code.


This is the partial function question again, in a different guise.
Marshall

Jun 27 '06 #462
I wrote:
The important thing is the dynamism of lisp allowed one to write
polymorphic programs, before most of us knew the term.
Chris Smith <cd*****@twu.net> writes:
Sure. In exchange for giving up the proofs of the type checker, you
could write all kinds of programs. To this day, we continue to write
programs in languages with dynamic checking features because we don't
have powerful enough type systems to express the corresponding type
system.
And to me the question is what kinds of types apply to these dynamic
programs, where in fact you may have to solve the halting problem to
know exactly when some statement is executed. I expect that some
programs have type signatures that exceed the expressibility of any
static system (that is Turing complete). Therefore, we need something
that "computes" the appropriate type at run-time, because we need full
Turing power to compute it. To me such a system is a "dynamic type
system". I think dynamic tags are a good approximation, because they
only compute what type the expression has this time.
I believe that, in fact, it would be trivial to imagine a type system
which is capable of expressing that type. Okay, not trivial, since it
appears to be necessary to conceive of an infinite family of integer
types with only one value each, along with range types, and
relationships between them; but it's probably not completely beyond the
ability of a very bright 12-year-old who has someone to describe the
problem thoroughly and help her with the notation.
Well, it looks like you are right, in that what follows is a Haskell
program that looks essentially correct. I wanted something that was
simple enough that one could see that it could be computed, but which
was complex enough that it had to be computed (and couldn't be
statically expressed with a system that did no "type computations").
Perhaps, there is no such beast. Or, perhaps I just can't formulate
it. Or, perhaps we have static type checkers which can do
computations of unbounded complexity. However, I thought that one of
the characteristics of type systems was that they did not allow
unbounded complexity and weren't Turing Complete.
You would, of course, need a suitable type system first. For example,
it appears to me that there is simply no possible way of expressing what
you've described in Java, even with the new generics features. Perhaps
it's possible in ML or Haskell (I don't know). My point is that if you
were allowed to design a type system to meet your needs, I bet you could
do it.
Or, I could do as I think the dynamic programmers do, dispense with
trying to formulate a sufficiently general type system and just check
the tags at the appropriate points.
Sure. The important question, then, is whether there exists any program
bug that can't be formulated as a type error.


If you allow Turing Complete type systems, then I would say no--every
bug can be reformulated as a type error. If you require your type
system to be less powerful, then some bugs must escape it.

-Chris
Jun 27 '06 #463
"Greg Buchholz" <sl**************@yahoo.com> writes:
Chris F Clark wrote:
Thus, as we traverse a list, the first element might be an integer,
the second a floating point value, the third a sub-list, the fourth
and fifth, two more integers, and so on. If you look statically at
the head of the list, we have a very wide union of types going by.
However, perhaps there is a mapping going on that can be discerned.
For example, perhaps the list has 3 distinct types of elements
(integers, floating points, and sub-lists) and it circles through the
types in the order, first having one of each type, then two of each
type, then four of each type, then eight, and so on.


Sounds like an interesting problem. Although not the exact type
specified above, here's something pretty similar that I could think of
implementing in Haskell. (I don't know what a "sub-list" is, for
example). Maybe some Haskell wizard could get rid of the tuples?

data Clark a b c = Nil | Cons a (Clark b c (a,a)) deriving Show

clark = (Cons 42 (Cons 3.14 (Cons "abc"
        (Cons (1,2) (Cons (1.2,3.4) (Cons ("foo","bar")
        (Cons ((9,8),(7,6)) (Cons ((0.1,0.2),(0.3,0.4)) Nil))))))))

main = print clark


Very impressive. It looks right to me and simple enough to
understand. I must find the time to learn a modern FP language. Can
you write a fold for this that prints the data as a binary tree of
triples? I have to believe it isn't that hard....

-Chris

Jun 27 '06 #464
Chris F Clark wrote:

And to me the question is what kinds of types apply to these dynamic
programs, where in fact you may have to solve the halting problem to
know exactly when some statement is executed. I expect that some
programs have type signatures that exceed the expressibility of any
static system (that is Turing complete). Therefore, we need something
that "computes" the appropriate type at run-time, because we need full
Turing power to compute it. To me such a system is a "dynamic type
system". I think dynamic tags are a good approximation, because they
only compute what type the expression has this time.
Yes, an important question (IMHO the *more* important question
than the terminology) is what *programs* do we give up if we
wish to use static typing? I have never been able to pin this
one down at all.

Frankly, if the *only* issue between static and dynamic was
the static checking of the types, then static typing would
unquestionably be superior. But that's not the only issue.
There are also what I call "packaging" issues, such as
being able to run partly-wrong programs on purpose so
that one would have the opportunity to do runtime analysis
without having to, say, implement parts of some interface
that one isn't interested in testing yet. These could also
be solved in a statically typed language. (Although
historically it hasn't been done that way.)

The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable? I think there might be, but I've
never been able to find a solid example of one.

Perhaps, there is no such beast. Or, perhaps I just can't formulate
it. Or, perhaps we have static type checkers which can do
computations of unbounded complexity. However, I thought that one of
the characteristics of type systems was that they did not allow
unbounded complexity and weren't Turing Complete.
The C++ type system is Turing complete, although in practical terms
it limits how much processing power it will spend on types at
compile time.
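
The same phenomenon can be sketched outside C++. Here is a compile-time computation in Haskell, using GHC type-family extensions (which postdate this discussion, so this is an illustration, not what was meant above); note that UndecidableInstances is exactly the switch that lifts the termination boundary:

{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, UndecidableInstances #-}
import Data.Proxy (Proxy (..))
import GHC.TypeLits (Nat, natVal, type (*), type (-))

-- Factorial evaluated by the type checker, not at runtime.
type family Fact (n :: Nat) :: Nat where
  Fact 0 = 1
  Fact n = n * Fact (n - 1)

main :: IO ()
main = print (natVal (Proxy :: Proxy (Fact 5)))  -- 120, computed at compile time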

If you allow Turing Complete type systems, then I would say no--every
bug can be reformulated as a type error. If you require your type
system to be less powerful, then some bugs must escape it.


I don't think so. Even with a Turing complete type system, a program's
runtime behavior is still something different from its static behavior.
(This is the other side of the "types are not tags" issue--not only
is it the case that there are things static types can do that tags
can't, it is also the case that there are things tags can do that
static types can't.)
Marshall

Jun 27 '06 #465
Chris F Clark <cf*@shell01.TheWorld.com> wrote:
And to me the question is what kinds of types apply to these dynamic
programs, where in fact you may have to solve the halting problem to
know exactly when some statement is executed.
Yes, I believe (static) type systems will always end up approximating
(conservatively) the possible behavior of programs in order to perform
their verification.
Or, perhaps we have static type checkers which can do
computations of unbounded complexity. However, I thought that one of
the characteristics of type systems was that they did not allow
unbounded complexity and weren't Turing Complete.


Honestly, I suspect you'd get disagreement within the field of type
theory about this. Certainly, there are plenty of researchers who have
proposed type systems that potentially don't even terminate. The
consensus appears to be that they are worth studying within the field of
type theory, but I note that Pierce still hasn't dropped the word
"tractable" from his definition in his introductory text, despite
acknowledging only a couple pages later that languages exist whose type
systems are undecidable, and apparently co-authoring a paper on one of
them. The consensus seems to be that such systems are interesting if
they terminate quickly for interesting cases (in which case, I suppose,
you could hope to eventually be able to formalize what cases are
interesting, and derive a truly tractable type system that checks that
interesting subset).

Interestingly, Pierce gives ML as an example of a language whose type
checker does not necessarily run in polynomial time (thus failing some
concepts of "tractable") but that does just fine in practice. I am just
quoting here, so I don't know exactly how this is true. Marshall
mentioned template meta-programming in C++, which is definitely Turing-
complete.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 27 '06 #466
"Marshall" <ma*************@gmail.com> writes:
Yes, an important question (IMHO the *more* important question
than the terminology) is what *programs* do we give up if we
wish to use static typing? I have never been able to pin this
one down at all.
Well, given Turing Machine equivalence...

I'd mention retrospective software. But you can always implement the
wanted retrospective features as a layer above the statically typed
language.

So the question is how much work the programmer needs to do to
implement a given program with static typing or with dynamic typing.

The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable? I think there might be, but I've
never been able to find a solid example of one.


More than *never* typable, you want to identify some kind of software
that is not *economically* statically typable.

Was it costlier to develop the software developed in non statically
typed programming languages than in a statically typed programming
language?

--
__Pascal Bourguignon__ http://www.informatimago.com/

"I have challenged the entire quality assurance team to a Bat-Leth
contest. They will not concern us again."
Jun 27 '06 #467
Marshall <ma*************@gmail.com> wrote:
Yes, an important question (IMHO the *more* important question
than the terminology) is what *programs* do we give up if we
wish to use static typing? I have never been able to pin this
one down at all.
You'd need to clarify what you mean by "use static typing". Clearly, if
you use forms of static typing that currently exist for Java, there's
considerable limitation in expressiveness. If you mean a hypothetical
"perfect" type system, then there would be no interesting programs you
couldn't write, but such a system may not be possible to implement, and
certainly isn't feasible.

So it seems to me that we have this ideal point at which it is possible
to write all correct or interesting programs, and impossible to write
buggy programs. Pure static type systems try to approach that point
from the conservative side. Pure dynamic systems (and I actually think
that design-by-contract languages and such apply here just as well as
type tagging) try to approach that point via runtime verification from
the liberal side. (No, this has nothing to do with George Bush or Noam
Chomsky... :) Either side finds that the closer they get, the more
creative they need to be to avoid creating development work that's no
longer commensurate with the gains involved (whether in safety or
expressiveness).

I think I am now convinced that the answer to the question of whether
there is a class of type errors in dynamically checked languages is the
same as the answer for static type systems: no. In both cases, it's
just a question of what systems may be provided for the purposes of
verifying (more or less conservatively or selectively) certain program
properties. Type tags or other features that "look like" types are only
relevant in that they encapsulate common kinds of constraints on program
correctness without requiring the programmer to express those
constraints in any more general purpose language.

As for what languages to use right now, we are interestingly enough back
where Xah Lee started this whole thread. All languages (except a few of
the more extreme statically typed languages) are Turing complete, so in
order to compare expressiveness, we'd need some other measure that
considers some factor or factors beyond which operations are
expressible.
There are also what I call "packaging" issues, such as
being able to run partly-wrong programs on purpose so
that one would have the opportunity to do runtime analysis
without having to, say, implement parts of some interface
that one isn't interested in testing yet. These could also
be solved in a statically typed language. (Although
historically it hasn't been done that way.)
I'll note very briefly that the built-in compiler for the Eclipse IDE
has the very nice feature that when a particular method fails to
compile, it can still produce a result but replace the implementation
of that method with one that just throws an Error. I've taken advantage
of that on occasion, though it doesn't allow the same range of options
as a language that will just go ahead and try the buggy operations.
The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable? I think there might be, but I've
never been able to find a solid example of one.


This seems to depend on the specific concept of equivalence of programs
that you have in mind.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 27 '06 #468
David Hopwood <da******************@blueyonder.co.uk> wrote:
Dirk Thierbach wrote:
That's interesting, but linear typing imposes some quite severe
restrictions on programming style. From the example of 'h' on page 2,
it's clear that the reason for the linearity restriction is just to
ensure polynomial-time termination, and is not needed if we only want
to prove termination.
Yes. It's just an example of what can actually be done with a type system.
It's already hard enough to guarantee termination with the extra
information present in the type annotation. If this information is
not present, then the language probably has to be restricted so
severely to ensure termination that it is more or less useless.

I think you're overestimating the difficulty of the problem. The fact
that *in general* it can be arbitrarily hard to prove termination, can
obscure the fact that for large parts of a typical program, proving
termination is usually trivial.


Yes. The problem is the small parts where it's not trivial (and the
size of this part depends on the actual problem the program is trying
to solve). Either you make the language so restrictive that these
parts are hard (see your remark above) or impossible to write, or
you'll be left with a language where the compiler cannot prove
termination of those small parts, so it's not a "type system" in the
usual sense.

Of course one could write an optional tool that automatically proves
termination of the easy parts, and leaves the rest to the programmer,
but again, that's a different thing.

- Dirk
Jun 27 '06 #469
"Marshall" <ma*************@gmail.com> writes:
There are also what I call "packaging" issues, such as
being able to run partly-wrong programs on purpose so
that one would have the opportunity to do runtime analysis
without having to, say, implement parts of some interface
that one isn't interested in testing yet. These could also
be solved in a statically typed language. (Although
historically it hasn't been done that way.)


I keep hearing this, but I guess I fail to understand it. How
"partly-wrong" do you require the program to be?

During development, I frequently sprinkle my (Haskell) program with
'undefined' and 'error "implement later"' - which then lets me run the
implemented parts until execution hits one of the 'undefined's.

The "cost" of static typing for running an incomplete program is thus
simply that I have to name entities I refer to. I'd argue that this
is good practice anyway, since it's easy to see what remains to be
done. (Python doesn't seem to, but presumably a serious dynamically
typed system would warn about references to entities not in scope?)
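
Concretely, the style described above looks something like this minimal sketch (the names are made up):

-- 'undefined :: a' and 'error :: String -> a' inhabit every type, so a
-- typed stub satisfies the checker while the finished parts still run.
report :: [Int] -> String
report = error "implement later"

summarize :: [Int] -> Int
summarize = sum

main :: IO ()
main = print (summarize [1, 2, 3])  -- fine until something forces report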

-k
--
If I haven't seen further, it is by standing in the footprints of giants
Jun 27 '06 #470
Chris Smith wrote:
So it seems to me that we have this ideal point at which it is
possible to write all correct or interesting programs, and impossible
to write buggy programs.


I think that is a misconception. Even at the most ideal point it will be
possible (and easy) to write buggy programs. Gödel!

--
Anyway, Ruud

"Ordinary is a tiger."
Jun 27 '06 #471
Marshall wrote:
The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable?


In a statically typed language that has a "dynamic" type, all
dynamically typed programs are straightforwardly expressible.

In a statically typed, Turing-complete language that does not have a
"dynamic" type, it is always *possible* to express a dynamically typed
program, but this may require a global transformation of the program
or use of an interpreter -- i.e. the "Felleisen expressiveness" of the
language is reduced.
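
As a concrete instance of the first case, GHC's Data.Dynamic provides such a type. A minimal sketch, with a [Dynamic] list standing in for a dynamically typed heterogeneous list:

import Data.Dynamic (Dynamic, fromDynamic, toDyn)

-- Each element carries its own runtime type representation.
stuff :: [Dynamic]
stuff = [toDyn (42 :: Int), toDyn "abc", toDyn (3.14 :: Double)]

-- fromDynamic is the runtime tag check: Just on a match, Nothing
-- otherwise; the program, not the type checker, handles the mismatch.
main :: IO ()
main = print (map (fromDynamic :: Dynamic -> Maybe Int) stuff)
-- [Just 42,Nothing,Nothing]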

--
David Hopwood <da******************@blueyonder.co.uk>
Jun 27 '06 #472
Ketil Malde wrote:
"Marshall" <ma*************@gmail.com> writes:
There are also what I call "packaging" issues, such as
being able to run partly-wrong programs on purpose so
that one would have the opportunity to do runtime analysis
without having to, say, implement parts of some interface
that one isn't interested in testing yet. These could also
be solved in a statically typed language. (Although
historically it hasn't been done that way.)


I keep hearing this, but I guess I fail to understand it. How
"partly-wrong" do you require the program to be?

During development, I frequently sprinkle my (Haskell) program with
'undefined' and 'error "implement later"' - which then lets me run the
implemented parts until execution hits one of the 'undefined's.

The "cost" of static typing for running an incomplete program is thus
simply that I have to name entities I refer to. I'd argue that this
is good practice anyway, since it's easy to see what remains to be
done. (Python doesn't seem to, but presumably a serious dynamically
typed system would warn about references to entities not in scope?)


I'll give you my understanding of it, but bear in mind that I only
program in statically typed languages, and I in fact do exactly what
you describe above: stub out unimplemented methods.

The issue is that, while stubbing out things may not be expensive,
it is not free either. There is some mental switching cost to being
in a state where one writes some code, wants to execute it, but
can't, because of type system constraints that are globally applicable
but not locally applicable to the task at hand, or to the specific
subprogram one is working on right at that moment.

Since I'm very used to doing it, it doesn't seem like an issue to
me, but programmers in dynamic languages complain bitterly
about this. It is my feeling that programming languages should
try to be inclusive, and since this feature is easily enough
accommodated (as a 'lenient' switch to execution), it ought
to be.
Marshall

Jun 27 '06 #473
David Hopwood wrote:
Marshall wrote:
The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable?


In a statically typed language that has a "dynamic" type, all
dynamically typed programs are straightforwardly expressible.


So, how does this "dynamic" type work? It can't simply be
the "any" type, because that type has no/few functions defined
on it. It strikes me that *when* exactly binding happens will
matter. In a statically typed language, it may be that all
binding occurs at compile time, and in a dynamic language,
it may be that all binding occurs at runtime. So you might
have to change the binding mechanism as well. Does the
"dynamic" type allow this?
Marshall

Jun 27 '06 #474
On Mon, 26 Jun 2006 13:02:33 -0600, Chris Smith <cd*****@twu.net>
wrote:
George Neuner <gneuner2/@comcast.net> wrote:
I worked in signal and image processing for many years and those are
places where narrowing conversions are used all the time - in the form
of floating point calculations reduced to integer to value samples or
pixels, or to value or index lookup tables. Often the same
calculation needs to be done for several different purposes.


These are value conversions, not type conversions. Basically, when you
convert a floating point number to an integer, you are not simply
promising the compiler something about the type; you are actually asking
the compiler to convert one value to another -- i.e., see to it that
whatever this is now, it /becomes/ an integer by the time we're done.
This also results in a type conversion, but you've just converted the
value to the appropriate form. There is a narrowing value conversion,
but the type conversion is perfectly safe.


We're talking at cross purposes. I'm questioning whether a strong
type system can be completely static as some people here seem to
think. I maintain that it is simply not possible to make compile time
guarantees about *all* runtime behavior and that, in particular,
narrowing conversions will _always_ require runtime checking.
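
As an illustration of the kind of check in question, here is a minimal Haskell sketch of a narrowing conversion guarded by an explicit runtime range test (Int8 is just an example target type):

import Data.Int (Int8)

-- Truncate a Double and accept the result only if it fits the target;
-- the range test is the runtime check that, in general, no static
-- analysis can discharge.
narrowToInt8 :: Double -> Maybe Int8
narrowToInt8 x
  | t >= lo && t <= hi = Just (fromIntegral t)
  | otherwise          = Nothing
  where
    t  = truncate x :: Integer
    lo = toInteger (minBound :: Int8)
    hi = toInteger (maxBound :: Int8)

main :: IO ()
main = mapM_ (print . narrowToInt8) [42.7, 1000.0, -3.2]
-- Just 42, Nothing, Just (-3)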

I can know that my conversion of floating point to integer is going to
produce a value within a certain range ... but, in general, the
compiler can't determine what that range will be.


If you mean "my compiler can't", then this is probably the case. If you
mean "no possible compiler could", then I'm not sure this is really very
likely at all.


Again, the discussion is about narrowing the result. It doesn't
matter how much the compiler knows about the ranges. When the
computation mixes types, the range of the result can only widen as the
compiler determines the types involved.

George
--
for email reply remove "/" from address
Jun 27 '06 #475
Ketil Malde <ke********@ii.uib.no> writes:
"Marshall" <ma*************@gmail.com> writes:
There are also what I call "packaging" issues, such as
being able to run partly-wrong programs on purpose so
that one would have the opportunity to do runtime analysis
without having to, say, implement parts of some interface
that one isn't interested in testing yet. These could also
be solved in a statically typed language. (Although
historically it hasn't been done that way.)
I keep hearing this, but I guess I fail to understand it. How
"partly-wrong" do you require the program to be?


This conclusion is false. To be "wrong", whether partly or fully,
a program needs to specifically not conform to the requirements that
the programmer gives it. You are assuming that only a completed
program can be right. Let me give a counterexample:

Consider this part of a CL program:

CL-USER(1): (defun foo (x)
              (declare (optimize speed (safety 0) (debug 0))
                       (fixnum x))
              (bar (the fixnum (1+ x))))
FOO
CL-USER(2):

This is of course not a complete program, because the function BAR is
not yet defined. If I try to run it, it will of course get an error.
But does that require then that I call this a "partly wrong" program?
No, of course not; it is not a program; it is a function, and for my
purposes it is completely correct (I've debugged it already :-)

So what can we do with an incomplete program? Nothing? Well, no. Of
course, we can't run it (or can we?). But even before talking about
running the program, we can at least do some reasoning about it:

1. It seems to have some static type declarations (in a dynamic
language? oh my :-).

2. The 1+ operation limits the kinds of operations that would be
acceptable, even in the absence of static type declarations.

3. The declarations can be ignored by the CL implementation, so #2
might indeed come into play. I ensured that the declarations would
be "trusted" in Allegro CL, by declaring high speed and low safety.

4. I can compile this function. Note that I get a warning about the
incompleteness of the program:

CL-USER(2): (compile 'foo)
Warning: While compiling these undefined functions were referenced: BAR.
FOO
NIL
NIL
CL-USER(3):

5. I can disassemble the function. Note that it is a complete
function, despite the fact that BAR is undefined:

CL-USER(3): (disassemble 'foo)
;; disassembly of #<Function FOO>
;; formals: X
;; constant vector:
0: BAR

;; code start: #x406f07ec:
0: 83 c0 04 addl eax,$4
3: 8b 5e 12 movl ebx,[esi+18] ; BAR
6: b1 01 movb cl,$1
8: ff e7 jmp *edi
CL-USER(4):

6. I can _even_ run the program! Yes, I get an error:

CL-USER(4): (foo 10)
Error: attempt to call `BAR' which is an undefined function.
[condition type: UNDEFINED-FUNCTION]

Restart actions (select using :continue):
0: Try calling BAR again.
1: Return a value instead of calling BAR.
2: Try calling a function other than BAR.
3: Setf the symbol-function of BAR and call it again.
4: Return to Top Level (an "abort" restart).
5: Abort entirely from this (lisp) process.
[1] CL-USER(5):

7. But note first that I have many options, including retrying the
call to BAR. What was this call to BAR? I can reason about that as
well, by getting a backtrace:

[1] CL-USER(5): :zoom
Evaluation stack:

(ERROR #<UNDEFINED-FUNCTION @ #x406f3dfa>)
->(BAR 11)
[... EXCL::%EVAL ]
(EVAL (FOO 10))
(TPL:TOP-LEVEL-READ-EVAL-PRINT-LOOP)
(TPL:START-INTERACTIVE-TOP-LEVEL
#<TERMINAL-SIMPLE-STREAM [initial terminal io] fd 0/1 @ #x40120e5a>
#<Function TOP-LEVEL-READ-EVAL-PRINT-LOOP> ...)
[1] CL-USER(6):

8. If I then define BAR, with or without compiling it, and then take
that option to retry the call:

[1] CL-USER(6): (defun bar (x) (1- x))
BAR
[1] CL-USER(7): :cont
10
CL-USER(8):

Hmm, I was able to complete the program dynamically? Who ever heard
of doing that? :-)
During development, I frequently sprinkle my (Haskell) program with
'undefined' and 'error "implement later"' - which then lets me run the
implemented parts until execution hits one of the 'undefined's.
Why do it manually? And what do you do when you've hit the undefined?
Start the program over?
The "cost" of static typing for running an incomplete program is thus
simply that I have to name entities I refer to. I'd argue that this
is good practice anyway, since it's easy to see what remains to be
done.


Good practice, yes, but why not have the language help you to practice
it?

--
Duane Rettig du***@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
Jun 27 '06 #476
Chris F Clark wrote:
Very impressive. It looks right to me and simple enough to
understand. I must find the time to learn a modern FP language. Can
you write a fold for this that prints the data as a binary tree of
triples? I have to believe it isn't that hard....


{- Refactoring this as a fold is left as an exercise to the reader -}

data Clark a b c = Nil | Cons a (Clark b c (a,a)) deriving Show

clark =
    (Cons 42 (Cons 3.14 (Cons "abc"
    (Cons (1,2) (Cons (1.2,3.4) (Cons ("foo","bar")
    (Cons ((9,8),(7,6)) (Cons ((0.1,0.2),(0.3,0.4))
    (Cons (("w","x"),("y","z")) Nil)))))))))

main = print (toTree clark)

data Tree a = Empty | Node a (Tree a) (Tree a) deriving Show

toTree :: Clark a b c -> Tree (a,b,c)
toTree (Cons x (Cons y (Cons z rest)))
    = Node (x,y,z)
           (toTree (fst (lift_c rest)))
           (toTree (snd (lift_c rest)))
toTree _ = Empty

lift_c :: Clark (a,a) (b,b) (c,c) -> (Clark a b c, Clark a b c)
lift_c Nil = (Nil,Nil)
lift_c (Cons (x,y) rest) = (Cons x (fst (lift_c rest)),
                            Cons y (snd (lift_c rest)))

Jun 27 '06 #477

Marshall wrote:

Yes, an important question (IMHO the *more* important question
than the terminology) is what *programs* do we give up if we
wish to use static typing? I have never been able to pin this
one down at all.
It would depend on the type system, naturally.

It isn't clear to me which programs we would have to give up, either.
I don't have much experience in sophisticated typed languages. It is
rather easy to find programs that baffle an unsophisticated typed
language (C, C++, Java, etc.).

Looking back in comp.lang.lisp, I see these examples:

(defun noisy-apply (f arglist)
  (format t "I am now about to apply ~s to ~s" f arglist)
  (apply f arglist))

(defun blackhole (argument)
  (declare (ignore argument))
  #'blackhole)

But wait a sec. It seems that these were examples I invented in
response to the same question from you!


The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable?


Certainly! As soon as you can reflect on the type system you can
construct programs that type-check iff they fail to type-check.
Perhaps, there is no such beast. Or, perhaps I just can't formulate
it. Or, perhaps we have static type checkers which can do
computations of unbounded complexity. However, I thought that one of
the characteristics of type systems was that they did not allow
unbounded complexity and weren't Turing Complete.


The C++ type system is Turing complete, although in practical terms
it limits how much processing power it will spend on types at
compile time.


I think templates are only guaranteed to expand to seventeen levels, so
you are severely limited here.
If you allow Turing Complete type systems, then I would say no--every
bug can be reformulated as a type error. If you require your type
system to be less powerful, then some bugs must escape it.


I don't think so. Even with a Turing complete type system, a program's
runtime behavior is still something different from its static behavior.
(This is the other side of the "types are not tags" issue--not only
is it the case that there are things static types can do that tags
can't, it is also the case that there are things tags can do that
static types can't.)


I agree. The point of static checking is to *not* run the program. If
the type system gets too complicated, it may be a de-facto interpreter.

Jun 27 '06 #478
>> The C++ type system is Turing complete, although in practical terms
it limits how much processing power it will spend on types at
compile time.


I think templates are only guaranteed to expand to seventeen levels, so
you are severely limited here.


You can adjust the iteration depth. However, since Turing-completeness implies
the potential for infinite loops, the compiler enforces _some_ boundary.
Otherwise the user hitting C-c will have to do :)
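
The same tension exists in Haskell's type-class machinery: with undecidable
instances the constraint solver can loop, so GHC imposes an adjustable depth
limit. A deliberately pathological sketch (the class name is invented):

{-# LANGUAGE FlexibleInstances, UndecidableInstances #-}

class Loop a where
  loop :: a -> a

-- Solving 'Loop a' requires solving 'Loop a' again, forever.
instance Loop a => Loop a where
  loop = id

-- Type-checking this recurses until GHC hits its limit and reports a
-- reduction-stack overflow (the limit is tunable, e.g. -freduction-depth=N).
spin :: Int
spin = loop 0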

Diez
Jun 27 '06 #479
Joe Marshall wrote:
Marshall wrote:
Yes, an important question (IMHO the *more* important question
than the terminology) is what *programs* do we give up if we
wish to use static typing? I have never been able to pin this
one down at all.


It would depend on the type system, naturally.

It isn't clear to me which programs we would have to give up, either.
I don't have much experience in sophisticated typed languages. It is
rather easy to find programs that baffle an unsophisticated typed
language (C, C++, Java, etc.).

Looking back in comp.lang.lisp, I see these examples:

(defun noisy-apply (f arglist)
(format t "I am now about to apply ~s to ~s" f arglist)
(apply f arglist))

(defun blackhole (argument)
(declare (ignore argument))
#'blackhole)

But wait a sec. It seems that these were examples I invented in
response to the same question from you!

The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable?


Certainly! As soon as you can reflect on the type system you can
construct programs that type-check iff they fail to type-check.


Sorry, but can you elaborate on this last point a little? I think I
missed something.
Jun 27 '06 #480
Marshall wrote:
David Hopwood wrote:
Marshall wrote:
The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable?
In a statically typed language that has a "dynamic" type, all
dynamically typed programs are straightforwardly expressible.


So, how does this "dynamic" type work?


<http://citeseer.ist.psu.edu/abadi89dynamic.html>
It can't simply be the "any" type, because that type has no/few
functions defined on it.


It isn't. From the abstract of the above paper:

[...] even in statically typed languages, there is often the need to
deal with data whose type cannot be determined at compile time. To handle
such situations safely, we propose to add a type Dynamic whose values are
pairs of a value v and a type tag T where v has the type denoted by T.
Instances of Dynamic are built with an explicit tagging construct and
inspected with a type safe typecase construct.
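
GHC's Data.Dynamic library is essentially this construct: toDyn pairs a
value with a runtime type representation, and fromDynamic plays the role
of the type-safe typecase. A small sketch:

import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- A heterogeneous list; each element carries its type tag.
values :: [Dynamic]
values = [toDyn (42 :: Int), toDyn "hello", toDyn (3.14 :: Double)]

-- A poor man's typecase: try the expected types one by one.
describe :: Dynamic -> String
describe d =
  case (fromDynamic d :: Maybe Int, fromDynamic d :: Maybe String) of
    (Just n, _) -> "an Int: " ++ show n
    (_, Just s) -> "a String: " ++ s
    _           -> "something else"

main :: IO ()
main = mapM_ (putStrLn . describe) values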

"Gradual typing" as described in
<http://www.cs.rice.edu/~jgs3847/pubs/pubs/2006/siek06:_gradual.pdf> is
another alternative. The difference between gradual typing and a
"dynamic" type is one of convenience rather than expressiveness --
gradual typing does not require explicit tagging and typecase constructs.

--
David Hopwood <da******************@blueyonder.co.uk>
Jun 27 '06 #481
George Neuner <gneuner2/@comcast.net> wrote:
We're talking at cross purposes. I'm questioning whether a strong
type system can be completely static as some people here seem to
think. I maintain that it is simply not possible to make compile time
guarantees about *all* runtime behavior and that, in particular,
narrowing conversions will _always_ require runtime checking.
Nah, we're not at cross-purposes. I'm discussing the same thing you
are. My answer is that while narrowing conversions are not (by
definition) type-safe, a sufficiently strong type system can remove
these type-unsafe narrowing conversions from a program.
Again, the discussion is about narrowing the result. It doesn't
matter how much the compiler knows about the ranges. When the
computation mixes types, the range of the result can only widen as the
compiler determines the types involved.


Well, there are certain operations that can reduce the range. For
example, dividing a number that's known to be in the range 6-10 by the
exact constant 2 yields a result that's guaranteed to be in the range 3-
5. That's a smaller range.
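
As a Haskell sketch of that interval reasoning (the Range type is invented
for illustration):

-- An inclusive integer interval, as a range checker might track it.
data Range = Range { low :: Int, high :: Int } deriving Show

-- Dividing by a positive constant narrows the interval, because integer
-- division is monotone when the divisor is positive.
divRange :: Range -> Int -> Range
divRange (Range l h) k
  | k > 0     = Range (l `div` k) (h `div` k)
  | otherwise = error "this sketch covers positive divisors only"

-- divRange (Range 6 10) 2 == Range {low = 3, high = 5}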

That said, though, there is a more fundamental point here. When you
perform a narrowing type conversion, one of two things must be true:
(a) you know more about the types than the compiler, and thus know that
the conversion is safe, or (b) your code is broken. Excluding the
possibility that you've written buggy code, you must possess some
knowledge that convinces you the conversion is safe. In that case, we
need only allow the type system to have that same knowledge, and the
problem is solved.

(One thing worth pointing out here is that it's quite possible that you
want to do something different depending on the kind of data. In that
case, the sufficiently powerful type system would need to have rules so
that an if statement creates a modified type environment to take that
into account. This is different from a runtime type check, in that you
are writing explicit code that provides correct program behavior in
either case.)
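
One way to read that in Haskell terms: make the narrowing conversion return
evidence, so that both outcomes must be handled explicitly. A minimal sketch:

-- A checked narrowing from Integer to Int. The Maybe result acts as the
-- modified type environment: in the Just branch you hold a genuine Int;
-- in the Nothing branch you provably don't.
narrow :: Integer -> Maybe Int
narrow n
  | n >= fromIntegral (minBound :: Int)
  , n <= fromIntegral (maxBound :: Int) = Just (fromIntegral n)
  | otherwise                           = Nothing

main :: IO ()
main = case narrow 123456789 of
  Just i  -> print (i + 1)            -- safe: no silent truncation
  Nothing -> putStrLn "out of range"  -- explicit handling, not a runtime trap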

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 27 '06 #482
Dr.Ruud <rv********@isolution.nl> wrote:
So it seems to me that we have this ideal point at which it is
possible to write all correct or interesting programs, and impossible
to write buggy programs.


I think that is a misconception. Even at the idealest point it will be
possible (and easy) to write buggy programs. Gödel!


I agree. I never said that the ideal point is achievable... only that
there is a convergence toward it. (In some cases, that convergence may
have stalled at some equilibrium point because of the costs of being
near that point.)

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 27 '06 #483

David Hopwood wrote:
Marshall wrote:
David Hopwood wrote:
Marshall wrote:

The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable?

In a statically typed language that has a "dynamic" type, all
dynamically typed programs are straightforwardly expressible.
So, how does this "dynamic" type work?


<http://citeseer.ist.psu.edu/abadi89dynamic.html>
It can't simply be the "any" type, because that type has no/few
functions defined on it.


It isn't. From the abstract of the above paper:

[...] even in statically typed languages, there is often the need to
deal with data whose type cannot be determined at compile time. To handle
such situations safely, we propose to add a type Dynamic whose values are
pairs of a value v and a type tag T where v has the type denoted by T.
Instances of Dynamic are built with an explicit tagging construct and
inspected with a type safe typecase construct.


Well, all this says is that the type "dynamic" is a way to explicitly
indicate the inclusion of rtti. But that doesn't address my objection;
if a typesafe typecase construct is required, it's not like using
a dynamic language. They don't require typecase to inspect values
before one can, say, invoke a function.

"Gradual typing" as described in
<http://www.cs.rice.edu/~jgs3847/pubs/pubs/2006/siek06:_gradual.pdf> is
another alternative. The difference between gradual typing and a
"dynamic" type is one of convenience rather than expressiveness --
gradual typing does not require explicit tagging and typecase constructs.


Perhaps this is the one I should read; it sounds closer to what I'm
talking about.
Marshall

Jun 27 '06 #484
Joe Marshall wrote:
Marshall wrote:

Yes, an important question (IMHO the *more* important question
than the terminology) is what *programs* do we give up if we
wish to use static typing? I have never been able to pin this
one down at all.
It would depend on the type system, naturally.


Naturally!

It isn't clear to me which programs we would have to give up, either.
I don't have much experience in sophisticated typed languages. It is
rather easy to find programs that baffle an unsophisticated typed
language (C, C++, Java, etc.).
C and Java, certainly, but I'm wary these days about making
any statement about limitations on C++'s type system, for it is
subtle and quick to anger.

Looking back in comp.lang.lisp, I see these examples:

(defun noisy-apply (f arglist)
(format t "I am now about to apply ~s to ~s" f arglist)
(apply f arglist))

(defun blackhole (argument)
(declare (ignore argument))
#'blackhole)

But wait a sec. It seems that these were examples I invented in
response to the same question from you!


Ah, how well I remember that thread, and how little I got from it.

My memories of that thread are
1) the troll who claimed that Java was unable to read two numbers from
standard input and add them together.
2) The fact that all the smart people agreed the blackhole
function indicated something profound.
3) The fact that I didn't understand the black hole function.

The noisy-apply function I think I understand; it's generic on the
entire arglist. In fact, if I read it correctly, it's even generic
on the *arity* of the function, which is actually pretty impressive.
True? This is an issue I've been wrestling with in my own type
system investigations: how to address genericity across arity.

Does noisy-apply get invoked the same way as other functions?
That would be cool.

As to the black hole function, could you explain it a bit? I apologize
for my lisp-ignorance. I am sure there is a world of significance
in the # ' on the third line, but I have no idea what it is.

If you allow Turing Complete type systems, then I would say no--every
bug can be reformulated as a type error. If you require your type
system to be less powerful, then some bugs must escape it.


I don't think so. Even with a Turing complete type system, a program's
runtime behavior is still something different from its static behavior.
(This is the other side of the "types are not tags" issue--not only
is it the case that there are things static types can do that tags
can't, it is also the case that there are things tags can do that
static types can't.)


I agree. The point of static checking is to *not* run the program. If
the type system gets too complicated, it may be a de-facto interpreter.


Aha! A lightbulb just went off.

Consider that a program is a function parameterized on its input. Static
analysis is exactly the field of what we can say about a program without
knowing the values of its parameters.
Marshall

Jun 27 '06 #485
Marshall wrote:
David Hopwood wrote:
Marshall wrote:
David Hopwood wrote:
Marshall wrote:

>The real question is, are there some programs that we
>can't write *at all* in a statically typed language, because
>they'll *never* be typable?

In a statically typed language that has a "dynamic" type, all
dynamically typed programs are straightforwardly expressible.

So, how does this "dynamic" type work?


<http://citeseer.ist.psu.edu/abadi89dynamic.html>
It can't simply be the "any" type, because that type has no/few
functions defined on it.


It isn't. From the abstract of the above paper:

[...] even in statically typed languages, there is often the need to
deal with data whose type cannot be determined at compile time. To handle
such situations safely, we propose to add a type Dynamic whose values are
pairs of a value v and a type tag T where v has the type denoted by T.
Instances of Dynamic are built with an explicit tagging construct and
inspected with a type safe typecase construct.


Well, all this says is that the type "dynamic" is a way to explicitly
indicate the inclusion of rtti. But that doesn't address my objection;
if a typesafe typecase construct is required, it's not like using
a dynamic language. They don't require typecase to inspect values
before one can, say, invoke a function.


I was answering the question posed above: "are there some programs that
we can't write *at all* in a statically typed language...?"
"Gradual typing" as described in
<http://www.cs.rice.edu/~jgs3847/pubs/pubs/2006/siek06:_gradual.pdf> is
another alternative. The difference between gradual typing and a
"dynamic" type is one of convenience rather than expressiveness --
gradual typing does not require explicit tagging and typecase constructs.


Perhaps this is the one I should read; it sounds closer to what I'm
talking about.


Right; convenience is obviously important, as well as expressiveness.

--
David Hopwood <da******************@blueyonder.co.uk>
Jun 27 '06 #486
David Hopwood wrote:
Marshall wrote:
David Hopwood wrote:
Marshall wrote:

The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable?

In a statically typed language that has a "dynamic" type, all
dynamically typed programs are straightforwardly expressible.

Correction: as the paper on gradual typing referenced below points
out in section 5, something like the "typerec" construct of
<http://citeseer.ist.psu.edu/harper95compiling.html> would be
needed, in order for a system with a "dynamic" type to be equivalent
in expressiveness to dynamic typing or gradual typing.

("typerec" can be used to implement "dynamic"; it is more general.)
So, how does this "dynamic" type work?
<http://citeseer.ist.psu.edu/abadi89dynamic.html>
It can't simply be the "any" type, because that type has no/few
functions defined on it.


It isn't. From the abstract of the above paper:

[...] even in statically typed languages, there is often the need to
deal with data whose type cannot be determined at compile time. To handle
such situations safely, we propose to add a type Dynamic whose values are
pairs of a value v and a type tag T where v has the type denoted by T.
Instances of Dynamic are built with an explicit tagging construct and
inspected with a type safe typecase construct.

"Gradual typing" as described in
<http://www.cs.rice.edu/~jgs3847/pubs/pubs/2006/siek06:_gradual.pdf> is
another alternative. The difference between gradual typing and a
"dynamic" type


This should be: the difference between gradual typing and a system with
"typerec"...
is one of convenience rather than expressiveness --
gradual typing does not require explicit tagging and typecase constructs.


--
David Hopwood <da******************@blueyonder.co.uk>
Jun 27 '06 #487
Chris Smith wrote:
Pascal Costanza <pc@p-cos.net> wrote:
Clearly, in this example, the program
is invoking an operation (division) on values that are not appropriate
(zero for the second argument). Hence, if your definition really is a
definition, then this must qualify.

Here, you are asking more from dynamic type systems than any existing
type system provides. The definition of what is considered to be a type
error and what not is always part of the specification of a type system.


No, I'm not. Were we to follow this path of definition, it would not
require that dynamic type systems catch ALL type errors; only that they
catch some. Not that it matters, though. I am analyzing the logical
consequences of your own definition. If those logical consequences were
impossible to fulfill, that would be an even more convincing reason to
find a new definition. Concepts of fairness don't enter into the
equation.


Yes, they do. All static type systems define only a strict subset of all
possible errors to be covered, and leave others as runtime errors. The
same holds for dynamic type systems.
If you want to make a statement instead of the sort you've implied
above... namely that a type error is *any* error that's raised by a type
system, then that's fine. It certainly brings us closer to static
types. Now, though, the task is to define a type system without making
a circular reference to types. You've already rejected the statement
that all runtime errors are type errors, because you specifically reject
the division by zero case as a type error for most languages. What is
that distinction?

I don't need such a distinction. I can just define what is covered by a
type system and what is not. All type systems work that way.


You've got to define something... or, I suppose, just go on using words
without definitions. You claimed to give a definition, though.

I have been led by others to believe that the right way to go in the
dynamic type sense is to first define type errors, and then just call
anything that finds type errors a type system. That would work, but it
would require a definition of a type error. You seem to want to push that
work off of
the word "type error" and back onto "type system", but that requires
that you define what a type system is.


I didn't know I was supposed to give a definition.
Incidentally, in the case of static type systems, we define the system
(for example, using the definition given from Pierce many times this
thread), and then infer the definition of types and type errors from
there. Because a solid definition has been given first for a type
system without invoking the concepts of types or type errors, this
avoids being circular.


Do you mean this one? "A type system is a tractable syntactic method for
proving the absence of certain program behaviors by classifying phrases
according to the kinds of values they compute." This isn't really such a
strong definition because it shifts the problem of what exactly a type
system is to finding a definition for the word "kind".

But if this makes you happy, here is an attempt:

"A dynamic type system is a tractable syntactic method for proving the
absence of certain program behaviors by classifying values according to
their kinds."

Basically, this correlates with the typical approach of using tags to
indicate the type/kind/class of values. And indeed, this is what dynamic
type systems typically do: they check tags associated with values. Other
kinds of runtime errors are not raised because of such checks, but
because of other reasons. Therefore, there is naturally a distinction
between runtime errors that are type errors and those that are not.

I am not convinced that this definition of mine is a lot clearer than
what I have said before, but I am also not convinced that Pierce's
definition is any clearer either. It is merely a characterization of
what static type systems do.

The preciseness comes into play when actually defining a type system:
then you have to give definitions what the errors at runtime are that
you want to prove absent by way of the static type system, give the
typing rules for the type system, and then prove soundness by showing
that the typing rules correlate precisely to the runtime errors in the
first stage. Since you have to map two different views of the same thing
to each other you have to be precise in giving definitions that you can
then successfully use in your proofs.

In a dynamic type system, this level of precision is not necessary, because
it is trivially true that what the dynamic type system catches are exactly
the dynamic type errors, and most programmers don't care that there is a
distinction between runtime type errors and runtime "other" errors. But
this doesn't mean that this distinction doesn't exist.
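
For concreteness, a tiny interpreter fragment in Haskell (the names are
invented): the tag check failing is the dynamic *type* error, while division
by zero passes the tag check and fails for an unrelated reason.

data Value = Num Double | Str String deriving Show

-- Tag mismatch: this is the runtime type error.
plus :: Value -> Value -> Either String Value
plus (Num x) (Num y) = Right (Num (x + y))
plus _       _       = Left "type error: + expects two numbers"

-- Both tags check out here; the failure is a runtime error that is
-- not a type error.
divide :: Value -> Value -> Either String Value
divide (Num _) (Num 0) = Left "arithmetic error: division by zero"
divide (Num x) (Num y) = Right (Num (x / y))
divide _       _       = Left "type error: / expects two numbers"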
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 27 '06 #488
Marshall wrote:
Pascal Costanza wrote:
Consider division by zero: appropriate arguments for division are
numbers, including zero.


A bold assertion!

The general question is, what do we do about partial functions?

The dynamic type check will typically not
check whether the second argument is zero, but will count on the fact
that the processor will raise an exception one level deeper.


This is an implementation artifact, and hence not relevant to our
understanding of the issue.


No, it's not. I am pretty sure that you can model this formally.
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 27 '06 #489
David Hopwood <...nospam....uk> wrote:
A good debugger is invaluable regardless of your attitude
to type systems.


I found that certain language features help greatly in pinning
the errors, when programming in my own impure fp language
(PILS).

Originally, I implemented a single-stepping debugger for the
language, but I trashed it for three reasons, of which two are
rather uninteresting here: it messed up the GUI system, and the
evaluation order of PILS made single-stepping a confusing
experience.

The interesting reason is: I didn't need single-stepping, partly
because the programming system is unit-test friendly, partly
because of language features that seem to mimic the concept
of taking responsibility.

Basically, the PILS execution mechanism is pessimistic. When
a function is called, the assumption is it won't work, and the
only way the function can return a result is by throwing an
exception. So, if a function call doesn't throw an exception,
it has failed and execution will stop unless the function call
is explicitly marked as fault tolerant, as in

f (x) else "undefined"

This has been merged with tail call flattening - rather than
throwing the result, an (expression, binding) thing is thrown
(I guess .NET'ers would call it an anonymous parameterless
delegate), and the expression is then evaluated after the throw.

Mimickally speaking, when a function body performs this
throw, it takes responsibility for getting the job done, and
if it fails, it will suffer the peril of having its guts exposed
in public by the error message window - much like certain
medieval penal systems. I will spare you the gory details,
just let me add that to honor the demands of modern
bureaucracy, I added a "try the other office" feature so that
there are ways of getting things done without ever taking
responsibility.

Strangely, this all started 20 years ago as I struggled
implementing an early PILS version on an 8-bit Z80
processor, at a blazing 4 MHz. I needed a fast way of
escaping from a failing pattern match, and it turned
out the conditional return operation of this particular
processor was the fastest I could get. Somehow, the
concept of "if it returns, it's bad" crept into the language
design. I was puzzled by its expressiveness and the ease
of using it, only years later did I realize that its ease
of use came from the mimicking of responsibility.

I'm still puzzled that the notion of mimicking a cultural/
social concept came from a microprocessor instruction.
It seems like I - and perhaps computer language designers
in general - have a blind spot. We dig physical objects and
mathematical theories, and don't take hints from the
"soft sciences"...

Perhaps I'm over-generalizing, it just strikes me that the
dispute always seems to stand between math versus object
orientation, with seemingly no systematic attempts at
utilizing insights from the study of languages and cultural
concepts. It's as if COBOL and VB scared us from
ever considering the prospect of making programming
languages readable to the uninitiated.

Jun 27 '06 #490
David Hopwood wrote:
Marshall wrote:
The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable?


In a statically typed language that has a "dynamic" type, all
dynamically typed programs are straightforwardly expressible.


What about programs where the types change at runtime?
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 27 '06 #491
Marshall wrote:
Joe Marshall wrote:
Marshall wrote:
It isn't clear to me which programs we would have to give up, either.
I don't have much experience in sophisticated typed languages. It is
rather easy to find programs that baffle an unsophisticated typed
language (C, C++, Java, etc.).


C and Java, certainly, but I'm wary these days about making
any statement about limitations on C++'s type system, for it is
subtle and quick to anger.
Looking back in comp.lang.lisp, I see these examples:

(defun noisy-apply (f arglist)
(format t "I am now about to apply ~s to ~s" f arglist)
(apply f arglist))

(defun blackhole (argument)
(declare (ignore argument))
#'blackhole)

The noisy-apply function I think I understand; it's generic on the
entire arglist. In fact, if I read it correctly, it's even generic
on the *arity* of the function, which is actually pretty impressive.
True? This is an issue I've been wrestling with in my own type
system investigations: how to address genericity across arity.

Does noisy-apply get invoked the same way as other functions?
That would be cool.

As to the black hole function, could you explain it a bit? I apologize
for my lisp-ignorance. I am sure there is a world of significance
in the # ' on the third line, but I have no idea what it is.


You can ignore the #'. In Scheme this is written as follows:

(define (blackhole argument)
  blackhole)

It just means that the function blackhole returns the function blackhole.
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 27 '06 #492

Marshall wrote:
Joe Marshall wrote:
Looking back in comp.lang.lisp, I see these examples:

(defun noisy-apply (f arglist)
(format t "I am now about to apply ~s to ~s" f arglist)
(apply f arglist))

(defun blackhole (argument)
(declare (ignore argument))
#'blackhole)

But wait a sec. It seems that these were examples I invented in
response to the same question from you!
Ah, how well I remember that thread, and how little I got from it.

My memories of that thread are
1) the troll who claimed that Java was unable to read two numbers from
standard input and add them together.
2) The fact that all the smart people agreed the blackhole
function indicated something profound.
3) The fact that I didn't understand the black hole function.

The noisy-apply function I think I understand; it's generic on the
entire arglist. In fact, if I read it correctly, it's even generic
on the *arity* of the function, which is actually pretty impressive.
True?


Yes.
This is an issue I've been wrestling with in my own type
system investigations: how to address genericity across arity.

Does noisy-apply get invoked the same way as other functions?
That would be cool.
Yes, but in testing I found an incompatibility. Here's a better
definition:

(defun noisy-apply (f &rest args)
(format t "~&Applying ~s to ~s." f (apply #'list* args))
(apply f (apply #'list* args)))

CL-USER> (apply #'+ '(2 3))
5

CL-USER> (noisy-apply #'+ '(2 3))
Applying #<function + 20113532> to (2 3).
5

The internal application of list* flattens the argument list. The
Common Lisp APPLY function has variable arity and this change makes my
NOISY-APPLY behave identically.
As to the black hole function, could you explain it a bit?
It is a function that takes any argument and returns the function
itself.
I apologize
for my lisp-ignorance. I am sure there is a world of significance
in the # ' on the third line, but I have no idea what it is.


The #' is a namespace operator. Since functions and variables have
different namespaces, I'm indicating that I wish to return the function
named blackhole rather than any variable of the same name that might be
around.

Jun 27 '06 #493
Joe Marshall wrote:
It isn't clear to me which programs we would have to give up, either.
I don't have much experience in sophisticated typed languages. It is
rather easy to find programs that baffle an unsophisticated typed
language (C, C++, Java, etc.).

Looking back in comp.lang.lisp, I see these examples:

(defun noisy-apply (f arglist)
(format t "I am now about to apply ~s to ~s" f arglist)
(apply f arglist))
'noisy-apply' is typeable using either of these systems:

<http://www.cs.jhu.edu/~pari/papers/fool2004/first-class_FOOL2004.pdf>
<http://www.coli.uni-saarland.de/publikationen/softcopies/Muller:1998:TIF.pdf>

(it should be easy to see the similarity to a message forwarder).
(defun blackhole (argument)
(declare (ignore argument))
#'blackhole)


This is typeable in any system with universally quantified types (including
most practical systems with parametric polymorphism); it has type
"forall a . a -> #'blackhole".
The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable?


Certainly! As soon as you can reflect on the type system you can
construct programs that type-check iff they fail to type-check.


A program that causes the type checker not to terminate (which is the
effect of trying to construct such a thing in the type-reflective systems
I know about) is hardly useful.

In any case, I think the question implied the condition: "and that you
can write in a language with no static type checking."

[...]
If you allow Turing Complete type systems, then I would say no--every
bug can be reformulated as a type error. If you require your type
system to be less powerful, then some bugs must escape it.


I don't think so. Even with a Turing complete type system, a program's
runtime behavior is still something different from its static behavior.
(This is the other side of the "types are not tags" issue--not only
is it the case that there are things static types can do that tags
can't, it is also the case that there are things tags can do that
static types can't.)


I agree. The point of static checking is to *not* run the program. If
the type system gets too complicated, it may be a de-facto interpreter.


The following LtU post:

<http://lambda-the-ultimate.org/node/1085>

argues that types should be expressed in the same language that is being
typed. I am not altogether convinced, but I can see the writer's point.

There do exist practical languages in which types can be defined as
arbitrary predicates or coercion functions, for example E (www.erights.org).
E implementations currently perform type checking only at runtime, however,
although it was intended to allow static analysis for types that could be
expressed in a conventional static type system. I think the delay in
implementing this has more to do with the E developers' focus on other
(security and concurrency) issues, rather than an inherent difficulty.

--
David Hopwood <da******************@blueyonder.co.uk>
Jun 27 '06 #494
Pascal Costanza wrote:
David Hopwood wrote:
Marshall wrote:
The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable?


In a statically typed language that has a "dynamic" type, all
dynamically typed programs are straightforwardly expressible.


What about programs where the types change at runtime?


Staged compilation is perfectly compatible with static typing.
Static typing only requires that it be possible to prove absence
of some category of type errors when the types are known; it
does not require that all types are known before the first-stage
program is run.

There are, however, staged compilation systems that guarantee that
the generated program will be typeable if the first-stage program
is.

(It's clear that to compare the expressiveness of statically and
dynamically typed languages, the languages must be comparable in
other respects than their type system. Staged compilation is the
equivalent feature to 'eval'.)

--
David Hopwood <da******************@blueyonder.co.uk>
Jun 27 '06 #495

QCD Apprentice wrote:
Joe Marshall wrote:
Marshall wrote:

The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable?


Certainly! As soon as you can reflect on the type system you can
construct programs that type-check iff they fail to type-check.


Sorry, but can you elaborate on this last point a little? I think I
missed something.


This is just the halting problem lifted up into the type domain.

Jun 27 '06 #496
George Neuner wrote:
That was interesting, but the authors' method still involves runtime
checking of the array bounds. IMO, all they really succeeded in doing
was turning the original recursion into CPS and making the code a
little bit clearer.


Hmm. There is a comparison in both the DML and Haskell code, but
that's just there to make the binary search terminate correctly. The
point of the exercise is the line in the DML code "val m = (hi + lo)
div 2", or the "bmiddle" function in the Haskell code. If you change
that line to something like "val m = (hi + lo)" you'll get a compiler
error complaining about an unsolved constraint. The point of the
Haskell code is to use the type system to ensure that array indices
have a known provenance. They're derived from and statically associated
with each individual array, so you can't use an index from one array
with a different array. You'll probably also enjoy the paper...

"Eliminating Array Bound Checking Through Dependent Types"
http://citeseer.ist.psu.edu/xi98eliminating.html

...and DML itself...

http://www.cs.bu.edu/~hwxi/DML/DML.html
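
Very loosely, the Haskell trick is to brand an array and its valid indices
with a phantom type, so an index cannot be used with a different array. A
much-simplified sketch of the idea (names invented; unlike the paper, this
version still performs Data.Array's own runtime check in (!)):

{-# LANGUAGE RankNTypes #-}
import Data.Array

newtype BArr s e = BArr (Array Int e)  -- the phantom s brands one array
newtype BIx s    = BIx Int             -- an index known valid for brand s

-- The rank-2 type keeps the brand from leaking, so indices for one
-- array cannot escape to another.
withArray :: Array Int e -> (forall s. BArr s e -> BIx s -> BIx s -> r) -> r
withArray a k = k (BArr a) (BIx lo) (BIx hi)
  where (lo, hi) = bounds a  -- assumes a non-empty array

-- The midpoint of two valid indices is again a valid index.
bmiddle :: BIx s -> BIx s -> BIx s
bmiddle (BIx i) (BIx j) = BIx ((i + j) `div` 2)

(!.) :: BArr s e -> BIx s -> e
(BArr a) !. (BIx i) = a ! i

main :: IO ()
main = print (withArray (listArray (0, 4) "hello")
                        (\a lo hi -> a !. bmiddle lo hi))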

Greg Buchholz

Jun 27 '06 #497
Pascal Costanza <pc@p-cos.net> wrote:
You can ignore the #'. In Scheme this as follows:

(define blackhole (argument)
blackhole)

It just means that the function blackhole returns the function blackhole.


So, in other words, it's equivalent to (Y (\fa.f)) in lambda calculus,
where \ is lambda, and Y is the fixed point combinator?

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 27 '06 #498
Joe Marshall wrote:
It isn't clear to me which programs we would have to give up, either.
I don't have much experience in sophisticated typed languages. It is
rather easy to find programs that baffle an unsophisticated typed
language (C, C++, Java, etc.).

Looking back in comp.lang.lisp, I see these examples:

(defun noisy-apply (f arglist)
(format t "I am now about to apply ~s to ~s" f arglist)
(apply f arglist))


Just for fun, here is how you might do "apply" in Haskell...

{-# OPTIONS -fglasgow-exts #-}

class Apply x y z | x y -> z where
apply :: x -> y -> z

instance Apply (a->b) a b where
apply f x = f x

instance Apply b as c => Apply (a->b) (a,as) c where
apply f (x,xs) = apply (f x) xs

f :: Int -> Double -> String -> Bool -> Int
f x y z True = x + floor y * length z
f x y z False= x * floor y + length z

args = (1::Int,(3.1415::Double,("flub",True)))

main = print $ apply f args
...which depends on the fact that functions are automatically curried
in Haskell. You could do something similar for fully curried function
objects in C++. Also of possible interest...

Functions with the variable number of (variously typed) arguments
http://okmij.org/ftp/Haskell/types.html#polyvar-fn

...But now I'm curious about how to create the "apply" function in a
language like Scheme. I suppose you could do something like...

(define (apply fun args)
(eval (cons fun args)))

....but "eval" seems a little like overkill. Is there a better way?
Greg Buchholz

Jun 27 '06 #499

David Hopwood wrote:
Joe Marshall wrote:
(defun blackhole (argument)
(declare (ignore argument))
#'blackhole)


This is typeable in any system with universally quantified types (including
most practical systems with parametric polymorphism); it has type
"forall a . a -> #'blackhole".


The #' is just a namespace operator. The function accepts a single
argument and returns the function itself. You need a type that is a
fixed point (in addition to the universally quantified argument).
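
Right: with a recursive newtype supplying that fixed point, it does go
through in Haskell. A sketch (the wrapper name is invented):

{-# LANGUAGE RankNTypes #-}

-- The newtype ties the recursive knot that a bare type of the form
-- "forall a . a -> <itself>" cannot express directly.
newtype BlackHole = BlackHole (forall a. a -> BlackHole)

blackhole :: BlackHole
blackhole = BlackHole (\_ -> blackhole)

-- Feed it anything, any number of times; a black hole comes back.
swallow :: BlackHole -> a -> BlackHole
swallow (BlackHole f) = f

main :: IO ()
main = case swallow (swallow blackhole (42 :: Int)) "more" of
         BlackHole _ -> putStrLn "still a black hole"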
The real question is, are there some programs that we
can't write *at all* in a statically typed language, because
they'll *never* be typable?


Certainly! As soon as you can reflect on the type system you can
construct programs that type-check iff they fail to type-check.


A program that causes the type checker not to terminate (which is the
effect of trying to construct such a thing in the type-reflective systems
I know about) is hardly useful.

In any case, I think the question implied the condition: "and that you
can write in a language with no static type checking."


Presumably the remainder of the program does something interesting!

The point is that there exist (by construction) programs that can
never be statically checked. The next question is whether such
programs are `interesting'. Certainly a program that is simply
designed to contradict the type checker is of limited entertainment
value, but there should be programs that are independently
non-checkable against a sufficiently powerful type system.

Jun 27 '06 #500
