Bytes IT Community

What is Expressiveness in a Computer Language

In March, I posted an essay “What is Expressiveness in a Computer
Language”, archived at:
http://xahlee.org/perl-python/what_i...esiveness.html

I was informed then that there is an academic paper written on this
subject.

On the Expressive Power of Programming Languages, by Matthias
Felleisen, 1990.
http://www.ccs.neu.edu/home/cobbe/pl...ive-slides.pdf

Has anyone read this paper? And, would anyone be interested in giving a
summary?

thanks.

Xah
xa*@xahlee.org
http://xahlee.org/

Jun 9 '06 #1
669 Replies



Xah Lee wrote:
In March, I posted an essay "What is Expressiveness in a Computer
Language", archived at:
http://xahlee.org/perl-python/what_i...esiveness.html

I was informed then that there is an academic paper written on this
subject.

On the Expressive Power of Programming Languages, by Matthias
Felleisen, 1990.
http://www.ccs.neu.edu/home/cobbe/pl...ive-slides.pdf

Has anyone read this paper? And, would anyone be interested in giving a
summary?


The gist of the paper is this: Some computer languages seem to be
`more expressive' than others. But anything that can be computed in one
Turing-complete language can be computed in any other Turing-complete
language. Clearly the notion of expressiveness isn't concerned with
ultimately computing the answer.

Felleisen's paper puts forth a formal definition of expressiveness in
terms of semantic equivalences of small, local constructs. In his
definition, wholesale program transformation is disallowed, so you
cannot appeal to Turing completeness to claim program equivalence.

Expressiveness isn't necessarily a good thing. For instance, in C, you
can express the addresses of variables by using pointers. You cannot
express the same thing in Java, and most people consider this to be a
good idea.

Jun 9 '06 #2

<I've removed the massive cross-posting - I wouldn't presume this message is
all that interesting to folks in those other NG's, and I'm sure they'd be
saying, "who the heck is Paul McGuire, and who gives a @#*$! what he
thinks?">

"Joe Marshall" <ev********@gmail.com> wrote in message
news:11*********************@h76g2000cwa.googlegroups.com...

Expressiveness isn't necessarily a good thing. For instance, in C, you
can express the addresses of variables by using pointers. You cannot
express the same thing in Java, and most people consider this to be a
good idea.


For those who remember the bad old days of COBOL, its claim to fame was that
it was more like English prose, with the intent of making a programming
language that was as readable as English, assuming that this was more
"expressive", and not requiring as much of a mental mapping exercise for
someone trying to "read" a program. Even the language terminology itself
strived for this: statements were "sentences"; blocks were "paragraphs".
The sentence model may have ended up being one of COBOL's Achilles'
heels: the placement of terminating periods for an IF THEN ELSE block
was crucial for disambiguating which ELSE went with which IF.
Unfortunately, periods
are one of the least visible printed characters, and an extra or missing
period could cause hours of extra debugging.

(Of course, at the time of COBOL's inception, the only primary languages to
compare with were assembly or FORTRAN-60, so this idea wasn't totally
unfounded.)

-- Paul
Jun 9 '06 #3

Joe Marshall wrote:
Xah Lee wrote:
On the Expressive Power of Programming Languages, by Matthias
Felleisen, 1990.
http://www.ccs.neu.edu/home/cobbe/pl...ive-slides.pdf
The gist of the paper is this: Some computer languages seem to be
`more expressive' than others. But anything that can be computed in
one Turing complete language can be computed in any other Turing
complete language. Clearly the notion of expressiveness isn't
concerned with ultimately computing the answer.

Felleisen's paper puts forth a formal definition of expressiveness in
terms of semantic equivalences of small, local constructs. In his
definition, wholesale program transformation is disallowed, so you
cannot appeal to Turing completeness to claim program equivalence.


I suspect that the distinction between small, local transformations and
global transformations also has to do with the practice of not saying
the same thing twice. Everything from subroutines to LISP macros helps
here, increasing language expressiveness.
Expressiveness isn't necessarily a good thing. For instance, in C,
you can express the addresses of variables by using pointers. You
cannot express the same thing in Java, and most people consider this
to be a good idea.


Assuming the more-expressive feature does not preclude the
less-expressive one, good/bad depends on the programmer. I know *I*
can't be trusted with pointers ;-) , but I know many programmers benefit
greatly from them. Of course, knowing that the programmer cannot do
something does help the compiler stop you shooting yourself in the foot.

--
Simon Richard Clarkstone: s.************@durham.ac.uk/s?m?n.cl?rkst?n?@
hotmail.com ### "I have a spelling chequer / it came with my PC /
it plainly marks for my revue / Mistake's I cannot sea" ...
by: John Brophy (at: http://www.cfwf.ca/farmj/fjjun96/)
Jun 9 '06 #4

Xah Lee wrote:
[the usual off-topic trolling stuff]

Shit, da troll is back. Abuse reports need to go to
abuse [] pacbell.net and abuse [] swbell.net this time.

Jun 9 '06 #5

Xah Lee wrote:
Has anyone read this paper? And, would anyone be interested in giving a
summary?


Not you, of course. Too busy preparing the next diatribe against UNIX,
Perl, etc. ;)

Jun 9 '06 #6



Joe Marshall wrote:
The gist of the paper is this: Some computer languages seem to be
`more expressive' than others. But anything that can be computed in one
Turing-complete language can be computed in any other Turing-complete
language. Clearly the notion of expressiveness isn't concerned with
ultimately computing the answer.

Felleisen's paper puts forth a formal definition of expressiveness in
terms of semantic equivalences of small, local constructs. In his
definition, wholesale program transformation is disallowed, so you
cannot appeal to Turing completeness to claim program equivalence.

Expressiveness isn't necessarily a good thing. For instance, in C, you
can express the addresses of variables by using pointers. You cannot
express the same thing in Java, and most people consider this to be a
good idea.


Thanks for the summary.

Me, I would like to see a definition of expressiveness that would
exclude a programming mechanism from "things to be expressed".

If the subject is programmer productivity, well, I write programs to get
some behavior out of them, such as operating an ATM cash dispenser. If I
need to keep a list of transactions, I need to express the abstraction
"list" in some data structure or other, but below that level of
abstraction I am just hacking code, not expressing myself -- well, that
is the distinction for which I am arguing.

Heck, in this case I will even give you as a "thing to express" getting
back multiple values from a function. That comes up all the time, and it
can be an aggravation or a breeze. But then I would score C down because
it does not really return multiple values. One still has some heavy
lifting to do to fake the expressed thing. But I would still give it an
edge over Java, because Java's fakery would have to be a composite object:
one could not have a primary return value as the function result and
ancillary values "somewhere else".

kt

--
Cells: http://common-lisp.net/project/cells/

"I'll say I'm losing my grip, and it feels terrific."
-- Smiling husband to scowling wife, New Yorker cartoon
Jun 9 '06 #7

hi Joe,

Joe Marshall wrote:
« Expressiveness isn't necessarily a good thing. For instance, in C,
you can express the addresses ...»

We gotta be careful here, because soon we're gonna say binaries are the
most expressive. For instance, in assembly, you can express the
registers and stuff.

Expressiveness, with respect to — for lack of refined terms —
semantics, is a good thing, period. When discussing a language's
semantic expressiveness, it goes without saying that a “domain” is
understood, or needs to be defined. This is almost never mentioned
because it is very difficult. Put it in driveler's chant for better
understanding: we can't “compare apples with oranges”.

Let me give an example. Let's say I invented a language where there's
no addition of numbers, but phaserfy and realify, with respective
operators ph and re. So, in my language, to do 1+2, you write “ph 1
re ph 2”, which means: phaserfy 1 and phaserfy 2, then realify
their results, which results in 3. Now, this language is the most
expressive, because it can deal with concepts of phaserfy and realify
that no other lang can.

This may seem ridiculous, but it is in fact what a lot of imperative
languages do. I won't go long here, but for instance, the addresses or
references of C and Perl are such. And in Java and a few other OOP
langs, there are “iterator” and “enumerator” things, which are likewise
immaterial.

As to OOP's iterator and enumerator things, and the general perspective
of extraneous concepts in languages, I'll have to write an essay in
detail some other day.

----

Thanks for the summary.

Is there no one else who is able to read that paper?

Xah
xa*@xahlee.org
http://xahlee.org/

Jun 14 '06 #8

"Joe Marshall" <ev********@gmail.com> writes:

On the Expressive Power of Programming Languages, by Matthias
Felleisen, 1990.
http://www.ccs.neu.edu/home/cobbe/pl...ive-slides.pdf
The gist of the paper is this: Some computer languages seem to be
`more expressive' than others. But anything that can be computed in
one Turing complete language can be computed in any other Turing
complete language. Clearly the notion of expressiveness isn't
concerned with ultimately computing the answer.

Felleisen's paper puts forth a formal definition of expressiveness
in terms of semantic equivalences of small, local constructs. In
his definition, wholesale program transformation is disallowed, so
you cannot appeal to Turing completeness to claim program
equivalence.


I think expressiveness is more subtle than this. Basically, it boils
down to: "How quickly can I write a program to solve my problem?".

There are several aspects relevant to this issue, some of which are:

- Compactness: How much do I have to type to do what I want?

- Naturality: How much effort does it take to convert the concepts of
my problem into the concepts of the language?

- Feedback: Will the language provide sensible feedback when I write
nonsensical things?

- Reuse: How much effort does it take to reuse/change code to solve a
similar problem?

Compactness is hard to measure. It isn't really about the number of
characters needed in a program, as I don't think one-character symbols
instead of longer keywords make a language more expressive. It is
better to count lexical units, but if there are too many different
predefined keywords and operators, this isn't reasonable either.
Also, the presence of opaque one-liners doesn't make a language
more expressive. Additionally, as mentioned above, Turing-completeness
(TC) allows you to implement any TC language in any other, so above a
certain size, the choice of language doesn't affect size. But
something like (number of symbols in program)/log(number of different
symbols) is not too bad. If programs are allowed to use standard
libraries, the identifiers in the libraries should be counted in the
number of different symbols.

Naturality is very difficult to get a grip on, and it strongly depends
on the type of problem you want to solve. So it only makes sense to
talk about expressiveness relative to a set of problem domains. If
this set is small, domain-specific languages win hands down, so if you
want to compare expressiveness of general-purpose languages, you need
a large set of very different problems. And even with a single
problem, it is hard to get an objective measure of how difficult it is
to map the problem's concepts to those of the language. But you can
normally observe whether you need to overspecify the concept (i.e.,
you are required to make arbitrary decisions when mapping from concept
to data), if the mapping is onto (i.e., can you construct data that
isn't sensible in the problem domain) and how much redundancy your
representation has.

Feedback is a mixture of several things. Partly, it is related to
naturality, as a close match between problem concepts and language
concepts makes it less likely that you will express nonsense (relative
to the problem domain) that makes sense in the language. For example,
if you have to code everything as natural numbers, untyped pure lambda
calculus or S-expressions, there is a good chance that you can get
nonsense past the compiler. Additionally, it is about how difficult
it is to tie an observation about a computed result to a point in the
program.

Measuring reuse depends partly on what is meant by problems being
similar and also on whether you at the time you write the original
code can predict what types of problems you might later want to solve,
i.e., if you can prepare the code for reuse. Some languages provide
strong mechanisms for reuse (templates, object hierarchies, etc.), but
many of those require that you can predict how the code is going to be
reused. So, maybe, you should measure how difficult it is to reuse a
piece of code that is _not_ written with reuse in mind.

This reminds me a bit of last year's ICFP contest, where part of the
problem was adapting to a change in specification after the code was
written.
Expressiveness isn't necessarily a good thing. For instance, in C,
you can express the addresses of variables by using pointers. You
cannot express the same thing in Java, and most people consider this
to be a good idea.


I think this is pretty much covered by the above points on naturality
and feedback: Knowing the address of a value or object is an
overspecification unless the address maps back into something in the
problem domain.

On a similar note, is a statically typed language more or less
expressive than a dynamically typed language? Some would say less, as
you can write programs in a dynamically typed language that you can't
compile in a statically typed language (without a lot of encoding),
whereas the converse isn't true. However, I think this is misleading,
as it ignores the feedback issue: It takes longer for the average
programmer to get the program working in the dynamically typed
language.

Torben
Jun 14 '06 #9

On 2006-06-14 09:42:25 -0400, to*****@app-1.diku.dk (Torben Ægidius
Mogensen) said:
It takes longer for the average
programmer to get the program working in the dynamically typed
language.


Though I agree with much of your post, I would say that many here find
the opposite to be true: it takes us longer to get a program working
in a statically typed language, because we have to keep adding/changing
things to get the compiler to stop complaining and actually compile and
run a program which would be perfectly permissible in a dynamically
typed language such as Common Lisp -- for example, heterogeneous lists
and forward references to as-yet nonexistent functions.

Jun 14 '06 #10

Torben Ægidius Mogensen wrote:
On a similar note, is a statically typed language more or less
expressive than a dynamically typed language? Some would say less, as
you can write programs in a dynamically typed language that you can't
compile in a statically typed language (without a lot of encoding),
whereas the converse isn't true. However, I think this is misleading,
as it ignores the feedback issue: It takes longer for the average
programmer to get the program working in the dynamically typed
language.

From the point of view purely of expressiveness, I'd say it's rather
different.

If a language can express constraints of one kind, that is an increase
in expressiveness.
If a language requires constraints to be expressed in one particular
way, that's a decrease in expressiveness.

So I would say languages that can be statically typed and can be
dynamically typed are the most expressive. Languages that require
static typing or are dynamic but cannot express static typing are less
expressive.

This matches my experience of what's useful in practice, too: static
typing for everything is painful when writing simple code. Pure dynamic
typing is painful when writing complex code, because it makes impossible
a layer of error checking that could otherwise be useful.

Jun 14 '06 #11

Torben Ægidius Mogensen wrote:
For example,
if you have to code everything as natural numbers, untyped pure lambda
calculus or S-expressions, there is a good chance that you can get
nonsense past the compiler.


Also past peer review and/or debugging runs. And, most importantly, past
your own mental review processes.

Regards,
Jo
Jun 14 '06 #12

Raffael Cavallaro wrote:
On 2006-06-14 09:42:25 -0400, to*****@app-1.diku.dk (Torben Ægidius
Mogensen) said:
It takes longer for the average
programmer to get the program working in the dynamically typed
language.
Though I agree with much of your post I would say that many here find
the opposite to be true - it takes us longer to get a program working in
a statically typed language because we have to keep adding/changing
things to get the compiler to stop complaining and actually compile and
run


I think Torben was assuming a language with type inference. You write
only those type annotations that really carry meaning (and some people
let the compiler infer even these).
a program which would be perfectly permissible in a dynamically
typed language such as common lisp - for example - heterogeneous lists
and forward references to as yet non-existent functions.


Um... heterogeneous lists are not necessarily a sign of expressiveness.
The vast majority of cases can be transformed to homogeneous lists
(though these might then contain closures or OO objects).

As to references to nonexistent functions - heck, I never missed these,
not even in languages without type inference :-)

I don't hold that they are a sign of *in*expressiveness either. They are
just typical of highly dynamic programming environments such as Lisp or
Smalltalk.

Regards,
Jo
Jun 14 '06 #13

Rob Thorpe wrote:

If a language can express constraints of one kind, that is an increase
in expressiveness.
Agreed.
If a language requires constraints to be expressed in one particular
way, that's a decrease in expressiveness.
Unless alternatives would be redundant.
Having redundant ways to express the same thing doesn't make a language
more or less expressive (but programs written in it become more
difficult to maintain).
So I would say languages that can be statically typed and can be
dynamically typed are the most expressive. Languages that require
static typing or are dynamic but cannot express static typing are less
expressive.


Note that this is a different definition of expressiveness.
(The term is very diffuse...)

I think Felleisen's paper defines something that should be termed
"conciseness".
Whether there's a way to express constraints or other static properties
of the software is something different. I don't have a good word for it,
but "expressiveness" covers too much for my taste to really fit.

Regards,
Jo
Jun 14 '06 #14

Torben Ægidius Mogensen wrote:
On a similar note, is a statically typed language more or less
expressive than a dynamically typed language? Some would say less, as
you can write programs in a dynamically typed language that you can't
compile in a statically typed language (without a lot of encoding),
whereas the converse isn't true.
It's important to get the levels right here: A programming language with
a rich static type system is more expressive at the type level, but less
expressive at the base level (for some useful notion of expressiveness ;).
However, I think this is misleading,
as it ignores the feedback issue: It takes longer for the average
programmer to get the program working in the dynamically typed
language.


This doesn't seem to capture what I hear from Haskell programmers who
say that it typically takes quite a while to convince the Haskell
compiler to accept their programs. (They perceive this to be worthwhile
because of some benefits wrt correctness they claim to get in return.)
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 14 '06 #15

Joachim Durchholz <jo@durchholz.org> writes:
Raffael Cavallaro wrote:
a program which would be perfectly permissible in a dynamically
typed language such as common lisp - for example - heterogeneous
lists and forward references to as yet non-existent functions.


Um... heterogeneous lists are not necessarily a sign of
expressiveness. The vast majority of cases can be transformed to
homogeneous lists (though these might then contain closures or OO
objects).


In Lisp, all lists are homogeneous: lists of T.
--
__Pascal Bourguignon__ http://www.informatimago.com/

ADVISORY: There is an extremely small but nonzero chance that,
through a process known as "tunneling," this product may
spontaneously disappear from its present location and reappear at
any random place in the universe, including your neighbor's
domicile. The manufacturer will not be responsible for any damages
or inconveniences that may result.
Jun 14 '06 #16

In article <4f*************@individual.net>, Pascal Costanza wrote:
Torben Ægidius Mogensen wrote:
On a similar note, is a statically typed language more or less
expressive than a dynamically typed language? Some would say less, as
you can write programs in a dynamically typed language that you can't
compile in a statically typed language (without a lot of encoding),
whereas the converse isn't true.
It's important to get the levels right here: A programming language
with a rich static type system is more expressive at the type level,
but less expressive at the base level (for some useful notion of
expressiveness ;).


This doesn't seem obviously the case to me. If you have static
information about your program, the compiler can use this information
to automate a lot of grunt work away.

Haskell's system of typeclasses works this way. If you tell the
compiler how to print integers, and how to print lists, then when you
call a print function on a list of list of integers, then the compiler
will automatically figure out the right print function using your base
definitions. This yields an increase in Felleisen-expressiveness over
a dynamically typed language, because you would need to globally
restructure your program to achieve a similar effect.

More dramatic are the "polytypic" programming languages, which let you
automate even more by letting you write generic map, fold, and print
functions which work at every type.
This doesn't seem to capture what I hear from Haskell programmers
who say that it typically takes quite a while to convince the
Haskell compiler to accept their programs. (They perceive this to be
worthwhile because of some benefits wrt correctness they claim to
get in return.)


This is true, if you are a novice learning the language, or if you are
an expert programming with good style.

If you encode your invariants in the types, then type errors will
signal broken invariants. But: learning how to use the type system to
encode significantly complex invariants (eg, that an abstract syntax
tree representing an HTML document actually satisfies all of the
constraints on valid HTML) takes experience to do well.

--
Neel Krishnaswami
ne***@cs.cmu.edu
Jun 14 '06 #17

On 2006-06-14 15:04:34 -0400, Joachim Durchholz <jo@durchholz.org> said:
Um... heterogeneous lists are not necessarily a sign of expressiveness.
The vast majority of cases can be transformed to homogeneous lists
(though these might then contain closures or OO objects).

As to references to nonexistent functions - heck, I never missed these,
not even in languages without type inference :-)

I don't hold that they are a sign of *in*expressiveness either. They
are just typical of highly dynamic programming environments such as
Lisp or Smalltalk.


This is a typical static type advocate's response when told that users
of dynamically typed languages don't want their hands tied by a type
checking compiler:

"*I* don't find those features expressive so *you* shouldn't want them."

You'll have to excuse us poor dynamically typed language rubes - we
find these features expressive and we don't want to give them up just
to silence a compiler whose static type checks are of dubious value in
a world where user inputs of an often unpredictable nature can come at
a program from across a potentially malicious internet making run-time
checks a practical necessity.

Jun 15 '06 #18

On 2006-06-14 16:36:52 -0400, Pascal Bourguignon <pj*@informatimago.com> said:
In Lisp, all lists are homogeneous: lists of T.


CL-USER 123 > (loop for elt in (list #\c 1 2.0d0 (/ 2 3)) collect
(type-of elt))
(CHARACTER FIXNUM DOUBLE-FLOAT RATIO)

i.e., "heterogeneous" in the Common Lisp sense: having different dynamic
types, not in the H-M sense in which all Lisp values are of the single
union type T.

Jun 15 '06 #19

Neelakantan Krishnaswami wrote:
In article <4f*************@individual.net>, Pascal Costanza wrote:
Torben Ægidius Mogensen wrote:
On a similar note, is a statically typed language more or less
expressive than a dynamically typed language? Some would say less, as
you can write programs in a dynamically typed language that you can't
compile in a statically typed language (without a lot of encoding),
whereas the converse isn't true.

It's important to get the levels right here: A programming language
with a rich static type system is more expressive at the type level,
but less expressive at the base level (for some useful notion of
expressiveness ;).


This doesn't seem obviously the case to me. If you have static
information about your program, the compiler can use this information
to automate a lot of grunt work away.

Haskell's system of typeclasses works this way. If you tell the
compiler how to print integers, and how to print lists, then when you
call a print function on a list of list of integers, then the compiler
will automatically figure out the right print function using your base
definitions. This yields an increase in Felleisen-expressiveness over
a dynamically typed language, because you would need to globally
restructure your program to achieve a similar effect.

More dramatic are the "polytypic" programming languages, which let you
automate even more by letting you write generic map, fold, and print
functions which work at every type.


Yes, but these decisions are taken at compile time, without running the
program.
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 15 '06 #20

Raffael Cavallaro <raffaelcavallaro@pas-d'espam-s'il-vous-plait-mac.com> writes:
On 2006-06-14 16:36:52 -0400, Pascal Bourguignon <pj*@informatimago.com> said:
In Lisp, all lists are homogeneous: lists of T.


CL-USER 123 > (loop for elt in (list #\c 1 2.0d0 (/ 2 3)) collect
(type-of elt))
(CHARACTER FIXNUM DOUBLE-FLOAT RATIO)

i.e., "heterogeneous" in the Common Lisp sense: having different
dynamic types, not in the H-M sense in which all Lisp values are of
the single union type T.


What's the difference? Dynamically typed values _are_ all members of
a single tagged union type. The main difference is that the tags
aren't always visible and that there are only a fixed, predefined
number of them.

Torben

Jun 16 '06 #21

Pascal Costanza <pc@p-cos.net> writes:
Torben Ægidius Mogensen wrote:
On a similar note, is a statically typed language more or less
expressive than a dynamically typed language? Some would say less, as
you can write programs in a dynamically typed language that you can't
compile in a statically typed language (without a lot of encoding),
whereas the converse isn't true.


It's important to get the levels right here: A programming language
with a rich static type system is more expressive at the type level,
but less expressive at the base level (for some useful notion of
expressiveness ;).
However, I think this is misleading,
as it ignores the feedback issue: It takes longer for the average
programmer to get the program working in the dynamically typed
language.


This doesn't seem to capture what I hear from Haskell programmers who
say that it typically takes quite a while to convince the Haskell
compiler to accept their programs. (They perceive this to be
worthwhile because of some benefits wrt correctness they claim to get
in return.)


That's the point: bugs that in dynamically typed languages would
require testing to find are found by the compiler in a statically
typed language. So while it may take longer to get a program that
gets past the compiler, it takes less time to get a program that
works.

Torben

Jun 16 '06 #22

Raffael Cavallaro wrote:
On 2006-06-14 15:04:34 -0400, Joachim Durchholz <jo@durchholz.org> said:
Um... heterogeneous lists are not necessarily a sign of expressiveness.
The vast majority of cases can be transformed to homogeneous lists
(though these might then contain closures or OO objects).

As to references to nonexistent functions - heck, I never missed
these, not even in languages without type inference :-)

[[snipped - doesn't seem to relate to your answer]]


This is a typical static type advocate's response when told that users
of dynamically typed languages don't want their hands tied by a type
checking compiler:

"*I* don't find those features expressive so *you* shouldn't want them."


And this is a typical dynamic type advocate's response when told that
static typing has different needs:

"*I* don't see the usefulness of static typing so *you* shouldn't want
it, either."

No ad hominem arguments, please. If you find my position indefensible,
give counterexamples.
Give a heterogeneous list that would be too awkward to live with in a
statically typed language.
Give a case of calling nonexistent functions that's useful.
You'll get your point across far better that way.

Regards,
Jo
Jun 16 '06 #23

P: n/a
Torben Ægidius Mogensen wrote:
Raffael Cavallaro <raffaelcavallaro@pas-d'espam-s'il-vous-plait-mac.com> writes:
On 2006-06-14 16:36:52 -0400, Pascal Bourguignon <pj*@informatimago.com> said:
In lisp, all lists are homogenous: lists of T.

CL-USER 123 > (loop for elt in (list #\c 1 2.0d0 (/ 2 3))
                    collect (type-of elt))
(CHARACTER FIXNUM DOUBLE-FLOAT RATIO)

i.e., "heterogenous" in the common lisp sense: having different
dynamic types, not in the H-M sense in which all lisp values are of
the single union type T.


What's the difference? Dynamically typed values _are_ all members of
a single tagged union type.


Yes, but that's mostly a meaningless statement in a dynamically typed
language. In a dynamically typed language, you typically don't care
about the static types.
The main difference is that the tags
aren't always visible and that there are only a fixed, predefined
number of them.


Depending on the language, the number of "tags" is not fixed.
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 16 '06 #24

P: n/a
Torben Ægidius Mogensen wrote:
Pascal Costanza <pc@p-cos.net> writes:
Torben Ægidius Mogensen wrote:
On a similar note, is a statically typed language more or less
expressive than a dynamically typed language? Some would say less, as
you can write programs in a dynamically typed language that you can't
compile in a statically typed language (without a lot of encoding),
whereas the converse isn't true.

It's important to get the levels right here: A programming language
with a rich static type system is more expressive at the type level,
but less expressive at the base level (for some useful notion of
expressiveness ;).
However, I think this is misleading,
as it ignores the feedback issue: It takes longer for the average
programmer to get the program working in the dynamically typed
language.

This doesn't seem to capture what I hear from Haskell programmers who
say that it typically takes quite a while to convince the Haskell
compiler to accept their programs. (They perceive this to be
worthwhile because of some benefits wrt correctness they claim to get
in return.)


That's the point: Bugs that in dynamically typed languages would
require testing to find are found by the compiler in a statically
typed language.


Yes. However, unfortunately statically typed languages also reject
programs that don't have such bugs. It's a tradeoff whether you want to
spend time to deal with them or not.
So while it may take longer to get a program that gets
past the compiler, it takes less time to get a program that works.


That's incorrect. See http://haskell.org/papers/NSWC/jfp.ps - especially
Figure 3.
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 16 '06 #25

P: n/a

"Joachim Durchholz" <jo@durchholz.org> wrote in message
news:e6**********@online.de...
Raffael Cavallaro wrote:
On 2006-06-14 15:04:34 -0400, Joachim Durchholz <jo@durchholz.org> said:
Um... heterogenous lists are not necessarily a sign of expressiveness.
The vast majority of cases can be transformed to homogenous lists
(though these might then contain closures or OO objects).

As to references to nonexistent functions - heck, I never missed these,
not even in languages without type inference :-)

[[snipped - doesn't seem to relate to your answer]]

Give a heterogenous list that would be too awkward to live in a
statically-typed language.


Many lists are heterogenous, even in statically typed languages.
For instance, lisp code consists of lists with several kinds of atoms and
sub-lists.
A car dealer will sell cars, trucks and equipment.
In a statically typed language you would need to type the list on a common
ancestor. What then is the point of static typing, as you still need to
type-check each element in order to process that list? Sure, you can do
this in a statically-typed language, you just need to make sure some
relevant ancestor exists. In my experience you'll end up with the base
object class more often than not, and that's what I call dynamic typing.
Give a case of calling nonexistent functions that's useful.


I might want to test some other parts of my program before writing this
function.
Or maybe my program will compile that function depending on user input.
As long as I get a warning for calling a non-existing function, everything
is fine.

Sacha
Jun 16 '06 #26

P: n/a
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Sacha wrote:
"Joachim Durchholz" <jo@durchholz.org> wrote in message
news:e6**********@online.de...
Raffael Cavallaro wrote:
On 2006-06-14 15:04:34 -0400, Joachim Durchholz <jo@durchholz.org> said:

Um... heterogenous lists are not necessarily a sign of expressiveness.
The vast majority of cases can be transformed to homogenous lists
(though these might then contain closures or OO objects).

As to references to nonexistent functions - heck, I never missed these,
not even in languages without type inference :-)

[[snipped - doesn't seem to relate to your answer]]

Give a heterogenous list that would be too awkward to live in a
statically-typed language.


Many lists are heterogenous, even in statically typed languages.
For instance, lisp code consists of lists with several kinds of atoms and
sub-lists.
A car dealer will sell cars, trucks and equipment.
In a statically typed language you would need to type the list on a common
ancestor. What then is the point of static typing, as you still need to
type-check each element in order to process that list? Sure, you can do
this in a statically-typed language, you just need to make sure some
relevant ancestor exists. In my experience you'll end up with the base
object class more often than not, and that's what I call dynamic typing.


In my experience you won’t. I almost never have a List<Object> (Java),
and when I have one, I start thinking on how I can improve the code to
get rid of it.

H.
- --
Hendrik Maryns

==================
http://aouw.org
Ask smart questions, get good answers:
http://www.catb.org/~esr/faqs/smart-questions.html
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2 (GNU/Linux)

iD8DBQFEkofme+7xMGD3itQRAo0XAJ9DiG228ZKMLX8JH+u9X+ 6YGEHwgQCdH/Jn
kC/F/b5rbmUvUzKYIv8agis=
=iuV0
-----END PGP SIGNATURE-----
Jun 16 '06 #27

P: n/a
Torben Ægidius Mogensen wrote:
Pascal Costanza <pc@p-cos.net> writes:
Torben Ægidius Mogensen wrote:
On a similar note, is a statically typed language more or less
expressive than a dynamically typed language? Some would say less, as
you can write programs in a dynamically typed language that you can't
compile in a statically typed language (without a lot of encoding),
whereas the converse isn't true.


It's important to get the levels right here: A programming language
with a rich static type system is more expressive at the type level,
but less expressive at the base level (for some useful notion of
expressiveness ;).
However, I think this is misleading,
as it ignores the feedback issue: It takes longer for the average
programmer to get the program working in the dynamically typed
language.


This doesn't seem to capture what I hear from Haskell programmers who
say that it typically takes quite a while to convince the Haskell
compiler to accept their programs. (They perceive this to be
worthwhile because of some benefits wrt correctness they claim to get
in return.)


That's the point: Bugs that in dynamically typed languages would
require testing to find are found by the compiler in a statically
typed language. So while it may take longer to get a program that gets
past the compiler, it takes less time to get a program that works.


In my experience the opposite is true for many programs.
Having to actually know the precise type of every variable while
writing the program is not necessary, it's a detail often not relevant
to the core problem. The language should be able to take care of
itself.

In complex routines it can be useful for the programmer to give types
and for the compiler to issue errors when they are contradicted. But
for most routines it's just an unnecessary chore that the compiler
forces on the programmer.

Jun 16 '06 #28

P: n/a
Pascal Costanza <pc@p-cos.net> writes:
Torben Ægidius Mogensen wrote:

So while it may take longer to get a program that gets
past the compiler, it takes less time to get a program that works.


That's incorrect. See http://haskell.org/papers/NSWC/jfp.ps -
especially Figure 3.


There are many other differences between these languages than static
vs. dynamic types, and some of these differences are likely to be more
significant. What you need to test is languages with similar features
and syntax, except one is statically typed and the other dynamically
typed.

And since these languages would be quite similar, you can use the same
test persons: First let one half solve a problem in the statically
typed language and the other half the same problem in the dynamically
typed language, then swap for the next problem. If you let a dozen
persons each solve half a dozen problems, half in the statically typed
language and half in the dynamically typed language (using different
splits for each problem), you might get a useful figure.

Torben

Jun 16 '06 #29

P: n/a
"Rob Thorpe" <ro***********@antenova.com> writes:
Torben Ægidius Mogensen wrote:

That's the point: Bugs that in dynamically typed languages would
require testing to find are found by the compiler in a statically
typed language. So while it may take longer to get a program that gets
past the compiler, it takes less time to get a program that works.


In my experience the opposite is true for many programs.
Having to actually know the precise type of every variable while
writing the program is not necessary, it's a detail often not relevant
to the core problem. The language should be able to take care of
itself.

In complex routines it can be useful for the programmer to give types
and for the compiler to issue errors when they are contradicted. But
for most routines it's just an unnecessary chore that the compiler
forces on the programmer.


Indeed. So use a language with type inference.

Torben

Jun 16 '06 #30

P: n/a
Torben Ægidius Mogensen wrote:
Bugs that in dynamically typed languages would
require testing to find are found by the compiler in a statically
typed language. So whil[e ]it may take [l]onger to get a program that[ ]gets past the compiler, it takes less time to get a program that works.

If it were that simple, I would say: compile time type inference is the
only way to go.

--
Affijn, Ruud

"Gewoon is een tijger."
Jun 16 '06 #31

P: n/a
Torben Ægidius Mogensen wrote:
Pascal Costanza <pc@p-cos.net> writes:
Torben Ægidius Mogensen wrote:

So while it may take longer to get a program that gets
past the compiler, it takes less time to get a program that works.

That's incorrect. See http://haskell.org/papers/NSWC/jfp.ps -
especially Figure 3.


There are many other differences between these languages than static
vs. dynamic types, and some of these differences are likely to be more
significant. What you need to test is languages with similar features
and syntax, except one is statically typed and the other dynamically
typed.

And since these languages would be quite similar, you can use the same
test persons: First let one half solve a problem in the statically
typed language and the other half the same problem in the dynamically
typed language, then swap for the next problem. If you let a dozen
persons each solve half a dozen problems, half in the statically typed
language and half in the dynamically typed language (using different
splits for each problem), you might get a useful figure.


....and until then claims about the influence of static type systems on
the speed with which you can implement working programs are purely
guesswork. That's the only point I need to make to show that your
original unqualified statement, namely that it takes less time to get a
program that works, is incorrect.
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 16 '06 #32

P: n/a

Torben Ægidius Mogensen wrote:
There are several aspects relevant to this issue, some of which are:
- Compactness: How much do I have to type to do what I want?
......
- Naturality: How much effort does it take to convert the concepts of
my problem into the concepts of the language?
- Feedback: Will the language provide sensible feedback when I write
nonsensical things?
- Reuse: How much effort does it take to reuse/change code to solve a
similar problem?

......

I am fairly new to Haskell, but the compactness of the language and the
way you can express a lot in a very small amount of real estate is very
important.. I used to program back in the 80's in forth a lot.... okay
I'm a dinosaur!, but "good" definitions were usually very short, and
sweet. Unicon/Icon that I used {still do!} in the imperative world,
very compact. I will give an example that covers compact, reusable,
and because of static typing when will give back mis-type info when you
load a new "a or xs" into it to form a new function.
-- what is happening below: a is being replaced with the curried
-- function ((*) 3) and
-- xs with the list [1..6], so when that definition of f is used it is type
-- checked to see if the
-- elements in xs match the type of a, so if this were going to be
-- compiled, it would be
-- checked and guaranteed to work.

Prelude> let f a xs = putStr $ foldr (++) "\n" $ map (((++) "\n") . show . a) xs
Prelude> :t f
f :: forall a b. (Show b) => (a -> b) -> [a] -> IO ()
Prelude> f ((*) 3) [1..6]
3
6
9
12
15
18
Prelude>

another substitution of parameters.. using the same definition of f
allowed by the the
polymorphic parameters allowed in Haskell add to the versatility and
reusability angle:

Prelude> f sqrt [0.5,1.0..4]
0.7071067811865476
1.0
1.224744871391589
1.4142135623730951
1.5811388300841898
1.7320508075688772
1.8708286933869707
2.0

Same function 'f" now used with a different type..
[0.5,1.0..4] :: forall a. (Fractional a, Enum a) => [a]

I don't know, but this just makes programming fun, for me anyway, and
if it is fun, it is expressive.. I've heard this statement made about
Ruby and Unicon, to name a few... some would say Python.. but it really
applies to the functional languages too, with all their strict typing,
with the type inference mechanisms, it isn't usually that big a deal..
If you just get into it, and can learn to take some constructive
criticism from your compiler, well hey, it is a really patient
teacher... you might get frustrated at times.. but the compiler will
happily remind you of the same type mis-matches, until you get a handle
on some concept and never once complain...
Happy Programming to all!
-- gene

Jun 16 '06 #33

P: n/a
On 2006-06-16 05:22:08 -0400, Joachim Durchholz <jo@durchholz.org> said:
And this is a typical dynamic type advocate's response when told that
static typing has different needs:

"*I* don't see the usefulness of static typing so *you* shouldn't want
it, either."


But I haven't made this sort of argument. I never said you shouldn't
use static typing if you want to. There are indeed types of software
where one wants the guarantees provided by static type checks. For
example, software that controls irreplaceable or very expensive
equipment such as space craft, or software that can kill people if it
fails such as software for aircraft or medical devices. The problem for
static typing advocates is that most software is not of this type.

There is a very large class of software where user inputs are
unpredictable and/or where input data comes from an untrusted source.
In these cases run-time checks are going to be needed anyway so the
advantages of static type checking are greatly reduced - you end up
doing run-time checks anyway, precisely the thing you were trying to
avoid by doing static analysis. In software like this it isn't worth
satisfying a static type checker because you don't get much of the
benefit anyway and it means forgoing such advantages of dynamic typing
as being able to run and test portions of a program before other parts
are written (forward references to as yet nonexistent functions).

Ideally one wants a language with switchable typing - static where
possible and necessary, dynamic elsewhere. To a certain extent this is
what common lisp does but it requires programmer declarations. Some
implementations try to move beyond this by doing type inference and
alerting the programmer to potential static guarantees that the
programmer could make that would allow the compiler to do a better job.

In effect the argument comes down to which kind of typing one thinks
should be the default. Dynamic typing advocates think that static
typing is the wrong default. The notion that static typing can prove
program correctness is flawed - it can only prove that type constraints
are not violated but not necessarily that program logic is correct. It
seems to me that if we set aside that class of software where safety is
paramount - mostly embedded software such as aircraft and medical
devices - we are left mostly with efficiency concerns. The 80-20 rule
suggests that most code doesn't really need the efficiency provided by
static guarantees. So static typing should be invoked for that small
portion of a program where efficiency is really needed and that dynamic
typing should be the default elswhere. This is how common lisp works -
dynamic typing by default with static guarantees available where one
needs them.

Jun 16 '06 #34

P: n/a
On 2006-06-16 05:22:08 -0400, Joachim Durchholz <jo@durchholz.org> said:
And this is a typical dynamic type advocate's response when told that static typing has different needs: "*I* don't see the usefulness of static typing so *you* shouldn't

want it, either."

But I haven't made this sort of argument. I never said you shouldn't
use static typing if you want to. There are indeed types of software
where one wants the guarantees provided by static type checks. For
example, software that controls irreplaceable or very expensive
equipment such as space craft, or software that can kill people if it
fails such as software for aircraft or medical devices. The problem for
static typing advocates is that most software is not of this type.

There is a very large class of software where user inputs are
unpredictable and/or where input data comes from an untrusted source.
In these cases run-time checks are going to be needed anyway so the
advantages of static type checking are greatly reduced - you end up
doing run-time checks anyway, precisely the thing you were trying to
avoid by doing static analysis. In software like this it isn't worth
satisfying a static type checker because you don't get much of the
benefit anyway and it means forgoing such advantages of dynamic typing
as being able to run and test portions of a program before other parts
are written (forward references to as yet nonexistent functions).

Ideally one wants a language with switchable typing - static where
possible and necessary, dynamic elsewhere. To a certain extent this is
what common lisp does but it requires programmer declarations. Some
implementations try to move beyond this by doing type inference and
alerting the programmer to potential static guarantees that the
programmer could make that would allow the compiler to do a better job.

In effect the argument comes down to which kind of typing one thinks
should be the default. Dynamic typing advocates think that static
typing is the wrong default. The notion that static typing can prove
program correctness is flawed - it can only prove that type constraints
are not violated but not necessarily that program logic is correct. It
seems to me that if we set aside that class of software where safety is
paramount - mostly embedded software such as aircraft and medical
devices - we are left mostly with efficiency concerns. The 80-20 rule
suggests that most code doesn't really need the efficiency provided by
static guarantees. So static typing should be invoked for that small
portion of a program where efficiency is really needed and that dynamic
typing should be the default elswhere. This is how common lisp works -
dynamic typing by default with static guarantees available where one
needs them.
Jun 16 '06 #35

P: n/a
Joachim Durchholz wrote:
Give a heterogenous list that would be too awkward to live in a
statically-typed language.
Write a function that takes an arbitrary set of arguments and stores
them into a structure allocated on the heap.
Give a case of calling nonexistent functions that's useful.


See the Tcl "unknown" proc, used for interactive command expansion,
dynamic loading of code on demand, etc.

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.
Jun 16 '06 #36

P: n/a
Joachim Durchholz wrote:
Give a heterogenous list that would be too awkward to live in a
statically-typed language.


Printf()?

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.
Jun 16 '06 #37

P: n/a
Darren New <dn**@san.rr.com> writes:
Joachim Durchholz wrote:
Give a heterogenous list that would be too awkward to live in a
statically-typed language.


Printf()?


Very good statically typed versions of printf exist. See, e.g.,
Danvy's unparsing combinators.
Jun 16 '06 #38

P: n/a
Matthias Blume wrote:
Very good statically typed versions of printf exist. See, e.g.,
Danvy's unparsing combinators.


That seems to ignore the fact that the pattern is a string, which means
that printf's first argument in Danvy's mechanism has to be a literal.
You can't read the printf format from a configuration file (for example)
to support separate languages. It doesn't look like the version of
printf that can print its arguments in an order different from the order
provided in the argument list is supported either; something like "%3$d"
or some such.

Second, what's the type of the argument that printf, sprintf, fprintf,
kprintf, etc all pass to the subroutine that actually does the
formatting? (Called vprintf, I think?)

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.
Jun 16 '06 #39

P: n/a
Darren New <dn**@san.rr.com> writes:
Matthias Blume wrote:
Very good statically typed versions of printf exist. See, e.g.,
Danvy's unparsing combinators.
That seems to ignore the fact that the pattern is a string, which
means that printf's first argument in Danvy's mechanism has to be a
literal.


In Danvy's solution, the format argument is not a string.
You can't read the printf format from a configuration file
(for example) to support separate languages.
You don't need to do that if you want to support separate languages.
Moreover, reading the format string from external input is a good way
of opening your program to security attacks, since ill-formed data on
external media are then able to crash your program.
It doesn't look like the
version of printf that can print its arguments in an order different
from the order provided in the argument list is supported either;
something like "%3$d" or some such.
I am not familiar with the version of printf you are referring to, but
I am sure one could adapt Danvy's solution to support such a thing.
Second, what's the type of the argument that printf, sprintf, fprintf,
kprintf, etc all pass to the subroutine that actually does the
formatting? (Called vprintf, I think?)


Obviously, a Danvy-style solution (see, e.g., the one in SML/NJ's
library) is not necessarily structured that way. I don't see the
problem with typing, though.

Matthias
Jun 16 '06 #40

P: n/a
Matthias Blume wrote:
In Danvy's solution, the format argument is not a string.
That's what I said, yes.
You can't read the printf format from a configuration file
(for example) to support separate languages.

You don't need to do that if you want to support separate languages.
That's kind of irrelevant to the discussion. We're talking about
collections of dynamically-typed objects, not the best mechanisms for
supporting I18N.
Moreover, reading the format string from external input is a good way
of opening your program to security attacks, since ill-formed data on
external media are then able to crash you program.
Still irrelevant to the point.
I am sure one could adapt Danvy's solution to support such a thing.
I'm not. It's consuming arguments as it goes, from what I understood of
the paper. It's translating, essentially, into a series of function
calls in argument order.
Obviously, a Danvy-style solution (see, e.g., the one in SML/NJ's
library) is not necessarily structured that way. I don't see the
problem with typing, though.


You asked for an example of a heterogenous list that would be awkward in
a statically strongly-typed language. The arguments to printf() count,
methinks. What would the second argument to apply be if the first
argument is printf (since I'm reading this in the LISP group)?

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.
Jun 16 '06 #41

P: n/a
Sacha wrote:

Many lists are heterogenous, even in statically typed languages.
For instance, lisp code consists of lists with several kinds of atoms and
sub-lists.
Lisp isn't exactly a statically-typed language :-)
A car dealer will sell cars, trucks and equipment..
In a statically typed language you would need to type the list on a common
ancestor...
Where's the problem with that?

BTW the OO way isn't the only way to set up a list from heterogenous data.
In statically-typed FPL land, lists require homogenous data types all
right, but the list elements aren't restricted to data - they can be
functions as well.
Now the other specialty of FPLs is that you can construct functions at
run-time - you take a function, fill some of its parameters and leave
others open - the result is another function. And since you'll iterate
over the list and will do homogenous processing over it, you construct
the function so that it will do all the processing that you'll later need.

The advantage of the FPL way over the OO way is that you can use ad-hoc
functions. You don't need precognition to know which kinds of data
should be lumped under a common supertype - you simply write and/or
construct functions of a common type that will go into the list.
What then is the point of static typing, as you still need to type-check
each element in order to process that list?
Both OO and FPL construction allow static type checks.
Sure, you can do this in a statically-typed language, you just need to
make sure some relevant ancestor exists. In my experience you'll end up
with the base object class more often than not, and that's what I call
dynamic typing.
Not quite - the common supertype is more often than not actually useful.

However, getting the type hierarchy right requires a *lot* of
experimentation and fine-tuning. You can easily spend a year or more
(sometimes *much* more) with that (been there, done that). Even worse,
once the better hierarchy is available, you typically have to adapt all
the client code that uses it (been there, done that, too).

That's the problems in OO land. FPL land doesn't have these problems -
if the list type is just a function mapping two integers to another
integer, reworking the data types that went into the functions of the
list doesn't require those global changes.
Give a case of calling nonexistent functions that's useful.


I might want to test some other parts of my program before writing this
function.


That's unrelated to dynamic typing. All that's needed is an environment
that throws an exception once such an undefined function is called,
instead of aborting the compilation.

I'll readily admit that very few static languages offer such an
environment. (Though, actually, C interpreters do exist.)
Or maybe will my program compile that function depending on user input.
Hmm... do I really want this kind of power at the user's hand in the age
of malware?
As long as I get a warning for calling a non-existing function, everything
is fine.


That depends.
For software that's written to run once (or very few times), and where
somebody who's able to correct problems is always nearby, that's a
perfectly viable strategy.
For safety-critical software where problems must be handled within
seconds (or an even shorter period of time), you want to statically
ensure as many properties as you can. You'll take not just static
typing, you also want to ascertain value ranges and dozens of other
properties. (In Spark, an Ada subset, this is indeed done.)

Between those extremes, there's a broad spectrum.
Jun 16 '06 #42

P: n/a
Raffael Cavallaro wrote:
There is a very large class of software where user inputs are
unpredictable and/or where input data comes from an untrusted source. In
these cases run-time checks are going to be needed anyway so the
advantages of static type checking are greatly reduced - you end up
doing run-time checks anyway, precisely the thing you were trying to
avoid by doing static analysis.
There's still a large class of errors that *can* be excluded via type
checking.
Ideally one wants a language with switchable typing - static where
possible and necessary, dynamic elsewhere.
That has been my position for a long time now.
To a certain extent this is
what common lisp does but it requires programmer declarations. Some
implementations try to move beyond this by doing type inference and
alerting the programmer to potential static guarantees that the
programmer could make that would allow the compiler to do a better job.
I think it's easier to start with a good (!) statically-typed language
and relax the checking, than to start with a dynamically-typed one and
add static checks.
With the right restrictions, a language can make all kinds of strong
guarantees, and it can make it easy to construct software where static
guarantees abound. If the mechanisms are cleverly chosen, they interfere
just minimally with the programming process. (A classical example it
Hindley-Milner type inference systems. Typical reports from languages
with HM systems say that you can have it verify thousand-line programs
without a single type annotation in the code. That's actually far better
than you'd need - you'd *want* to document the types at least on the
major internal interfaces after all *grin*.)
With a dynamically-typed language, programming style tends to evolve in
directions that make it harder to give static guarantees.
It seems to
me that if we set aside that class of software where safety is paramount
- mostly embedded software such as aircraft and medical devices - we are
left mostly with efficiency concerns.
Nope. Efficiency has taken a back seat. Software is getting slower
(barely offset by increasing machine speed), and newer languages even
don't statically typecheck everything (C++, Java). (Note that the
impossibility to statically typecheck everything in OO languages doesn't
mean that it's impossible to do rigorous static checking in general.
FPLs have been quite rigorous about static checks; the only cases when
an FPL needs to dynamically typecheck its data structures is after
unmarshalling one from an untyped data source such as a network stream,
a file, or an IPC interface.)

The prime factor nowadays seems to be maintainability.

And the difference here is this:
With dynamic typing, I have to rely on the discipline of the programmers
to document interfaces.
With static typing, the compiler will infer (and possibly document) at
least part of their semantics (namely the types).
So static typing should be invoked for that small portion of a program
where efficiency is really needed and that dynamic typing should be the
default elsewhere. This is how common lisp works - dynamic typing by
default with static guarantees available where one needs them.


Actually static typing seems to become more powerful at finding errors
as the program size increases.
(Yes, that's a maintainability argument. Efficiency isn't *that*
important; since maintenance is usually the most important single
factor, squelching bugs even before testing is definitely helpful.)

Regards,
Jo
Jun 16 '06 #43

Darren New schrieb:
Joachim Durchholz wrote:
Give a heterogeneous list that would be too awkward to live in a
statically-typed language.


Write a function that takes an arbitrary set of arguments and stores
them into a structure allocated on the heap.


If the set of arguments is really arbitrary, then the software can't do
anything with it. In that case, the type is simply "opaque data block",
and storing it in the heap requires nothing more specific than that of
"opaque data block".
There's more to this. If we see a function with a parameter type of
"opaque data block", and there's no function available except copying
that data and comparing it for equality, then from simply looking at the
function's signature, we'll know that it won't inspect the data. More
interestingly, we'll know that funny stuff in the data can't trigger
bugs in the code - in the context of a security audit, that's actually a
pretty strong guarantee, since the analysis can stop at the function's
interface and doesn't have to dig into the function's implementation.
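A minimal sketch of such a function in Python (the names are illustrative, not from the thread): it accepts an arbitrary set of arguments and stores them on the heap, and precisely because the data is opaque to it, all it can do is store and hand it back.

```python
# A heap-allocated store for opaque data blocks.  Since store() knows
# nothing about its arguments' structure, it can only keep and return
# them -- it cannot inspect their contents.
_heap = []

def store(*args):
    _heap.append(args)
    return len(_heap) - 1   # a handle to the stored block

def fetch(handle):
    return _heap[handle]

h = store(42, "hello", [1, 2, 3])
print(fetch(h))   # (42, 'hello', [1, 2, 3])
```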
Give a case of calling nonexistent functions that's useful.


See the Tcl "unknown" proc, used for interactive command expansion,
dynamic loading of code on demand, etc.


Not related to dynamic typing, I fear - I can easily envision
alternatives to that in a statically-typed context.
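For reference, the Tcl behaviour under discussion can be approximated in other dynamic languages; here is a rough Python sketch using `__getattr__`, which intercepts access to nonexistent methods (the handler below is a stand-in for Tcl's command expansion or on-demand loading):

```python
class AutoLoader:
    """Intercept calls to nonexistent methods, much as Tcl's "unknown"
    proc intercepts unknown commands.  A real system would load code
    on demand; here we just synthesize a handler."""
    def __getattr__(self, name):
        def handler(*args):
            # This is where "unknown" would attempt command expansion
            # or dynamic loading of code.
            return "handled missing command %s%r" % (name, args)
        return handler

obj = AutoLoader()
print(obj.no_such_method(1, 2))  # handled missing command no_such_method(1, 2)
```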

Of course, you can't eliminate *all* run-time type checking. I already
mentioned unmarshalling data from an untyped source; another possibility
is run-time code compilation (highly dubious in a production system but
of value in a development system).

However, that's some very specialized applications, easily catered for
by doing a dynamic type check plus a thrown exception in case the types
don't match. I still don't see a convincing argument for making dynamic
typing the standard policy.

Regards,
Jo
Jun 16 '06 #44

On 2006-06-16 17:59:07 -0400, Joachim Durchholz <jo@durchholz.org> said:
I think it's easier to start with a good (!) statically-typed language
and relax the checking, than to start with a dynamically-typed one and
add static checks.
With the right restrictions, a language can make all kinds of strong
guarantees, and it can make it easy to construct software where static
guarantees abound. If the mechanisms are cleverly chosen, they
interfere just minimally with the programming process. (A classical
example is Hindley-Milner type inference systems. Typical reports from
languages with HM systems say that you can have it verify thousand-line
programs without a single type annotation in the code. That's actually
far better than you'd need - you'd *want* to document the types at
least on the major internal interfaces after all *grin*.)
With a dynamically-typed language, programming style tends to evolve in
directions that make it harder to give static guarantees.


This is purely a matter of programming style. For explorative
programming it is easier to start with dynamic typing and add static
guarantees later rather than having to make decisions about
representation and have stubs for everything right from the start. The
lisp programming style is arguably all about using heterogeneous lists
and forward references in the repl for everything until you know what
it is that you are doing, then choosing a more appropriate
representation and filling in forward references once the program gels.
Having to choose representation right from the start and needing
working versions (even if only stubs) of *every* function you call may
ensure type correctness, but many programmers find that it also ensures
that you never find the right program to code in the first place. This
is because you don't have the freedom to explore possible solutions
without having to break your concentration to satisfy the nagging of a
static type checker.
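The style described above might look like this in Python, another dynamically-typed language (the employee records are invented for illustration): exploration starts with heterogeneous lists, and a fixed representation is chosen only once the program gels.

```python
# Early exploration: a heterogeneous list, no committed representation.
employees = [["Alice", 30, "dev"], ["Bob", "unknown", "ops"]]

def ages(recs):
    # Runs even though one "age" is a string -- nothing nags yet.
    return [r[1] for r in recs]

print(ages(employees))        # [30, 'unknown']

# Once the program gels, choose a more appropriate representation:
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    age: int
    role: str

employees2 = [Employee("Alice", 30, "dev"), Employee("Bob", 41, "ops")]
print([e.age for e in employees2])   # [30, 41]
```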

Jun 16 '06 #45

Raffael Cavallaro schrieb:
On 2006-06-16 17:59:07 -0400, Joachim Durchholz <jo@durchholz.org> said:
I think it's easier to start with a good (!) statically-typed language
and relax the checking, than to start with a dynamically-typed one and
add static checks.


This is purely a matter of programming style. For explorative
programming it is easier to start with dynamic typing and add static
guarantees later rather than having to make decisions about
representation and have stubs for everything right from the start.


Sorry for being ambiguous - I meant to talk about language evolution.

I agree that static checking could (and probably should) be slightly
relaxed: compilers should still do all the diagnostics that current-day
technology allows, but any problems shouldn't abort the compilation.
It's always possible to generate code that will throw an exception as
soon as a problematic piece of code becomes actually relevant; depending
on the kind of run-time support, this might abort the program, abort
just the computation, or open an interactive facility to correct and/or
modify the program on the spot (the latter is the norm in highly dynamic
systems like those for Lisp and Smalltalk, and I consider this actually
useful).
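A small sketch of the "throw an exception as soon as the problematic code becomes actually relevant" behaviour (the shapes are invented; Python happens to resolve names at run time anyway, which makes the idea easy to illustrate):

```python
def area(shape):
    if shape["kind"] == "circle":
        return 3.14159 * shape["r"] ** 2
    # This branch refers to a function that doesn't exist yet.  A
    # compiler in the scheme described above would still accept the
    # program, throwing only if this code is ever reached:
    return polygon_area(shape)   # NameError raised only when reached

print(area({"kind": "circle", "r": 2}))  # 12.56636
```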

I don't see static checking and explorative programming as opposites.
Of course, in practice, environments that combine these don't seem to
exist (except maybe in experimental or little-known state).

Regards,
Jo
Jun 17 '06 #46

On 2006-06-17 07:03:19 -0400, Joachim Durchholz <jo@durchholz.org> said:
I don't see static checking and explorative programming as opposites.
Of course, in practice, environments that combine these don't seem to
exist (except maybe in experimental or little-known state).


Right. Unfortunately the philosophical leanings of those who design
these two types of languages tend to show themselves as different
tastes in development style - for example, static type advocates don't
often want a very dynamic development environment that would allow a
program to run for testing even when parts of it aren't defined yet, and
dynamic type advocates don't want a compiler preventing them from doing
so because the program can't yet be proven statically correct. Dynamic
typing advocates don't generally want a compiler error for ambiguous
typing - for example, adding a float and an int - but static typing
advocates generally do. Of course there's little reason one couldn't
have a language that allowed the full range to be switchable so that
programmers could tighten up compiler warnings and errors as the
program becomes more fully formed. Unfortunately we're not quite there
yet. For my tastes something like sbcl*, with its type inference and
very detailed warnings and notes is as good as it gets for now. I can
basically ignore warnings and notes early on, but use them to allow the
compiler to improve the code it generates once the program is doing
what I want correctly.
[*] I don't mean to exclude other common lisp implementations that do
type inference here - I just happen to use sbcl.

Jun 17 '06 #47

Raffael Cavallaro <raffaelcavallaro@pas-d'espam-s'il-vous-plait-mac.com> writes:
This is purely a matter of programming style. For explorative
programming it is easier to start with dynamic typing and add static
guarantees later rather than having to make decisions about
representation and have stubs for everything right from the start.
I think you are confusing static typing with having to write types
everywhere. With type inference, you only have to write a minimum of
type information (such as datatype declarations), so you can easily do
explorative programming in such languages -- I don't see any advantage
of dynamic typing in this respect.
The
lisp programming style is arguably all about using heterogeneous lists
and forward references in the repl for everything until you know what
it is that you are doing, then choosing a more appropriate
representation and filling in forward references once the program
gels. Having to choose representation right from the start and needing
working versions (even if only stubs) of *every* function you call may
ensure type correctness, but many programmers find that it also
ensures that you never find the right program to code in the first
place.
If you don't have definitions (stubs or complete) of the functions you
use in your code, you can only run it up to the point where you call
an undefined function. So you can't really do much exploration until
you have some definitions.

I expect a lot of the exploration you do with incomplete programs
amounts to the feedback you get from type inference.
This is because you don't have the freedom to explore possible
solutions without having to break your concentration to satisfy the
nagging of a static type checker.


I tend to disagree. I have programmed a great deal in Lisp, Scheme,
Prolog (all dynamically typed) and SML and Haskell (both statically
typed). And I don't find that I need more stubs etc. in the latter.
In fact, I do a lot of explorative programming when writing programs
in ML and Haskell. And I find type inference very helpful in this, as
it guides the direction of the exploration, so it is more like a
safari with a local guide than a blindfolded walk in the jungle.

Torben
Jun 19 '06 #48

Torben Ægidius Mogensen wrote:
"Rob Thorpe" <ro***********@antenova.com> writes:
Torben Ægidius Mogensen wrote:

That's the point: Bugs that in dynamically typed languages would
require testing to find are found by the compiler in a statically
typed language. So while it may take longer to get a program that gets
past the compiler, it takes less time to get a program that works.


In my experience the opposite is true for many programs.
Having to actually know the precise type of every variable while
writing the program is not necessary, it's a detail often not relevant
to the core problem. The language should be able to take care of
itself.

In complex routines it can be useful for the programmer to give types
and for the compiler to issue errors when they are contradicted. But
for most routines it's just an unnecessary chore that the compiler
forces on the programmer.


Indeed. So use a language with type inference.


Well, for most purposes that's the same as dynamic typing since the
compiler doesn't require you to label the type of your variables. I
occasionally use CMUCL and SBCL which do type inference, which is
useful for improving generated code quality. It also can warn the
programmer if they reuse a variable in a context implying that
it's a different type, which is useful.

I see type inference as an optimization of dynamic typing rather than a
generalization of static typing. But I suppose you can see it that way
around.

Jun 19 '06 #49

"Rob Thorpe" <ro***********@antenova.com> writes:
Torben Ægidius Mogensen wrote:
"Rob Thorpe" <ro***********@antenova.com> writes:
Torben Ægidius Mogensen wrote:
Indeed. So use a language with type inference.


Well, for most purposes that's the same as dynamic typing since the
compiler doesn't require you to label the type of your variables.


That's not really the difference between static and dynamic typing.
Static typing means that there exists a typing at compile-time that
guarantees against run-time type violations. Dynamic typing means
that such violations are detected at run-time. This is orthogonal to
strong versus weak typing, which is about whether such violations are
detected at all. The archetypal weakly typed language is machine code
-- you can happily load a floating point value from memory, add it to
a string pointer and jump to the resulting value. ML and Scheme are
both strongly typed, but one is statically typed and the other
dynamically typed.
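The distinction is easy to demonstrate in a strongly but dynamically typed language; in Python, for instance, a type violation is detected, but only when the offending operation actually runs:

```python
def add(a, b):
    return a + b              # no static check; any a and b accepted

print(add(1, 2))              # 3

# A type violation is neither silently ignored (as machine code's weak
# typing would do) nor rejected at compile time (as static typing
# would) -- it is detected dynamically, when the operation executes:
try:
    add("one", 2)
except TypeError as exc:
    print("caught:", exc)
```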

Anyway, type inference for statically typed languages doesn't make them
any more dynamically typed. It just moves the burden of assigning the
types from the programmer to the compiler. And (for HM type systems)
the compiler doesn't "guess" at a type -- it finds the unique most
general type from which all other legal types (within the type system)
can be found by instantiation.
I
occasionally use CMUCL and SBCL which do type inference, which is
useful for improving generated code quality. It also can warn the
programmer if they reuse a variable in a context implying that
it's a different type, which is useful.

I see type inference as an optimization of dynamic typing rather than a
generalization of static typing. But I suppose you can see it that way
around.


Some compilers for dynamically typed languages will do a type analysis
similar to type inference, but they will happily compile a program
even if they can't guarantee static type safety.

Such "type inference" can be seen as an optimisation of dynamic
typing, as it allows the compiler to omit _some_ of the runtime type
checks. I prefer the term "soft typing" for this, though, so as not
to confuse with static type inference.

Soft typing can give feedback similar to that of type inference in
terms of identifying potential problem spots, so in that respect it is
similar to static type inference, and you might get similar fast code
development. You miss some of the other benefits of static typing,
though, such as a richer type system -- soft typing often lacks
features like polymorphism (it will find a set of monomorphic
instances rather than the most general type) and type classes.

Torben

Jun 19 '06 #50
