
What is Expressiveness in a Computer Language

In March, I posted an essay, "What is Expressiveness in a Computer
Language", archived at:
http://xahlee.org/perl-python/what_i...esiveness.html

I was informed then that there is an academic paper written on this
subject.

On the Expressive Power of Programming Languages, by Matthias
Felleisen, 1990.
http://www.ccs.neu.edu/home/cobbe/pl...ive-slides.pdf

Has anyone read this paper? And, would anyone be interested in giving a
summary?

thanks.

Xah
xa*@xahlee.org
http://xahlee.org/

Jun 9 '06
On 19 Jun 2006 10:19:05 +0200, to*****@app-3.diku.dk (Torben Ægidius
Mogensen) wrote:

If you don't have definitions (stubs or complete) of the functions you
use in your code, you can only run it up to the point where you call
an undefined function. So you can't really do much exploration until
you have some definitions.
Well, in Lisp that just drops you into the debugger, where you can
supply the needed return data and continue. I agree that it is not
something often needed.

I expect a lot of the exploration you do with incomplete programs
amounts to the feedback you get from type inference.


The ability to write functions and test them immediately without
writing a lot of supporting code is _far_ more useful to me than type
inference.

I'm not going to weigh in on the static v dynamic argument ... I think
both approaches have their place. I am, however, going to ask what
information you think type inference can provide that substitutes for
algorithm or data structure exploration.

George
--
for email reply remove "/" from address
Jun 19 '06 #51
George Neuner <gneuner2/@comcast.net> writes:
On 19 Jun 2006 10:19:05 +0200, to*****@app-3.diku.dk (Torben Ægidius
Mogensen) wrote:
I expect a lot of the exploration you do with incomplete programs
amounts to the feedback you get from type inference.


The ability to write functions and test them immediately without
writing a lot of supporting code is _far_ more useful to me than type
inference.


I can't see what this has to do with static/dynamic typing. You can
test individual functions in isolation in statically typed languages
too.
I'm not going to weigh in on the static v dynamic argument ... I think
both approaches have their place. I am, however, going to ask what
information you think type inference can provide that substitutes for
algorithm or data structure exploration.


None. But it can provide a framework for both and catch some types of
mistakes early.

Torben
Jun 19 '06 #52
Torben Ægidius Mogensen <to*****@app-3.diku.dk> wrote:
That's not really the difference between static and dynamic typing.
Static typing means that there exists a typing at compile-time that
guarantees against run-time type violations. Dynamic typing means
that such violations are detected at run-time. This is orthogonal to
strong versus weak typing, which is about whether such violations are
detected at all. The archetypal weakly typed language is machine code
-- you can happily load a floating point value from memory, add it to
a string pointer and jump to the resulting value. ML and Scheme are
both strongly typed, but one is statically typed and the other
dynamically typed.
Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
interject my plea not to abuse the word "type" with a phrase like
"dynamically typed". If anyone considers "untyped" to be perjorative,
as some people apparently do, then I'll note that another common term is
"type-free," which is marketing-approved but doesn't carry the
misleading connotations of "dynamically typed." We are quickly losing
any rational meaning whatsoever to the word "type," and that's quite a
shame.

By way of extending the point, let me mention that there is no such
thing as a universal class of things that are called "run-time type
violations". At runtime, there is merely correct code and incorrect
code. To the extent that anything is called a "type" at runtime, this
is a different usage of the word from the usage by which we may define
languages as being statically typed (which means just "typed"). In
typed OO languages, this runtime usage is often called the "class", for
example, to distinguish it from type.

This cleaner terminology eliminates a lot of confusion. For example, it
clarifies that there is no binary division between strongly typed
languages and weakly typed languages, since the division between a "type
error" and any other kind of error is arbitrary, depending only on
whether the type system in a particular language happens to catch that
error. For example, type systems have been defined to try to catch unit
errors in scientific programming, or to catch out-of-bounds array
indices... yet these are not commonly called "type errors" only because
such systems are not in common use.
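
To make the units example concrete, here is a minimal sketch in OCaml
(not from the thread; the module and names below are invented for
illustration) in which a phantom type parameter records the unit, so
that adding metres to seconds is rejected at compile time:

(* The phantom parameter 'u records the unit; at run time a quantity
   is just a float. *)
module Units : sig
  type 'u qty
  type m                                      (* phantom tag: metres  *)
  type s                                      (* phantom tag: seconds *)
  val metres  : float -> m qty
  val seconds : float -> s qty
  val add     : 'u qty -> 'u qty -> 'u qty    (* only like units add  *)
  val value   : 'u qty -> float
end = struct
  type 'u qty = float
  type m
  type s
  let metres x = x
  let seconds x = x
  let add a b = a +. b
  let value q = q
end

let total = Units.(add (metres 3.0) (metres 4.0))   (* fine: m qty *)
(* let bad = Units.(add (metres 3.0) (seconds 4.0))
   -- rejected by the compiler: s qty where m qty was expected *)

The runtime representation is still a bare float; only the
compile-time view of the value distinguishes metres from seconds.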

This also leads us to define something like "language safety" to
encapsulate what we previously would have meant by the phrase "strongly
dynamically typed language". This also is a more general concept than
we had before. Language safety refers to a language having well-defined
behavior for as many operations as feasible, so that it's less likely
that someone will do something spectacularly bad. Language safety may
be achieved either by a type system or by runtime checks. Again, it's
not absolute... I'm aware of no language that is completely determinate,
at least if it supports any kind of concurrency.

This isn't just a matter of preference in terminology. The definitions
above (which are, in my experience, used widely by most non-academic
language design discussions) actually limit our understanding of
language design by pretending that certain delicate trade-offs such as
the extent of the type system, or which language behavior is allowed to
be non-deterministic or undefined, are etched in stone. This is simply
not so. If types DON'T mean a compile-time method for proving the
absence of certain program behaviors, then they don't mean anything at
all. Pretending that there's a distinction at runtime between "type
errors" and "other errors" serves only to confuse things and
artificially limit which problems we are willing to conceive as being
solvable by types.
Anyway, type inference for statically typed languages doesn't make them
any more dynamically typed.


Indeed it does not. Unless it weakens the ability of a compiler to
prove the absence of certain program behaviors (which type inference
does not), it doesn't move anything on the scale toward a type-free
language.

That being said, though, it is considered a feature of some programming
languages that the programmer is asked to repeat type information in a
few places. The compiler may not need the information, but for
precisely the reason that the information is redundant, the compiler is
then able to check the consistency of the programmer in applying the
type. I won't get into precisely how useful this is, but it is
nevertheless present as an advantage to outweigh the wordiness.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 19 '06 #53
Chris Smith wrote:
Torben Ægidius Mogensen <to*****@app-3.diku.dk> wrote:
That's not really the difference between static and dynamic typing.
Static typing means that there exists a typing at compile-time that
guarantees against run-time type violations. Dynamic typing means
that such violations are detected at run-time. This is orthogonal to
strong versus weak typing, which is about whether such violations are
detected at all. The archetypal weakly typed language is machine code
-- you can happily load a floating point value from memory, add it to
a string pointer and jump to the resulting value. ML and Scheme are
both strongly typed, but one is statically typed and the other
dynamically typed.


Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
interject my plea not to abuse the word "type" with a phrase like
"dynamically typed". If anyone considers "untyped" to be perjorative,
as some people apparently do, then I'll note that another common term is
"type-free," which is marketing-approved but doesn't carry the
misleading connotations of "dynamically typed." We are quickly losing
any rational meaning whatsoever to the word "type," and that's quite a
shame.


I don't think dynamic typing is that nebulous. I remember this being
discussed elsewhere some time ago; I'll post the same reply I did then
...
A language is statically typed if a variable has a property - called
its type - attached to it, and given its type it can only represent
values defined by a certain class.

A language is latently typed if a value has a property - called its
type - attached to it, and given its type it can only represent values
defined by a certain class.

Some people use dynamic typing as a word for latent typing, others use
it to mean something slightly different. But for most purposes the
definition above works for dynamic typing also.

Untyped and type-free mean something else: they mean no type checking
is done.

Jun 19 '06 #54
Chris Smith wrote:
Torben Ægidius Mogensen <to*****@app-3.diku.dk> wrote:
That's not really the difference between static and dynamic typing.
Static typing means that there exists a typing at compile-time that
guarantees against run-time type violations. Dynamic typing means
that such violations are detected at run-time. This is orthogonal to
strong versus weak typing, which is about whether such violations are
detected at all. The archetypal weakly typed language is machine code
-- you can happily load a floating point value from memory, add it to
a string pointer and jump to the resulting value. ML and Scheme are
both strongly typed, but one is statically typed and the other
dynamically typed.
Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
interject my plea not to abuse the word "type" with a phrase like
"dynamically typed". If anyone considers "untyped" to be perjorative,
as some people apparently do, then I'll note that another common term is
"type-free," which is marketing-approved but doesn't carry the
misleading connotations of "dynamically typed." We are quickly losing
any rational meaning whatsoever to the word "type," and that's quite a
shame.


The words "untyped" or "type-free" only make sense in a purely
statically typed setting. In a dynamically typed setting, they are
meaningless, in the sense that there are _of course_ types that the
runtime system respects.

Types can be represented at runtime via type tags. You could insist on
using the term "dynamically tagged languages", but this wouldn't change
a lot. Exactly _because_ it doesn't make sense in a statically typed
setting, the term "dynamically typed language" is good enough to
communicate what we are talking about - i.e. not (static) typing.
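
As a concrete picture of "dynamically tagged", here is a minimal
sketch (in OCaml, the notation used later in the thread; the
constructors and the exception are invented for illustration) of
values that carry a tag which operations inspect at run time:

(* Every value carries a tag; operations inspect the tag at run time. *)
type tagged =
  | Int  of int
  | Str  of string
  | Pair of tagged * tagged            (* a cons cell *)

exception Wrong_type of string

(* car is defined on every tagged value; handing it anything but a
   Pair is detected when the tag is inspected, not at compile time. *)
let car = function
  | Pair (hd, _) -> hd
  | _            -> raise (Wrong_type "car: expected a pair")

let ok   = car (Pair (Int 1, Int 2))                        (* Int 1 *)
let oops = try Some (car (Str "a string"))
           with Wrong_type _ -> None                        (* None  *)

Nothing here is checked before the program runs; the Wrong_type error
surfaces only when car actually inspects the tag.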
By way of extending the point, let me mention that there is no such
thing as a universal class of things that are called "run-time type
violations". At runtime, there is merely correct code and incorrect
code.
No, there is more: There is safe and unsafe code (i.e., code that throws
exceptions or that potentially just does random things). There are also
runtime systems where you have the chance to fix the reason that caused
the exception and continue to run your program. The latter play very
well with dynamic types / type tags.
To the extent that anything is called a "type" at runtime, this
is a different usage of the word from the usage by which we may define
languages as being statically typed (which means just "typed"). In
typed OO languages, this runtime usage is often called the "class", for
example, to distinguish it from type.
What type of person are you to tell other people what terminology to use? ;)

Ha! Here I used "type" in just another sense of the word. ;)

It is important to know the context in which you are discussing things.
For example, "we" Common Lispers use the term "type" as defined in
http://www.lispworks.com/documentati...y/26_glo_t.htm . You
cannot possibly argue that "our" use of the word "type" is incorrect
because in "our" context, when we talk about Common Lisp, the use of the
word "type" better be consistent with that definition. (You can say that
you don't like the definition, that it is unsound, or whatever, but that
doesn't change anything here.)
This cleaner terminology eliminates a lot of confusion. For example, it
clarifies that there is no binary division between strongly typed
languages and weakly typed languages, since the division between a "type
error" and any other kind of error is arbitrary, depending only on
whether the type system in a particular language happens to catch that
error. For example, type systems have been defined to try to catch unit
errors in scientific programming, or to catch out-of-bounds array
indices... yet these are not commonly called "type errors" only because
such systems are not in common use.
What type system catches division by zero? That is, statically? Would
you like to program in such a language?
This isn't just a matter of preference in terminology. The definitions
above (which are, in my experience, used widely by most non-academic
language design discussions) actually limit our understanding of
language design by pretending that certain delicate trade-offs such as
the extent of the type system, or which language behavior is allowed to
be non-deterministic or undefined, are etched in stone. This is simply
not so. If types DON'T mean a compile-time method for proving the
absence of certain program behaviors, then they don't mean anything at
all. Pretending that there's a distinction at runtime between "type
errors" and "other errors" serves only to confuse things and
artificially limit which problems we are willing to conceive as being
solvable by types.


Your problem doesn't exist. Just say "types" when you're amongst your
own folks, and "static types" when you're amongst a broader audience,
and everything's fine. Instead of focusing on terminology, just focus on
the contents.
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 19 '06 #55
Rob Thorpe <ro***********@antenova.com> wrote:
A language is latently typed if a value has a property - called its
type - attached to it, and given its type it can only represent values
defined by a certain class.
I'm assuming you mean "class" in the general sense, rather than in the
sense of a specific construct of some subset of OO programming
languages.

Now I define a class of values called "correct" values. I define these
to be those values for which my program will produce acceptable results.
Clearly there is a defined class of such values: (1) they are
immediately defined by the program's specification for those lines of
code that produce output; (2) if they are defined for the values that
result from any expression, then they are defined for the values that
are used by that expression; and (3) for any value for which correctness
is not defined by (1) or (2), we may define its "correct" values as the
class of all possible values. Now, by your definition, any language
which provides checking of that property of correctness for values is
latently typed. Of course, there are no languages that assign this
specific class of values; but ANY kind of correctness checking on values
that a language does (if it's useful at all) is a subset of the perfect
correctness checking system above. Apparently, we should call all such
systems "latent type systems". Can you point out a language that is not
latently typed?

I'm not trying to poke holes in your definition for fun. I am proposing
that there is no fundamental distinction between the kinds of problems
that are "type problems" and those that are not. Types are not a class
of problems; they are a class of solutions. Languages that solve
problems in ways that don't assign types to variables are not typed
languages, even if those same problems may have been originally solved
by type systems.
Untyped and type-free mean something else: they mean no type checking
is done.


Hence, they don't exist, and the definitions being used here are rather
pointless.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 19 '06 #56
On 19 Jun 2006 13:53:01 +0200, to*****@app-1.diku.dk (Torben Ægidius
Mogensen) wrote:
George Neuner <gneuner2/@comcast.net> writes:
On 19 Jun 2006 10:19:05 +0200, to*****@app-3.diku.dk (Torben Ægidius
Mogensen) wrote:

>I expect a lot of the exploration you do with incomplete programs
>amounts to the feedback you get from type inference.


The ability to write functions and test them immediately without
writing a lot of supporting code is _far_ more useful to me than type
inference.


I can't see what this has to do with static/dynamic typing. You can
test individual functions in isolation in statically typed languages
too.


It has nothing to do with static/dynamic typing, and that was the point:
that support for exploratory programming is orthogonal to the
language's typing scheme.

George
--
for email reply remove "/" from address
Jun 19 '06 #57
Pascal Costanza <pc@p-cos.net> wrote:
Types can be represented at runtime via type tags. You could insist on
using the term "dynamically tagged languages", but this wouldn't change
a lot. Exactly _because_ it doesn't make sense in a statically typed
setting, the term "dynamically typed language" is good enough to
communicate what we are talking about - i.e. not (static) typing.
Okay, fair enough. It's certainly possible to use the same sequence of
letters to mean two different things in different contexts. The problem
arises, then, when Torben writes:

: That's not really the difference between static and dynamic typing.
: Static typing means that there exists a typing at compile-time that
: guarantees against run-time type violations. Dynamic typing means
: that such violations are detected at run-time.

This is clearly not using the word "type" to mean two different things
in different contexts. Rather, it is speaking under the mistaken
impression that "static typing" and "dynamic typing" are varieties of
some general thing called "typing." In fact, the phrase "dynamically
typed" was invented to do precisely that. My argument is not really
with LISP programmers talking about types, by which they would mean
approximately the same thing Java programmers mean by "class." My point
here concerns the confusion that results from the conception that there
is this binary distinction (or continuum, or any other simple
relationship) between a "statically typed" and a "dynamically typed"
language.

Torben's (and I don't mean to single out Torben -- the terminology is
used quite widely) classification of dynamic versus static type systems
depends on the misconception that there is some universal definition to
the term "type error" or "type violation" and that the only question is
how we address these well-defined things. It's that misconception that
I aim to challenge.
No, there is more: There is safe and unsafe code (i.e., code that throws
exceptions or that potentially just does random things). There are also
runtime systems where you have the chance to fix the reason that caused
the exception and continue to run your program. The latter play very
well with dynamic types / type tags.
Yes, I was oversimplifying.
What type system catches division by zero? That is, statically?
I can define such a type system trivially. To do so, I simply define a
type for integers, Z, and a subtype for non-zero integers, Z'. I then
define the language such that division is only possible in an expression
that looks like << z / z' >>, where z has type Z and z' has type Z'.
The language may then contain an expression:

z = 0 ? t1 : t2

in which t1 is evaluated in the parent type environment, but t2 is
evaluated in the type environment augmented by (z -> Z'), the type of
the expression is the intersection type of t1 and t2 evaluated in those
type environments, and the evaluation rules are defined as you probably
expect.
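
A flow-sensitive rule like the one above needs more machinery than
ordinary ML, but a much cruder sketch of the same idea can be written
in plain OCaml (module and function names invented for illustration):
make Z' an abstract type whose only constructor performs the zero
test, so that division itself can never fail:

(* Z' (non-zero integers) as an abstract type: the only way to obtain
   a value of type nonzero is through the checked constructor, so div
   itself can never divide by zero. *)
module NonZero : sig
  type nonzero
  val of_int : int -> nonzero option    (* the one runtime check     *)
  val div    : int -> nonzero -> int    (* statically safe division  *)
end = struct
  type nonzero = int
  let of_int n = if n = 0 then None else Some n
  let div a b = a / b
end

let safe_div a b =
  match NonZero.of_int b with
  | Some b' -> Some (NonZero.div a b')
  | None    -> None

The single runtime check lives in of_int; everywhere else the fact
that the divisor is non-zero is carried by its type.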
Would you like to program in such a language?
No. Type systems for real programming languages are, of course, a
balance between rigor and usability. This particular set of type rules
doesn't seem to exhibit a good balance. Perhaps there is a way to
achieve it in a way that is more workable, but it's not immediately
obvious.

As another example, from Pierce's text "Types and Programming
Languages", Pierce writes: "Static elimination of array-bounds checking
is a long-standing goal for type system designers. In principle, the
necessary mechanisms (based on dependent types) are well understood, but
packaging them in a form that balances expressive power, predictability
and tractability of typechecking, and complexity of program annotations
remains a significant challenge." Again, this could quite validly be
described as a type error, just like division by zero or ANY other
program error... it's just that the type system that solves it doesn't
look appealing, so everyone punts the job to runtime checks (or, in some
cases, to the CPU's memory protection features and/or the user's ability
to fix resulting data corruption).

Why aren't these things commonly considered type errors? There is only
one reason: there exists no widely used language which solves them with
types. (I mean in the programming language type theory sense of "type";
since many languages "tag" arrays with annotations indicating their
dimensions, I guess you could say that we do solve them with types in
the LISP sense).
Your problem doesn't exist. Just say "types" when you're amongst your
own folks, and "static types" when you're amongst a broader audience,
and everything's fine.


I think I've explained why that's not the case. I don't have a
complaint about anyone speaking of types. It's the confusion from
pretending that the two definitions are comparable that I'm pointing
out.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 19 '06 #58
George Neuner <gneuner2/@comcast.net> writes:
I am, however, going to ask what
information you think type inference can provide that substitutes for
algorithm or data structure exploration.


Nobody wants to do such a substitution, of course. In /my/
experience, however, I find that doing algorithm and data structure
exploration is greatly aided by a language with static types and type
inference. YMMV.
Jun 19 '06 #59
Chris Smith <cd*****@twu.net> wrote in
news:MP************************@news.altopia.net:
Rob Thorpe <ro***********@antenova.com> wrote:
A language is latently typed if a value has a property - called its
type - attached to it, and given its type it can only represent
values defined by a certain class.
Now I define a class of values called "correct" values. I define
these to be those values for which my program will produce acceptable
results. Clearly there is a defined class of such values: (1) they
are immediately defined by the program's specification for those lines
of code that produce output; ...

I'm not trying to poke holes in your definition for fun. I am
proposing that there is no fundamental distinction between the kinds
of problems that are "type problems" and those that are not.


That sounds like a lot to demand of a type system. It almost sounds like
it's supposed to test and debug the whole program. In general, defining the
exact set of values for a given variable that generate acceptable output
from your program will require detailed knowledge of the program and all
its possible inputs. That goes beyond simple typing. It sounds more like
contracts. Requiring an array index to be an integer is considered a typing
problem because it can be checked based on only the variable itself,
whereas checking whether it's in bounds requires knowledge about the array.

--
YAD
Jun 19 '06 #60
"Rob Thorpe" <ro***********@antenova.com> writes:
I don't think dynamic typing is that nebulous. I remember this being
discussed elsewhere some time ago; I'll post the same reply I did then
...
A language is statically typed if a variable has a property - called
its type - attached to it, and given its type it can only represent
values defined by a certain class.
By this definition, all languages are statically typed (by making that
"certain class" the set of all values). Moreover, this "definition",
when read the way you probably wanted it to be read, requires some
considerable stretch to accommodate existing static type systems such
as F_\omega.

Perhaps better: A language is statically typed if its definition
includes (or even better: is based on) a static type system, i.e., a
static semantics with typing judgments derivable by typing rules.
Usually typing judgments associate program phrases ("expressions") with
types given a typing environment.
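
To illustrate what "typing judgments derivable by typing rules" looks
like operationally, here is a toy checker in OCaml (invented for
illustration, not a quotation of anyone's system) for a tiny
expression language; each match arm plays the role of one typing rule,
and infer env e = Some t plays the role of the judgment env |- e : t.

type ty = TInt | TBool

type exp =
  | Lit    of int
  | Var    of string
  | Add    of exp * exp
  | IsZero of exp
  | If     of exp * exp * exp

(* The judgment env |- e : t, rendered as a function; None means
   "no derivation exists", i.e. the phrase is ill-typed. *)
let rec infer env e : ty option =
  match e with
  | Lit _ -> Some TInt                                    (* T-Int    *)
  | Var x -> List.assoc_opt x env                         (* T-Var    *)
  | Add (a, b) ->                                         (* T-Add    *)
      (match infer env a, infer env b with
       | Some TInt, Some TInt -> Some TInt
       | _ -> None)
  | IsZero a ->                                           (* T-IsZero *)
      (match infer env a with Some TInt -> Some TBool | _ -> None)
  | If (c, t, f) ->                                       (* T-If     *)
      (match infer env c, infer env t, infer env f with
       | Some TBool, Some t1, Some t2 when t1 = t2 -> Some t1
       | _ -> None)

let well_typed = infer [] (If (IsZero (Lit 0), Lit 1, Lit 2)) (* Some TInt *)
let ill_typed  = infer [] (Add (Lit 1, IsZero (Lit 0)))       (* None      *)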
A language is latently typed if a value has a property - called its
type - attached to it, and given its type it can only represent values
defined by a certain class.
This "definition" makes little sense. Any given value can obviously
only represent one value: itself. "Dynamic types" are nothing more
than sets of values, often given by computable predicates.
Untyped and type-free mean something else: they mean no type checking
is done.


Look up "untyped lambda calculus".
Jun 19 '06 #61
Matthias Blume wrote:
"Rob Thorpe" <ro***********@antenova.com> writes:
I don't think dynamic typing is that nebulous. I remember this being
discussed elsewhere some time ago; I'll post the same reply I did then
...
A language is statically typed if a variable has a property - called
its type - attached to it, and given its type it can only represent
values defined by a certain class.


By this definition, all languages are statically typed (by making that
"certain class" the set of all values). Moreover, this "definition",
when read the way you probably wanted it to be read, requires some
considerable stretch to accommodate existing static type systems such
as F_\omega.

Perhaps better: A language is statically typed if its definition
includes (or even better: is based on) a static type system, i.e., a
static semantics with typing judgments derivable by typing rules.
Usually typing judgments associate program phrases ("expressions") with
types given a typing environment.


How does your definition exclude the trivial type system in which the
only typing judgment states that every expression is acceptable?
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 19 '06 #62
Chris Smith wrote:
Pascal Costanza <pc@p-cos.net> wrote:
Types can be represented at runtime via type tags. You could insist on
using the term "dynamically tagged languages", but this wouldn't change
a lot. Exactly _because_ it doesn't make sense in a statically typed
setting, the term "dynamically typed language" is good enough to
communicate what we are talking about - i.e. not (static) typing.


Okay, fair enough. It's certainly possible to use the same sequence of
letters to mean two different things in different contexts. The problem
arises, then, when Torben writes:

: That's not really the difference between static and dynamic typing.
: Static typing means that there exists a typing at compile-time that
: guarantees against run-time type violations. Dynamic typing means
: that such violations are detected at run-time.

This is clearly not using the word "type" to mean two different things
in different contexts. Rather, it is speaking under the mistaken
impression that "static typing" and "dynamic typing" are varieties of
some general thing called "typing." In fact, the phrase "dynamically
typed" was invented to do precisely that. My argument is not really
with LISP programmers talking about types, by which they would mean
approximately the same thing Java programmers mean by "class." My point
here concerns the confusion that results from the conception that there
is this binary distinction (or continuum, or any other simple
relationship) between a "statically typed" and a "dynamically typed"
language.


There is an overlap in the sense that some static type systems cover
only types as sets of values whose correct use could as well be checked
dynamically.

Yes, it's correct that more advanced static type systems can provide
more semantics than that (and vice versa).
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 19 '06 #63
Pascal Costanza <pc@p-cos.net> wrote:
How does your definition exclude the trivial type system in which the
only typing judgment states that every expression is acceptable?


It is not necessary to exclude that trivial type system. Since it is
useless, no one will implement it. However, if pressed, I suppose one
would have to admit that that definition includes a type system that is
just useless.

I do, though, prefer Pierce's definition:

A type system is a tractable syntactic method for proving the
absence of certain program behaviors by classifying phrases
according to the kinds of values they compute.

(Benjamin Pierce, Types and Programming Languages, MIT Press, pg. 1)

Key words include:

- tractable: it's not sufficient to just evaluate the program

- syntactic: types are tied to the kinds of expressions in the language

- certain program behaviors: while perhaps confusing out of context,
there is nowhere in the book a specification of which program
behaviors may be prevented by type systems and which may not. In
context, the word "certain" there is meant to make it clear that type
systems should be able to specifically identify which behaviors they
prevent, and not that there is some universal set.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 19 '06 #64
Pascal Costanza <pc@p-cos.net> writes:
Matthias Blume wrote:
"Rob Thorpe" <ro***********@antenova.com> writes:
I don't think dynamic typing is that nebulous. I remember this being
discussed elsewhere some time ago; I'll post the same reply I did then
...
A language is statically typed if a variable has a property - called
its type - attached to it, and given its type it can only represent
values defined by a certain class.

By this definition, all languages are statically typed (by making
that
"certain class" the set of all values). Moreover, this "definition",
when read the way you probably wanted it to be read, requires some
considerable stretch to accommodate existing static type systems such
as F_\omega.
Perhaps better: A language is statically typed if its definition
includes (or even better: is based on) a static type system, i.e., a
static semantics with typing judgments derivable by typing rules.
Usually typing judgments associate program phrases ("expressions") with
types given a typing environment.


How does your definition exclude the trivial type system in which the
only typing judgment states that every expression is acceptable?


It does not.
Jun 19 '06 #65

Chris Smith wrote:

Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
interject my plea not to abuse the word "type" with a phrase like
"dynamically typed".
Allow me to strenuously object. The static typing community has its
own set of
terminology and that's fine. However, we Lisp hackers are not used to
this terminology.
It confuses us. *We* know what we mean by `dynamically typed', and we
suspect *you* do, too.

This cleaner terminology eliminates a lot of confusion.
Hah! Look at the archives.
This isn't just a matter of preference in terminology.
No?
If types DON'T mean a compile-time method for proving the
absence of certain program behaviors, then they don't mean anything at
all.


Nonsense.

Jun 19 '06 #66
Joe Marshall <ev********@gmail.com> wrote:

Chris Smith wrote:

Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
interject my plea not to abuse the word "type" with a phrase like
"dynamically typed".


Allow me to strenuously object. The static typing community has its
own set of
terminology and that's fine. However, we Lisp hackers are not used to
this terminology.
It confuses us. *We* know what we mean by `dynamically typed', and we
suspect *you* do, too.


I know what you mean by types in LISP. The phrase "dynamically typed,"
though, was explicitly introduced as a counterpart to "statically
typed" in order to imply (falsely) that the word "typed" has related
meanings in those two cases. Nevertheless, I did not really object,
since it's long since passed into common usage, until Torben attempted
to give what I believe are rather meaningless definitions to those
words, in terms of some mythical thing called "type violations" that he
seems to believe exist apart from any specific type systems.
(Otherwise, how could you define kinds of type systems in terms of when
they catch type violations?)
This cleaner terminology eliminates a lot of confusion.


Hah! Look at the archives.


I'm not sure what you mean here. You would like me to look at the
archives of which of the five groups that are part of this conversation?
In any case, the confusion I'm referring to pertains to comparison of
languages, and it's already been demonstrated once in the half-dozen or
so responses to this branch of this thread.
If types DON'T mean a compile-time method for proving the
absence of certain program behaviors, then they don't mean anything at
all.


Nonsense.


Please accept my apologies for not making the context clear. I tried to
clarify, in my response to Pascal, that I don't mean that the word
"type" can't have any possible meaning except for the one from
programming language type theory. I should modify my statement as
follows:

An attempt to generalize the definition of "type" from programming
language type theory to eliminate the requirement that they are
syntactic in nature yields something meaningless. Any concept of
"type" that is not syntactic is a completely different thing from
static types.

Basically, I start objecting when someone starts comparing "statically
typed" and "dynamically typed" as if they were both varieties of some
general concept called "typed". They aren't. Furthermore, these two
phrases were invented under the misconception that they are. If you
mean something else by types, such as the idea that a value has a tag
indicating its range of possible values, then I tend to think it would
be less confusing to just say "type" and then clarify the meaning if it
comes into doubt, rather than adopting language that implies that those
types are somehow related to types from type theory.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 19 '06 #67
Chris Smith wrote:

Basically, I start objecting when someone starts comparing "statically
typed" and "dynamically typed" as if they were both varieties of some
general concept called "typed". They aren't. Furthermore, these two
phrases were invented under the misconception that they are. If you
mean something else by types, such as the idea that a value has a tag
indicating its range of possible values, then I tend to think it would
be less confusing to just say "type" and then clarify the meaning if it
comes into doubt, rather than adopting language that implies that those
types are somehow related to types from type theory.


While I am quite sympathetic to this point, I have to say that
this horse left the barn quite some time ago.
Marshall

PS. Hi Chris!

Jun 19 '06 #68
Marshall <ma*************@gmail.com> wrote:
While I am quite sympathetic to this point, I have to say that
this horse left the barn quite some time ago.
I don't think so. Perhaps it's futile to go scouring the world for uses
of the phrase "dynamic type" and eliminating them. It's not useless to
point out when the term is used in a particularly confusing way, though,
as when it's implied that there is some class of "type errors" that is
strictly a subset of the class of "errors". Terminology is often
confused for historical reasons, but incorrect statements eventually get
corrected.
PS. Hi Chris!


Hi! Where are you posting from these days?

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 19 '06 #69
Chris Smith wrote:
Marshall <ma*************@gmail.com> wrote:
While I am quite sympathetic to this point, I have to say that
this horse left the barn quite some time ago.


I don't think so. Perhaps it's futile to go scouring the world for uses
of the phrase "dynamic type" and eliminating them. It's not useless to
point out when the term is used in a particularly confusing way, though,
as when it's implied that there is some class of "type errors" that is
strictly a subset of the class of "errors". Terminology is often
confused for historical reasons, but incorrect statements eventually get
corrected.


That's fair.

One thing that is frustrating to me is that I really want to build
an understanding of what dynamic typing is and what its
advantages are, but it's difficult to have a productive discussion
on the static vs. dynamic topic. Late binding of every function
invocation: how does that work, what are the implications of that,
what does that buy you?

I have come to believe that the two actually represent
very different ways of thinking about programming. Which
goes a long way to explaining why there are such difficulties
communicating. Each side tries to describe to the other how
the tools and systems they use facilitate doing tasks that
don't exist in the other side's mental model.

PS. Hi Chris!


Hi! Where are you posting from these days?


I'm mostly on comp.databases.theory, but I also lurk
on comp.lang.functional, which is an awesome group, and
if you're reading Pierce then you might like it too.
Marshall

Jun 19 '06 #70
Yet Another Dan sez:

... Requiring an array index to be an integer is considered a typing
problem because it can be checked based on only the variable itself,
whereas checking whether it's in bounds requires knowledge about the array.


You mean like
subtype MyArrayIndexType is INTEGER range 7 .. 11;
type MyArrayType is array (MyArrayIndexType) of MyElementType;

Dima
--
We're sysadmins. Sanity happens to other people. -- Chris King
Jun 19 '06 #71
Chris Smith wrote:
Marshall <ma*************@gmail.com> wrote:
While I am quite sympathetic to this point, I have to say that
this horse left the barn quite some time ago.


I don't think so. Perhaps it's futile to go scouring the world for uses
of the phrase "dynamic type" and eliminating them. It's not useless to
point out when the term is used in a particularly confusing way, though,
as when it's implied that there is some class of "type errors" that is
strictly a subset of the class of "errors". Terminology is often
confused for historical reasons, but incorrect statements eventually get
corrected.


Ok, so you (Chris Smith) object to the term "dynamic type". From a
historical perspective it makes sense, particularly in the sense of
tags. A dynamic type system involves associating tags with values and
expresses variant operations in terms of those tags, with certain
disallowed combinations checked (dynamically) at run-time. A static
type system eliminates some set of tags on values by syntactic
analysis of annotations (types) written with or as part of the program
and detects some of the disallowed computations (statically) at compile
time.

Type errors are the errors caught by one's personal sense of which
annotations are expressible, computable, and useful. Thus, each
person has a distinct sense of which errors can (and/or should) be
statically caught.

A personal example: I read news using Emacs, which, as most readers in
these newsgroups will know, is written in a dialect of lisp that
includes primitives to edit files. Most of emacs is quite robust,
perhaps due to it being lisp. However, some commands (reading news
being one of them) are exceptionally fragile and fail in the most
undebuggable ways, often just complaining about "nil being an invalid
argument" (or "stack overflow in regular expression matcher").

This is particularly true when I am composing lisp code. If I write
some lisp code and make a typographical error, the code may appear to
run on some sample case, then fail on real data along a path my sample
case did not explore.

I consider such an error to be a "type error" because I believe that if
I used a language that required more type annotations, the compiler
would have caught my typographical error (and no, I'm not making a pun
on type and typographical). Because I do a lot of this--it is easy
enough for me to conjure up a small lisp macro to make some edit that
it has become a primary tool in my toolkit--I wish I could make doing
so more robust. It would not bother me to have to type more to do so.
I simply make too many stupid errors to use emacs lisp as effectively
as I would like.

Now, this has nothing to do with real lisp programming, and so I do
not wish to offend those who do that. However, I personally would
like a statically typed way of writing macros (and, more importantly, of
annotating some of the existing emacs code) to remove some of the
fragilities that I experience. I'm not taking advantage of the
exploratory nature of lisp, except in the sense that the exploratory
nature has created lots of packages which mostly work most of the
time.

Now, I will return this group to its much more erudite discussion of
the issue.

Thank you,
-Chris

*******************************************************************************
Chris Clark Internet : co*****@world.std.com
Compiler Resources, Inc. Web Site : http://world.std.com/~compres
23 Bailey Rd voice : (508) 435-5016
Berlin, MA 01503 USA fax : (978) 838-0263 (24 hours)
------------------------------------------------------------------------------

Jun 19 '06 #72

Chris Smith wrote:
Joe Marshall <ev********@gmail.com> wrote:

Chris Smith wrote:

Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
interject my plea not to abuse the word "type" with a phrase like
"dynamically typed".
Allow me to strenuously object. The static typing community has its
own set of
terminology and that's fine. However, we Lisp hackers are not used to
this terminology.
It confuses us. *We* know what we mean by `dynamically typed', and we
suspect *you* do, too.


I know what you mean by types in LISP. The phrase "dynamically typed,"
though, was explicitly introduced as a counterpart to "statically
typed" in order to imply (falsely) that the word "typed" has related
meanings in those two cases.


They *do* have a related meaning. Consider this code fragment:
(car "a string")

Obviously this code is `wrong' in some way. In static typing terms, we
could say that
we have a type error because the primitive procedure CAR doesn't
operate on strings.
Alternatively, we could say that since Lisp has one universal type (in
static type terms)
the code is correctly statically typed - that is, CAR is defined on all
input, but its definition is to raise a runtime exception when passed
a string.

But regardless of which way you want to look at it, CAR is *not* going
to perform its usual computation and it is *not* going to return a
value. The reason behind this is that you cannot take the CAR of a
string. A string is *not* a valid argument to CAR. Ask anyone why and
they will tell you `It's the wrong type.'

Both `static typing' and `dynamic typing' (in the colloquial sense) are
strategies to detect this sort of error.
Nevertheless, I did not really object,
since it's long since passed into common usage,
Exactly. And you are far more likely to encounter this sort of usage
outside of a type theorist's convention.

until Torben attempted
to give what I believe are rather meaningless definitions to those
words, in terms of some mythical thing called "type violations" that he
seems to believe exist apart from any specific type systems.
It's hardly mythical. (car "a string") is obviously an error and you
don't need a static type system to know that.
This cleaner terminology eliminates a lot of confusion.


Hah! Look at the archives.


I'm not sure what you mean here. You would like me to look at the
archives of which of the five groups that are part of this conversation?
In any case, the confusion I'm referring to pertains to comparison of
languages, and it's already been demonstrated once in the half-dozen or
so responses to this branch of this thread.


I mean that this has been argued time and time again in comp.lang.lisp
and probably the other groups as well. You may not like the fact that
we say that Lisp is dynamically typed, but *we* are not confused by
this usage. In fact, we become rather confused when you say `a
correctly typed program cannot go wrong at runtime' because we've seen
plenty of runtime errors from code that is `correctly typed'.
If types DON'T mean a compile-time method for proving the
absence of certain program behaviors, then they don't mean anything at
all.


Nonsense.


Please accept my apologies for not making the context clear. I tried to
clarify, in my response to Pascal, that I don't mean that the word
"type" can't have any possible meaning except for the one from
programming language type theory. I should modify my statement as
follows:

An attempt to generalize the definition of "type" from programming
language type theory to eliminate the requirement that they are
syntactic in nature yields something meaningless. Any concept of
"type" that is not syntactic is a completely different thing from
static types.


Agreed. That is why there is the qualifier `dynamic'. This indicates
that it is a completely different thing from static types.
Basically, I start objecting when someone starts comparing "statically
typed" and "dynamically typed" as if they were both varieties of some
general concept called "typed". They aren't.
I disagree. There is clearly a concept that there are different
varieties of data and they are not interchangeable. In some languages,
it is up to the programmer to ensure that mistakes in data usage do not
happen. In other languages, the computer can detect such mistakes and
prevent them. If this detection is performed by syntactic analysis
prior to running the program, it is static typing. Some languages like
Lisp defer the detection until the program is run. Call it what you
want, but here in comp.lang.lisp we tend to call it `dynamic typing'.
Furthermore, these two
phrases were invented under the misconception that they are. If you
mean something else by types, such as the idea that a value has a tag
indicating its range of possible values, then I tend to think it would
be less confusing to just say "type" and then clarify the meaning if it
comes into doubt, rather than adopting language that implies that those
types are somehow related to types from type theory.


You may think it would be less confusing, but if you look at the
archives of comp.lang.lisp you would see that it is not.

We're all rubes here, so don't try to educate us with your
high-falutin' technical terms.

Jun 19 '06 #73
Joe Marshall wrote:

They *do* have a related meaning. Consider this code fragment:
(car "a string")
[...]
Both `static typing' and `dynamic typing' (in the colloquial sense) are
strategies to detect this sort of error.
The thing is, though, that putting it that way makes it seem as
if the two approaches are doing the same exact thing, but
just at different times: runtime vs. compile time. But they're
not the same thing. Passing the static check at compile
time is universally quantifying the absence of the class
of error; passing the dynamic check at runtime is existentially
quantifying the absence of the error. A further difference is
the fact that in the dynamically typed language, the error is
found during the evaluation of the expression; in a statically
typed language, errors are found without attempting to evaluate
the expression.
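
A small sketch of that difference, in OCaml (names invented for
illustration): the statically rejected version fails before any run at
all, over all paths, while the tagged version fails only on the path
that is actually taken.

type tagged = Int of int | Str of string

let add_one = function
  | Int n -> Int (n + 1)
  | Str _ -> failwith "wrong type: expected an Int"

(* Static view: this definition is rejected before any run, because
   the two branches of the if cannot be given one type:
     let f flag = if flag then 1 else "one"                          *)

(* Dynamic view: the same mistake, written with tags, compiles and
   fails only on the execution path that actually reaches it.        *)
let g flag = add_one (if flag then Int 1 else Str "one")

let ok = g true     (* Int 2: every run that takes this path succeeds *)
(* g false            raises Failure, but only when that path is run  *)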

I find everything about the differences between static and
dynamic to be frustratingly complex and subtle.

(To be clear, I do know that Joe understands these issues
quite well.)

So I kind of agree with Chris, insofar as I think the terminology
plays a role in obscuring rather than illuminating the differences.

On the other hand I agree with Joe in that:
I mean that this has been argued time and time again in comp.lang.lisp
and probably the other groups as well. You may not like the fact that
we say that Lisp is dynamically typed, but *we* are not confused by
this usage. In fact, we become rather confused when you say `a
correctly typed program cannot go wrong at runtime' because we've seen
plenty of runtime errors from code that is `correctly typed'.


Yes; as I said earlier, the horse has left the barn on this one.

The conversation I would *really* like to have is the one where we
discuss what all the differences are, functionally, between the two,
and what the implications of those differences are, without trying
to address which approach is "right" or "better", because those are
dependent on the problem domain anyway, and because I can
make up my own mind just fine about which one I prefer.

The comp.lang.functional and comp.lang.lisp people are probably
two of the smartest groups on usenet. (I do not consider myself
a member of either group.) You wouldn't *think* that conversation
would be *so* hard to have.
Marshall

Jun 20 '06 #74
Joe Marshall <ev********@gmail.com> wrote:
They *do* have a related meaning. Consider this code fragment:
(car "a string")
My feeling is that this code is obviously wrong. It is so obviously
wrong, in fact, that the majority of automated error detection systems,
if written for Lisp, would probably manage to identify it as wrong at
some point. This includes the Lisp runtime. So far, I haven't
mentioned types.
A string is *not* a valid argument to CAR. Ask anyone why and
they will tell you `It's the wrong type.'
Indeed, they will. We're assuming, of course that they know a little
Lisp... otherwise, it may be entirely reasonable for someone to expect
that (car "a string") is 'a' and (cdr "a string") is " string"... but
I'll ignore that, even though I haven't yet convinced myself that it's
not relevant to why this is colloquially considered a type error.

I believe that in this sense, the 'anyone' actually means "type" in the
sense that you mean "type". The fact that a static type system detects
this error is somewhat coincidental (except for the fact that, again,
any general error-detection scheme worth its salt probably detects this
error), and orthogonal to whether it is considered a type error by our
hypothetical 'anyone'.
Both `static typing' and `dynamic typing' (in the colloquial sense) are
strategies to detect this sort of error.
I will repeat that static typing is a strategy for detecting errors in
general, on the basis of tractable syntactic methods. There are some
types of errors that are easier to detect in such a system than
others... but several examples have been given of problems solved by
static type systems that are not of the colloquial "It's the wrong
type" variety that you mention here. The examples so far have included
detecting division by zero, or array bounds checking. Other type
systems can check dimensionality (correct units).

Another particularly interesting example may be the following from
Ocaml:

let my_sqrt x = if x < 0.0 then None else Some(sqrt(x));;

Then, if I attempt to use my_sqrt in a context that requires a float,
the compiler will complain about a type violation, since the type of the
expression is "float option". So this is a type error *in Ocaml*, but
it's not the kind of thing that gets intuitively classified as a type
error. In fact, it's roughly equivalent to a NullPointerException at
runtime in Java, and few Java programmers would consider a
NullPointerException to be somehow "actually a type error" that the
compiler just doesn't catch. In this case, when the error appears in
Ocaml, it appears to be "obviously" a type error, but that's only
because the type system was designed to catch some class of program
errors, of which this is a member.
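
For concreteness, here is roughly what that looks like at the point of
use (the caller code below is a made-up sketch, not from the thread):

let my_sqrt x = if x < 0.0 then None else Some (sqrt x)

(* let y = 1.0 +. my_sqrt 2.0 *)
(* rejected: my_sqrt 2.0 has type float option where a float is expected *)

(* The caller is forced to say what happens in the None case, much as
   a Java caller ought to (but is not forced to) check for null: *)
let y =
  match my_sqrt 2.0 with
  | Some r -> r
  | None   -> nan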
It's hardly mythical. (car "a string") is obviously an error and you
don't need a static type system to know that.
Sure. The question is whether it means much to say that it's a "type
error". So far, I'd agree with either of two statements, depending on
the usage of the word "type":

a) Yes, it means something, but Torben's definition of a static type
system was wrong, because static type systems are not specifically
looking for type errors.

or

b) No, "type error" just means "error that can be caught by the type
system", so it is circular and meaningless to use the phrase in defining
a kind of type system.
I mean that this has been argued time and time again in comp.lang.lisp
and probably the other groups as well.
My apologies, then. It has not been discussed so often in any newsgroup
that I followed up until now, though Marshall has now convinced me to
read comp.lang.functional, so I might see these endless discussions from
now on.
In fact, we become rather confused when you say `a
correctly typed program cannot go wrong at runtime' because we've seen
plenty of runtime errors from code that is `correctly typed'.
Actually, I become a little confused by that as well. I suppose it
would be true of a "perfect" static type system, but I haven't seen one
of those yet. (Someone did email me earlier today to point out that the
type system of a specification language called LOTOS supposedly is
perfect in that sense, that every correctly typed program is also
correct, but I've yet to verify this for myself. It seems rather
difficult to believe.)

Unless I suddenly have some kind of realization in the future about the
feasibility of a perfect type system, I probably won't make that
statement that you say confuses you.
An attempt to generalize the definition of "type" from programming
language type theory to eliminate the requirement that they are
syntactic in nature yields something meaningless. Any concept of
"type" that is not syntactic is a completely different thing from
static types.


Agreed. That is why there is the qualifier `dynamic'. This indicates
that it is a completely different thing from static types.


If we agree about this, then there is no need to continue this
discussion. I'm not sure we do agree, though, because I doubt we'd be
right here in this conversation if we did.

This aspect of being a "completely different thing" is why I objected to
Torben's statement of the form: static type systems detect type
violations at compile time, whereas dynamic type systems detect type
violations at runtime. The problem with that statement is that "type
violations" means different things in the first and second half, and
that's obscured by the form of the statement. It would perhaps be
clearer to say something like:

Static type systems detect some bugs at compile time, whereas
dynamic type systems detect type violations at runtime.

Here's one interesting consequence of the change. It is easy to
recognize that static and dynamic type systems are largely orthogonal to
each other in the sense above. Java, for example (since that's one of
the newsgroups on the distribution list for this thread), restricting
the field of view to reference types for simplicity's sake, clearly has
both a very well-developed static type system, and a somewhat well-
developed dynamic type system. There are dynamic "type" errors that
pass the compiler and are caught by the runtime; and there are errors
that are caught by the static type system. There is indeed considerable
overlap involved, but nevertheless, neither is made redundant. In fact,
one way of understanding the headaches of Java 1.5 generics is to note
that the two different meanings of "type errors" are no longer in
agreement with each other!
We're all rubes here, so don't try to educate us with your
high-falutin' technical terms.


That's not my intention.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 20 '06 #75
I <cd*****@twu.net> wrote:
Static type systems detect some bugs at compile time, whereas
dynamic type systems detect type violations at runtime.


PS: In order to satisfy the Python group (among others not on the cross-
post list), we'd need to include "duck typing," which fits neither of
the two definitions above. We'd probably need to modify the definition
of dynamic type systems, since most sources tend to classify it as a
dynamic type system. It's getting rather late, though, and I don't
intend to think about how to do that.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 20 '06 #76
In article <11*********************@h76g2000cwa.googlegroups. com>,
"Marshall" <ma*************@gmail.com> wrote:
The conversation I would *really* like to have is the one where we
discuss what all the differences are, functionally, between the two,
and what the implications of those differences are, without trying
to address which approach is "right" or "better", because those are
dependent on the problem domain anyway, and because I can
make up my own mind just fine about which one I prefer.

The comp.lang.functional and comp.lang.lisp people are probably
two of the smartest groups on usenet. (I do not consider myself
a member of either group.) You wouldn't *think* that conversation
would be *so* hard to have.


It's hard to have because there's very little to say, which leaves the
academics without enough to do to try to justify their existence. This
is the long and the short of it:

1. There are mismatches between the desired behavior of code and its
actual behavior. Such mismatches are generally referred to as "bugs" or
"errors" (and, occasionally, "features").

2. Some of those mismatches can be detected before the program runs.
Some can be detected while the program runs. And some cannot be
detected until after the program has finished running.

3. The various techniques for detecting those mismatches impose varying
degrees of burden upon the programmer and the user.

That's it. Everything else, including but not limited to quibbling over
the meaning of the word "type", is nothing but postmodernist claptrap.

IMHO of course.

rg
Jun 20 '06 #77
In comp.lang.functional Chris Smith <cd*****@twu.net> wrote:
[...]
Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
interject my plea not to abuse the word "type" with a phrase like
"dynamically typed". If anyone considers "untyped" to be perjorative,
as some people apparently do, then I'll note that another common term is
"type-free," which is marketing-approved but doesn't carry the
misleading connotations of "dynamically typed." We are quickly losing
any rational meaning whatsoever to the word "type," and that's quite a
shame.

[...]

FWIW, I agree and have argued similarly on many occasions (both on the
net (e.g. http://groups.google.fi/group/comp.p...734313a?hl=fi&)
and person-to-person). The widely used terminology (statically /
dynamically typed, weakly / strongly typed) is extremely confusing to
beginners and even to many with considerable practical experience.

-Vesa Karvonen
Jun 20 '06 #78
Chris Smith wrote:
Rob Thorpe <ro***********@antenova.com> wrote:
A language is latently typed if a value has a property - called it's
type - attached to it, and given it's type it can only represent values
defined by a certain class.
I'm assuming you mean "class" in the general sense, rather than in the
sense of a specific construct of some subset of OO programming
languages.


Yes.
Now I define a class of values called "correct" values. I define these
to be those values for which my program will produce acceptable results.
Clearly there is a defined class of such values: (1) they are
immediately defined by the program's specification for those lines of
code that produce output; (2) if they are defined for the values that
result from any expression, then they are defined for the values that
are used by that expression; and (3) for any value for which correctness
is not defined by (1) or (2), we may define its "correct" values as the
class of all possible values.
I'm not talking about correctness, I'm talking about typing.
Now, by your definition, any language
which provides checking of that property of correctness for values is
latently typed.
No, that isn't what I said. What I said was:
"A language is latently typed if a value has a property - called it's
type - attached to it, and given it's type it can only represent values
defined by a certain class."

I said nothing about the values producing correct outputs, or being
correct inputs. I only said that they have types.

What I mean is that each value in the language is defined by two pieces
of information, its contents X and its type T.
Of course, there are no languages that assign this
specific class of values; but ANY kind of correctness checking on values
that a language does (if it's useful at all) is a subset of the perfect
correctness checking system above. Apparently, we should call all such
systems "latent type systems".
No, I'm speaking about type checking not correctness.
Can you point out a language that is not
latently typed?
Easy, any statically typed language is not latently typed. Values have
no type associated with them, instead variables have types.

If I tell a C program to print out a string but give it a number it
will give an error telling me that the types mismatch where the
variable the number is held in is used to assign to a variable that
must hold a string. Similarly if I have a lisp function that prints
out a string it will also fail when given a number, but it will fail at
a different point, it will fail when the type of the value is examined
and found to be incorrect.
I'm not trying to poke holes in your definition for fun. I am proposing
that there is no fundamental distinction between the kinds of problems
that are "type problems" and those that are not. Types are not a class
of problems; they are a class of solutions.
Exactly. Which is why they are only tangentially associated with
correctness.
Typing is a set of rules put in place to aid correctness, but it is not
a system that attempts to create correctness itself.
Languages that solve
problems in ways that don't assign types to variables are not typed
languages, even if those same problems may have been originally solved
by type systems.


Well, you can think of things that way. But to the rest of the
computing world languages that don't assign types to variables but do
assign them to values are latently typed.
Untyped and type-free mean something else: they mean no type checking
is done.


Hence, they don't exist, and the definitions being used here are rather
pointless.


No they aren't: types of data exist even if there is no system in place
to check them. Ask an assembly programmer whether his programs have
strings and integers in them and he will probably tell you that they do.

Jun 20 '06 #79
Chris F Clark wrote:

A static
type system eliminates some set of tags on values by syntactic
analysis of annotations (types) written with or as part of the program
and detects some of the disallowed computations (statically) at compile
time.


Explicit annotations are not a necessary ingredient of a type system,
nor is "eliminating tags" very relevant to its function.

- Andreas
Jun 20 '06 #80
Marshall wrote:
The conversation I would *really* like to have is the one where we
discuss what all the differences are, functionally, between the two,
and what the implications of those differences are, without trying
to address which approach is "right" or "better", because those are
dependent on the problem domain anyway, and because I can
make up my own mind just fine about which one I prefer.


My current take on this is that static typing and dynamic typing are
incompatible, at least in their "extreme" variants.

The simplest examples I have found are this:

- In a statically typed language, you can have variables that contain
only first-class functions at runtime that are guaranteed to have a
specific return type. Other values are rejected, and the rejection
happens at compile time. (See the sketch at the end of this post.)

In dynamically typed languages, this is impossible because you can never
be sure about the types of return values - you cannot predict the
future. This can at best be approximated.
- In a dynamically typed language, you can run programs successfully
that are not acceptable by static type systems. Here is an example in
Common Lisp:

; A class "person" with no superclasses and with the only field "name":
(defclass person ()
(name))

; A test program:
(defun test ()
(let ((p (make-instance 'person)))
(eval (read))
(slot-value p 'address)))

(slot-value p 'address) is an attempt to access the field 'address in
the object p. In many languages, the notation for this is p.address.

Although the class definition for person doesn't mention the field
address, the call to (eval (read)) allows the user to change the
definition of the class person and update its existing instances.
Therefore at runtime, the call to (slot-value p 'address) has a chance to
succeed.

(Even without the call to (eval (read)), in Common Lisp the call to
(slot-value p 'address) would raise an exception which gives the user a
chance to fix things and continue from the point in the control flow
where the exception was raised.)

I cannot imagine a static type system which has a chance to predict that
this program can successfully run without essentially accepting all
kinds of programs.

At least, development environments for languages like Smalltalk, Common
Lisp, Java, etc., make use of such program updates to improve
edit-compile-test cycles. However, it is also possible (and done in
practice) to use such program updates to minimize downtimes when adding
new features to deployed systems.
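
Here is the sketch referred to in the first point above - an
illustrative example of mine, written with java.util.function from a
later Java purely for brevity. The variable f can only ever hold a
function whose return type is int, and anything else is rejected before
the program runs:

import java.util.function.IntSupplier;
import java.util.function.Supplier;

public class TypedFunctionVariable {  // illustrative sketch only
    public static void main(String[] args) {
        // f may only hold functions returning int; the guarantee is
        // established at compile time.
        IntSupplier f = () -> 42;
        System.out.println(f.getAsInt());

        // A function with a different return type is a compile-time
        // error - uncommenting the next line makes the program invalid:
        // IntSupplier g = () -> "hello";

        // A String-returning function needs a different static type.
        Supplier<String> h = () -> "hello";
        System.out.println(h.get());
    }
}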
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 20 '06 #81
Rob Thorpe wrote:

No, that isn't what I said. What I said was:
"A language is latently typed if a value has a property - called it's
type - attached to it, and given it's type it can only represent values
defined by a certain class."
"it [= a value] [...] can [...] represent values"?
Easy, any statically typed language is not latently typed. Values have
no type associated with them, instead variables have types.


A (static) type system assigns types to (all) *expressions*. This
includes values as well as variables.

Don't confuse type assignment with type annotation (which many
mainstream languages enforce for, but also only allow for, variable
declarations).

- Andreas
Jun 20 '06 #82
Andreas Rossberg wrote:
Rob Thorpe wrote:

No, that isn't what I said. What I said was:
"A language is latently typed if a value has a property - called it's
type - attached to it, and given it's type it can only represent values
defined by a certain class."
"it [= a value] [...] can [...] represent values"?


???
Easy, any statically typed language is not latently typed. Values have
no type associated with them, instead variables have types.


A (static) type system assigns types to (all) *expressions*.


That's right most of the time yes, I probably should have said
expressions. Though I can think of static typed languages where the
resulting type of an expression depends on the type of the variable it
is being assigned to.
This includes values as well as variables.
Well I haven't programmed in any statically typed language where values
have types themselves. Normally the language only knows that a value
Z is of type Q because it's held in a variable of type Q or, as you
mention, in an expression of type Q. There are many things that could be
considered more than one type. The integer 45 could be unsigned 45 or
signed 45 or even float 45 depending on the variable it's in, but
without context it doesn't have a precise type.

It does imply the type to some extent though, you could imagine a
language where every value has a precise type. So, you've found a hole
in my definition.

Maybe a better definition would be:-

if (variables have types || expressions have types) <lang is statically
typed>
else if (values have types) <lang is latently/dynamically typed>
else <lang is untyped>

That seems to fit usage, quite well.

Even then there are problems. Perl has static types for arrays, hashes
and scalars. But scalars themselves can be integers, strings, etc.
Don't confuse type assignment with type annotation (which many
mainstream languages enforce for, but also only allow for, variable
declarations).


Point taken.

Jun 20 '06 #83
Andreas Rossberg <ro******@ps.uni-sb.de> writes:
"A language is latently typed if a value has a property - called it's
type - attached to it, and given it's type it can only represent values
defined by a certain class."


I thought the point was to separate the (bitwise) representation of a
value from its interpretation (which is its type). In a static
system, the interpretation is derived from context; in a dynamic
system, values must carry some form of tag specifying which
interpretation to use.

I think this applies - conceptually, at least - also to expressions?

My impression is that dynamic typers tend to include more general
properties in their concept of types (div by zero, sqrt of negatives,
etc).

-k
--
If I haven't seen further, it is by standing in the footprints of giants
Jun 20 '06 #84
Rob Thorpe wrote:
No, that isn't what I said. What I said was:
"A language is latently typed if a value has a property - called it's
type - attached to it, and given it's type it can only represent values
defined by a certain class."
"it [= a value] [...] can [...] represent values"?


???


I just quoted, in condensed form, what you said above: namely, that a
value represents values - which I find a strange and circular definition.
A (static) type system assigns types to (all) *expressions*.


That's right most of the time yes, I probably should have said
expressions. Though I can think of static typed languages where the
resulting type of an expression depends on the type of the variable it
is being assigned to.


Yes, but that's no contradiction. A type system does not necessarily
assign *unique* types to individual expressions (consider overloading,
subtyping, polymorphism, etc).
Well I haven't programmed in any statically typed language where values
have types themselves.


They all have - the whole purpose of a type system is to ensure that any
expression of type T always evaluates to a value of type T. So when you
look at type systems formally then you certainly have to assign types to
values, otherwise you couldn't prove any useful property about those
systems (esp. soundness).

- Andreas
Jun 20 '06 #85
Rob Thorpe <ro***********@antenova.com> wrote:
I'm not talking about correctness, I'm talking about typing.

Since you wrote that, I've come to understand that you meant something
specific by "property" which I didn't understand at first. From my
earlier perspective, it was obvious that correctness was a property of a
value. I now realize that you meant a property that's explicitly
associated with the value and plays a role in determining the behavior
of the language. Sorry for the confusion.
No, that isn't what I said. What I said was:
"A language is latently typed if a value has a property - called it's
type - attached to it, and given it's type it can only represent values
defined by a certain class."
No, to answer Andreas' concern, you would only need to say:

... if a value has a property - called its type - attached to it,
and the language semantics guarantees that only values defined by a
certain class may have that same property attached.
Easy, any statically typed language is not latently typed.


I'm actually not sure I agree with this at all. I believe that
reference values in Java may be said to be latently typed. This is the
case because each reference value (except null, which may be tested
separately) has an explicit property (called its "class", but surely the
word doesn't make any difference), such that the language semantics
guarantees that only a certain class of values may have that same
property, and the property is used to determine behavior of the language
in many cases (for example, in the case of type-based polymorphism, or
use of Java's instanceof operator). Practically all class-based OO
languages are subject to similar consideration, as it turns out.

I'm unsure whether to consider explicitly stored array lengths, which
are present in most statically typed languages, to be part of a "type"
in this sense or not.
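
As a rough illustration of that point (my own example, not a claim
about anyone's code): every Java reference value carries its runtime
class, and the language consults that property for instanceof, checked
casts, and array stores, independently of the static type of the
expression at hand.

public class LatentReferenceTypes {  // illustrative sketch only
    public static void main(String[] args) {
        Object o = "hello";  // static type Object, runtime class String

        // The property attached to the value drives behaviour:
        System.out.println(o.getClass());          // class java.lang.String
        System.out.println(o instanceof String);   // true

        // A downcast is checked against the runtime property, not the
        // static type; this one fails with a ClassCastException.
        try {
            Integer i = (Integer) o;
        } catch (ClassCastException e) {
            System.out.println("runtime check rejected the cast");
        }

        // Arrays carry their element type and length with the value:
        Object[] xs = new String[2];
        try {
            xs[0] = Integer.valueOf(1);  // statically fine, rejected at runtime
        } catch (ArrayStoreException e) {
            System.out.println("element type enforced dynamically");
        }
        System.out.println(xs.length);   // the length travels with the value
    }
}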

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 20 '06 #86
Andreas Rossberg wrote:
Rob Thorpe wrote:

No, that isn't what I said. What I said was:
"A language is latently typed if a value has a property - called it's
type - attached to it, and given it's type it can only represent values
defined by a certain class."

"it [= a value] [...] can [...] represent values"?


???


I just quoted, in condensed form, what you said above: namely, that a
value represents values - which I find a strange and circular definition.


But you left out the most significant part: "given it's type it can only
represent values *defined by a certain class*" (my emphasis). In C-ish
notation:

unsigned int x;

means that x can only represent integer elements of
the set (class) of values [0, MAX_INT]. Negative numbers and non-integer
numbers are excluded, as are all sorts of other things.

You over-condensed.

DS

NB. This is not a comment on static, latent, derived or other typing,
merely on summarization.

Jun 20 '06 #87
Andreas Rossberg wrote:
Rob Thorpe wrote:

No, that isn't what I said. What I said was:
"A language is latently typed if a value has a property - called it's
type - attached to it, and given it's type it can only represent values
defined by a certain class."

"it [= a value] [...] can [...] represent values"?
???


I just quoted, in condensed form, what you said above: namely, that a
value represents values - which I find a strange and circular definition.


Yes, but the point is, as the other poster mentioned: values defined by
a class.
For example, in lisp:
"xyz" is a string, #(1 2 3) is an array, '(1 2 3) is a list, 45 is a
fixed-point number.
Each item that can be stored has a type, no item can be stored without
one. The types can be tested. Most dynamic typed languages are like
that.

Compare this to an untyped language where types cannot generally be
tested.
A (static) type system assigns types to (all) *expressions*.


That's right most of the time yes, I probably should have said
expressions. Though I can think of static typed languages where the
resulting type of an expression depends on the type of the variable it
is being assigned to.


Yes, but that's no contradiction. A type system does not necessarily
assign *unique* types to individual expressions (consider overloading,
subtyping, polymorphism, etc).


That's a fair way of looking at it.
Well I haven't programmed in any statically typed language where values
have types themselves.


They all have - the whole purpose of a type system is to ensure that any
expression of type T always evaluates to a value of type T.


But it only guarantees this because the variables themselves have a
type, the values themselves do not. Most of the time the fact that the
variable they are held in has a type implies that the value does too.
But the value itself has no type, in a C program for example I can take
the value from some variable and recast it in any way I feel and the
language cannot correct any errors I make because there is no
information in the variable to indicate what type it is.
So when you
look at type systems formally then you certainly have to assign types to
values, otherwise you couldn't prove any useful property about those
systems (esp. soundness).


Yes, but indirectly.

Jun 20 '06 #88
David Squire wrote:
Andreas Rossberg wrote:
Rob Thorpe wrote:

> No, that isn't what I said. What I said was:
> "A language is latently typed if a value has a property - called it's
> type - attached to it, and given it's type it can only represent
> values
> defined by a certain class."
"it [= a value] [...] can [...] represent values"?
???
I just quoted, in condensed form, what you said above: namely, that a
value represents values - which I find a strange and circular definition.


But you left out the most significant part: "given it's type it can only
represent values *defined by a certain class*" (my emphasis).


That qualification does not remove the circularity from the definition.
In C-ish notation:

unsigned int x;

means that x can only represent elements that are integers elements of
the set (class) of values [0, MAX_INT]. Negative numbers and non-integer
numbers are excluded, as are all sorts of other things.


I don't see how that example is relevant, since the above definition
does not mention variables.

- Andreas
Jun 20 '06 #89
"Rob Thorpe" <ro***********@antenova.com> writes:
But it only guarantees this because the variables themselves have a
type, the values themselves do not.


I think statements like this are confusing, because there are
different interpretations of what a "value" is. I would say that the
integer '4' is a value, and that it has type Integer (for instance).
This value is different from 4 the Int16, or 4 the double-precision
floating point number. From this viewpoint, all values in statically
typed languages have types, but I think you use 'value' to denote the
representation of a datum in memory, which is a different thing.

-k
--
If I haven't seen further, it is by standing in the footprints of giants
Jun 20 '06 #90
Pascal Costanza <pc@p-cos.net> writes:
- In a dynamically typed language, you can run programs successfully
that are not acceptable by static type systems.
This statement is false.

For every program that can run successfully to completion there exists
a static type system which accepts that program. Moreover, there is
at least one static type system that accepts all such programs.

What you mean is that for static type systems that are restrictive
enough to be useful in practice there always exist programs which
(after type erasure in an untyped setting, i.e., by switching to a
different language) would run to completion, but which are rejected by
the static type system.

By the way, the parenthetical remark is important: If a language's
definition is based on a static type system, then there are *no*
programs in that language which are rejected by its type checker.
That's trivially so because strings that do not type-check are simply
not considered programs.

Here is an example in Common Lisp:

; A class "person" with no superclasses and with the only field "name":
(defclass person ()
(name))

; A test program:
(defun test ()
(let ((p (make-instance 'person)))
(eval (read))
(slot-value p 'address)))

(slot-value p 'address) is an attempt to access the field 'address in
the object p. In many languages, the notation for this is p.address.

Although the class definition for person doesn't mention the field
address, the call to (eval (read)) allows the user to change the
definition of the class person and update its existing
instances. Therefore at runtime, the call to (slot-value p 'address)
has a chance to succeed.


I am quite comfortable with the thought that this sort of evil would
get rejected by a statically typed language. :-)
Jun 20 '06 #91
David Squire <Da**********@no.spam.from.here.au> writes:
Andreas Rossberg wrote:
Rob Thorpe wrote:

> No, that isn't what I said. What I said was:
> "A language is latently typed if a value has a property - called it's
> type - attached to it, and given it's type it can only represent values
> defined by a certain class."

"it [= a value] [...] can [...] represent values"?

??? I just quoted, in condensed form, what you said above: namely, that
a value represents values - which I find a strange and circular
definition.


But you left out the most significant part: "given it's type it can
only represent values *defined by a certain class*" (my emphasis). In
C-ish notation:

unsigned int x;

means that x can only represent elements that are integers elements of
the set (class) of values [0, MAX_INT]. Negative numbers and
non-integer numbers are excluded, as are all sorts of other things.


This x is not a value. It is a name of a memory location.
You over-condensed.


Andreas condensed correctly.
Jun 20 '06 #92
Matthias Blume wrote:
David Squire <Da**********@no.spam.from.here.au> writes:
Andreas Rossberg wrote:
Rob Thorpe wrote:
>> No, that isn't what I said. What I said was:
>> "A language is latently typed if a value has a property - called it's
>> type - attached to it, and given it's type it can only represent values
>> defined by a certain class."
> "it [= a value] [...] can [...] represent values"?
???
I just quoted, in condensed form, what you said above: namely, that
a value represents values - which I find a strange and circular
definition.

But you left out the most significant part: "given it's type it can
only represent values *defined by a certain class*" (my emphasis). In
C-ish notation:

unsigned int x;

means that x can only represent elements that are integers elements of
the set (class) of values [0, MAX_INT]. Negative numbers and
non-integer numbers are excluded, as are all sorts of other things.


This x is not a value. It is a name of a memory location.
You over-condensed.


Andreas condensed correctly.


I should have stayed out of this. I had not realised that it had
degenerated to point-scoring off someone typing "value" when it is clear
from context that he meant "variable".

Bye.

DS
Jun 20 '06 #93
Rob Thorpe wrote:
Yes, but the point is, as the other poster mentioned: values defined by
a class.
A value can only represent one value, right? Or can a value have
multiple values?
For example, in lisp:
"xyz" is a string,


"xyz" cannot represent values from the class of strings. It can only
represent one value.

I think that's what the others are getting at.
They all have - the whole purpose of a type system is to ensure that any
expression of type T always evaluates to a value of type T.


But it only guarantees this because the variables themselves have a
type, the values themselves do not.


Sure they do. 23.5e3 is a "real" in Pascal, and there's no variable there.

("hello" % "there") is illegal in most languages, because the modulo
operator doesn't apply to strings. How could this be determined at
compile time if "hello" and "there" don't have types?

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.
Jun 20 '06 #94
"Rob Thorpe" <ro***********@antenova.com> writes:
Andreas Rossberg wrote:
Rob Thorpe wrote:
>>
>>>No, that isn't what I said. What I said was:
>>>"A language is latently typed if a value has a property - called it's
>>>type - attached to it, and given it's type it can only represent values
>>>defined by a certain class."
>>
>>"it [= a value] [...] can [...] represent values"?
>
> ???
I just quoted, in condensed form, what you said above: namely, that a
value represents values - which I find a strange and circular definition.


Yes, but the point is, as the other poster mentioned: values defined by
a class.
For example, in lisp:
"xyz" is a string, #(1 2 3) is an array, '(1 2 3) is a list, 45 is a
fixed-point number.
Each item that can be stored has a type, no item can be stored without
one. The types can be tested. Most dynamic typed languages are like
that.


Your "types" are just predicates.
Compare this to an untyped language where types cannot generally be
tested.
You mean there are no predicates in untyped languages?
They all have - the whole purpose of a type system is to ensure that any
expression of type T always evaluates to a value of type T.


But it only guarantees this because the variables themselves have a
type, the values themselves do not.


Of course they do.
Most of the time the fact that the
variable they are held in has a type implies that the value does too.
But the value itself has no type, in a C program for example I can take
the value from some variable and recast it in any way I feel and the
language cannot correct any errors I make because there is no
information in the variable to indicate what type it is.


Casting in C takes values of one type to values of another type.
So when you
look at type systems formally then you certainly have to assign types to
values, otherwise you couldn't prove any useful property about those
systems (esp. soundness).


Yes, but indirectly.


No, this is usually done very directly and very explicitly.
Jun 20 '06 #95
David Squire <Da**********@no.spam.from.here.au> writes:
Matthias Blume wrote:
David Squire <Da**********@no.spam.from.here.au> writes:
Andreas Rossberg wrote:
Rob Thorpe wrote:
>>> No, that isn't what I said. What I said was:
>>> "A language is latently typed if a value has a property - called it's
>>> type - attached to it, and given it's type it can only represent values
>>> defined by a certain class."
>> "it [= a value] [...] can [...] represent values"?
> ???
I just quoted, in condensed form, what you said above: namely, that
a value represents values - which I find a strange and circular
definition.

But you left out the most significant part: "given it's type it can
only represent values *defined by a certain class*" (my emphasis). In
C-ish notation:

unsigned int x;

means that x can only represent elements that are integers elements of
the set (class) of values [0, MAX_INT]. Negative numbers and
non-integer numbers are excluded, as are all sorts of other things.

This x is not a value. It is a name of a memory location.
You over-condensed.

Andreas condensed correctly.


I should have stayed out of this. I had not realised that it had
degenerated to point-scoring off someone typing "value" when it is
clear from context that he meant "variable".


If he really had meant "variable" then he would have been completely wrong.
Bye.


Bye.
Jun 20 '06 #96
Rob Thorpe wrote:
Andreas Rossberg wrote:
Rob Thorpe wrote:
>"A language is latently typed if a value has a property - called it's
>type - attached to it, and given it's type it can only represent values
>defined by a certain class."

"it [= a value] [...] can [...] represent values"?

???
I just quoted, in condensed form, what you said above: namely, that a
value represents values - which I find a strange and circular definition.


Yes, but the point is, as the other poster mentioned: values defined by
a class.


I'm sorry, but I still think that the definition makes little sense.
Obviously, a value simply *is* a value; it does not "represent" one, or
several ones, regardless of how you qualify that statement.
Well I haven't programmed in any statically typed language where values
have types themselves.


They all have - the whole purpose of a type system is to ensure that any
expression of type T always evaluates to a value of type T.


But it only guarantees this because the variables themselves have a
type,


No, variables are insignificant in this context. You can consider a
language without variables at all (such languages exist, and they can
even be Turing-complete) and still have evaluation, values, and a
non-trivial type system.
But the value itself has no type
You mean that the type of the value is not represented at runtime? True,
but that's simply because the type system is static. It's not the same
as saying it has no type.
in a C program for example I can take
the value from some variable and recast it in any way I feel and the
language cannot correct any errors I make because there is no
information in the variable to indicate what type it is.


Nothing in the C spec precludes an implementation from doing just that.
The problem with C rather is that its semantics is totally
underspecified. In any case, C is about the worst example to use when
discussing type systems. For starters, it is totally unsound - which is
what your example exploits.

- Andreas
Jun 20 '06 #97
Chris F Clark wrote:
A static
type system eliminates some set of tags on values by syntactic
analysis of annotations (types) written with or as part of the program
and detects some of the disallowed computations (statically) at compile
time.

Andreas replied:

Explicit annotations are not a necessary ingredient of a type system,
nor is "eliminating tags" very relevant to its function.


While this is true, I disagree at some level with the second part. By
eliminating tags, I mean allowing one to perform "type safe"
computations without requiring the values to be tagged. One could
argue that the tags were never there. However, many of the
interesting polymorphic computations require either that the values
be tagged or that some other process assures that at each point one
can determine a priori which variant of the computation correctly
applies.

To me a static type system is one which does this a priori
determination. A dynamic type system does not do so a priori and
instead includes explicit information in the values being computed to
select the correct variant computations.

In that sense, a static type system is eliminating tags, because the
information is pre-computed and not explicitly stored as a part of the
computation. Now, you may not view the tag as being there, but in my
mind if there exists a way of performing the computation that requires
tags, the tag was there and that tag has been eliminated.

To put it another way, I consider the tags to be axiomatic. Most
computations involve some decision logic that is driven by distinct
values that have previously been computed. The separation of the
values which drive the computation one way versus another is a tag.
That tag can potentially be eliminated by some a priori computation.
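
As a rough sketch of that idea (my reading of the above, and only an
approximation, since Java's erased generics still leave a class pointer
on every object): the "dynamic" cell below stores an explicit tag with
every value and consults it on each use, while the generic cell relies
on an a priori guarantee, so no tag is stored or checked in the code.

public class TagElimination {  // illustrative sketch only
    // Dynamic flavour: every stored value carries an explicit tag,
    // and every use inspects the tag to pick the right behaviour.
    static final class TaggedCell {
        enum Tag { INT, STRING }
        final Tag tag;
        final Object value;
        TaggedCell(Tag tag, Object value) { this.tag = tag; this.value = value; }

        int asInt() {
            if (tag != Tag.INT) throw new IllegalStateException("type error at runtime");
            return (Integer) value;
        }
    }

    // Static flavour: the element type is fixed a priori by the type
    // parameter, so no tag needs to be stored or consulted.
    static final class Cell<T> {
        final T value;
        Cell(T value) { this.value = value; }
        T get() { return value; }
    }

    public static void main(String[] args) {
        TaggedCell d = new TaggedCell(TaggedCell.Tag.INT, 7);
        System.out.println(d.asInt());      // the tag is checked on every access

        Cell<Integer> s = new Cell<Integer>(7);
        System.out.println(s.get() + 1);    // no tag, no runtime check in this code
        // Cell<Integer> bad = new Cell<Integer>("seven");  // rejected at compile time
    }
}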

In what I do, it is very valuable to move information from being
explicitly represented in the computed result into the tag, so that I
often have distinct "types" (i.e. tags) for an empty list, a list with
one element, a list with two elements, and a longer list. In that
sense, I agree with Chris Smith's assertion that "static typing" is
about asserting general properties of the algorithm/data. These
assertions are important to the way I am manipulating the data. They
are part of my type model, but they may not be part of anyone else's,
and to me to pass an empty list to a function requiring a list with
two elements is a "type error", because it is something I expect the
type system to detect as incorrect. The fact that no one else may
have that expectation does not seem relevant to me.

Now, to carry this farther, since I expect the type system to validate
that certain values are of certain types and only be used in certain
contexts, I am happy when it does not require certain "tags" to be
actually present in the data. However, because other bits of code are
polymorphic, I do expect certain values to require tags. In the end,
this is still a win for me. I had certain data elements that in the
abstract had to be represented explicitly. I have encoded that
information into the type system and in some cases the type system is
not using any bits in the computed representation to hold that
information. Whenever that happens, I win and that solves one of the
problems that I need solved.

Thus, a type system is a way for me to express certain axioms about my
algorithm. A dynamic type system encodes those facts as part of the
computation. A static type system pre-calculates certain "theorems"
from my axioms and uses those theorems to allow my algorithm to be
computed without all the facts being stored as part of the computation.

Hopefully, this makes my point of view clear.

-Chris

************************************************** ***************************
Chris Clark Internet : co*****@world.std.com
Compiler Resources, Inc. Web Site : http://world.std.com/~compres
23 Bailey Rd voice : (508) 435-5016
Berlin, MA 01503 USA fax : (978) 838-0263 (24 hours)
------------------------------------------------------------------------------
Jun 20 '06 #98
Ketil Malde wrote:
"Rob Thorpe" <ro***********@antenova.com> writes:
But it only guarantees this because the variables themselves have a
type, the values themselves do not.


I think statements like this are confusing, because there are
different interpretations of what a "value" is. I would say that the
integer '4' is a value, and that it has type Integer (for instance).
This value is different from 4 the Int16, or 4 the double-precision
floating point number. From this viewpoint, all values in statically
typed languages have types, but I think you use 'value' to denote the
representation of a datum in memory, which is a different thing.


Well I haven't been consistent so far :)

But I mean the value as the semantics of the program itself sees it.
Which mostly means the datum in memory.

Jun 20 '06 #99
Matthias Blume wrote:
"Rob Thorpe" <ro***********@antenova.com> writes:
Andreas Rossberg wrote:
Rob Thorpe wrote:
>>
>>>No, that isn't what I said. What I said was:
>>>"A language is latently typed if a value has a property - called it's
>>>type - attached to it, and given it's type it can only represent values
>>>defined by a certain class."
>>
>>"it [= a value] [...] can [...] represent values"?
>
> ???

I just quoted, in condensed form, what you said above: namely, that a
value represents values - which I find a strange and circular definition.


Yes, but the point is, as the other poster mentioned: values defined by
a class.
For example, in lisp:
"xyz" is a string, #(1 2 3) is an array, '(1 2 3) is a list, 45 is a
fixed-point number.
Each item that can be stored has a type, no item can be stored without
one. The types can be tested. Most dynamic typed languages are like
that.


Your "types" are just predicates.


You can call them predicates if you like. Most people programming in
Python, Perl, or Lisp will call them types though.
Compare this to an untyped language where types cannot generally be
tested.


You mean there are no predicates in untyped languages?


Well, no there aren't. That's what defines a language as untyped.

Of course you can make them with your own code, in assembler for
example, but that's outside the language.
They all have - the whole purpose of a type system is to ensure that any
expression of type T always evaluates to a value of type T.


But it only guarantees this because the variables themselves have a
type, the values themselves do not.


Of course they do.


I think we're discussing this at cross-purposes. In a language like C
or another statically typed language there is no information passed
with values indicating their type. Have a look in a C compiler if you
don't believe me. If you store 56 in a location in memory then in most
compilers all that will be stored is 56, nothing more. The compiler
relies entirely on the types of the variables to know how to correctly
operate on the values. The values themselves have no type information
associated with them.
Most of the time the fact that the
variable they are held in has a type implies that the value does too.
But the value itself has no type, in a C program for example I can take
the value from some variable and recast it in any way I feel and the
language cannot correct any errors I make because there is no
information in the variable to indicate what type it is.


Casting in C takes values of one type to values of another type.


No it doesn't. Casting reinterprets a value of one type as a value of
another type.
There is a difference. If I cast an unsigned integer like 3000000000 to
a 32-bit signed integer in C on the machine I'm using then the result I
will get will not make any sense.
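
For what it's worth, here is a hedged Java analogue of that point (my
own sketch, primitives only - reference values are a different story,
as discussed elsewhere in this thread): the stored bits carry no type
information of their own, and the static type alone decides how they
are read.

public class BitsHaveNoType {  // illustrative sketch only
    public static void main(String[] args) {
        float f = 1.5f;

        // The same 32-bit pattern, read under two different static types:
        int bits = Float.floatToRawIntBits(f);
        System.out.println(bits);                        // 1069547520
        System.out.println(Float.intBitsToFloat(bits));  // 1.5

        // A narrowing cast likewise just truncates the stored bits;
        // nothing attached to the value corrects the mistake.
        int big = 2000000000;
        System.out.println((short) big);                 // -27648
    }
}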
So when you
look at type systems formally then you certainly have to assign types to
values, otherwise you couldn't prove any useful property about those
systems (esp. soundness).


Yes, but indirectly.


No, this is usually done very directly and very explicitly.


No it isn't.
If I say "4 is a integer" that is a direct statement about 4.
If I say "X is and integer and X is 4, therefore 4 is an integer" that
is a slightly less direct statement about 4.

The difference in directness is only small, and much more indirectness
is needed to prove anything useful about a system formally.

Jun 20 '06 #100
