Bytes | Software Development & Data Engineering Community

What is Expressiveness in a Computer Language

In March, I posted an essay, "What is Expressiveness in a Computer
Language", archived at:
http://xahlee.org/perl-python/what_i...esiveness.html

I was informed then that there is an academic paper written on this
subject.

On the Expressive Power of Programming Languages, by Matthias
Felleisen, 1990.
http://www.ccs.neu.edu/home/cobbe/pl...ive-slides.pdf

Has anyone read this paper? And, would anyone be interested in giving a
summary?

thanks.

Xah
xa*@xahlee.org
http://xahlee.org/

Jun 9 '06
669 replies · 25,340 views
David Hopwood wrote:

Oh, but it *does* make sense to talk about dynamic tagging in a statically
typed language.

That's part of what makes the term "dynamically typed" harmful: it implies
a dichotomy between "dynamically typed" and "statically typed" languages,
when in fact dynamic tagging and static typing are (mostly) independent
features.


That's really coming home to me in this thread: the terminology is *so*
bad. I have noticed this previously in the differences between structural
and nominal typing; many typing issues associated with this distinction
are falsely labeled as static-vs-dynamic issues, since so many statically
typed languages are nominally typed.

We need entirely new, finer grained terminology.
Marshall

Jun 21 '06 #151
Chris Uppal wrote:
It's worth noting, too, that (in some sense) the type of an object can change
over time[*]. That can be handled readily (if not perfectly) in the informal
internal type system(s) which programmers run in their heads (pace the very
sensible post by Anton van Straaten today in this thread -- several branches
away), but cannot be handled by a type system based on sets-of-values (and is
also a counter-example to the idea that "the" dynamic type of an object/value
can be identified with its tag).

([*] if the set of operations in which it can legitimately partake changes.
That can happen explicitly in Smalltalk (using DNU proxies for instance if the
proxied object changes, or even using #becomeA:), but can happen anyway in less
"free" languages -- the State Pattern for instance, or even (arguably) in the
difference between an empty list and a non-empty list).


Dynamic changes in object behaviour are not incompatible with type systems based
on sets of values (e.g. semantic subtyping). There are some tricky issues in
making such a system work, and I'm not aware of any implemented language that
does it currently, but in principle it's quite feasible.

For a type system that can handle dynamic proxying, see
<http://www.doc.ic.ac.uk/~scd/FOOL11/FCM.pdf>.

--
David Hopwood <da******************@blueyonder.co.uk>
Jun 21 '06 #152
Marshall wrote:
"dynamic types." I don't have a firm definition for
that term, but my working model is runtime type tags. In which
case, I would say that among statically typed languages,
Java does have dynamic types, but C does not. C++ is
somewhere in the middle.


C has union.

--
Regards, Ruud

"Ordinary is a tiger."
Jun 21 '06 #153
Andreas Rossberg wrote:
Chris Uppal wrote:
It's worth noting, too, that (in some sense) the type of an object can
change over time[*].


No. Since a type expresses invariants, this is precisely what may *not*
happen.


No. A type is a set of allowable values, allowable operations, and
constraints on the operations (which are often called "invariants" but
they are invariant only as long as the type is invariant).

I very much agree that the association of a type with a value or a name
should be constant over their lifetime - but that doesn't follow from
the definition of "type", it follows from general maintainability
considerations (quite strong ones actually).

Regards,
Jo
Jun 21 '06 #154
Nice post! One question:

Anton van Straaten wrote:

3. A really natural term to refer to types which programmers reason
about, even if they are not statically checked, is "latent types". It
captures the situation very well intuitively, and it has plenty of
precedent -- e.g. it's mentioned in the Scheme reports, R5RS and its
predecessors, going back at least a decade or so (haven't dug to check
when it first appeared).


Can you be more explicit about what "latent types" means?
I'm sorry to say it's not at all natural or intuitive to me.
Are you referring to the types in the programmers head,
or the ones at runtime, or what?
Marshall

Jun 21 '06 #155
Torben Ægidius Mogensen wrote:

That's not true. ML has variables in the mathematical sense of
variables -- symbols that can be associated with different values at
different times. What it doesn't have is mutable variables (though it
can get the effect of those by having variables be immutable
references to mutable memory locations).


While we're on the topic of terminology, here's a pet peeve of
mine: "immutable variable."

immutable = can't change
vary-able = can change

Clearly a contradiction in terms.

If you have a named value that cannot be updated, it makes
no sense to call it "variable" since it isn't *able* to *vary.*
Let's call it a named constant.
Marshall

Jun 21 '06 #156
Chris Uppal wrote:
doesn't fit with my intuitions very well -- most noticeably in that the sets
are generally unbounded
Errr, not in Ada. Indeed, not in any machine I know of with a limited
address space.

Andreas Rossberg wrote: Indeed, this view is much too narrow. In particular, it cannot explain
abstract types, which is *the* central aspect of decent type systems.


Well, it's Ada's view. I didn't say it was right for theoretical
languages or anything like that. As far as I know, LOTOS is the only
language that *actually* uses abstract data types - you have to use the
equivalent of #include to bring in the integers, for example. Everything
else uses informal rules to say how types work.

But Ada's definition gives you a very nice way of talking about things
like whether integers that overflow are the same type as integers that
don't overflow, or whether an assignment of an integer to a positive is
legal, or adding a CountOfApples to a CountOfOranges is legal, or
whether passing a "Dog" object to an "Animal" function parameter makes
sense in a particular context.

Indeed, the ability to declare a new type that has the exact same
underlying representation and isomorphically identical operations but
not be the same type is something I find myself often missing in
languages. It's nice to be able to say "this integer represents vertical
pixel count, and that represents horizontal pixel count, and you don't
get to add them together."

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.
Jun 21 '06 #157
"Marshall" <ma*************@gmail.com> writes:
Torben Ægidius Mogensen wrote:

That's not true. ML has variables in the mathematical sense of
variables -- symbols that can be associated with different values at
different times. What it doesn't have is mutable variables (though it
can get the effect of those by having variables be immutable
references to mutable memory locations).
While we're on the topic of terminology, here's a pet peeve of
mine: "immutable variable."

immutable = can't change
vary-able = can change

Clearly a contradiction in terms.


No, it is not a contradiction. See the mathematical usage of the word
"variable". Immutable variables can stand for different values at
different times, even without mutation involved, usually because they
are bound by some construct such as lambda.
If you have a named value that cannot be updated, it makes
no sense to call it "variable" since it isn't *able* to *vary.*


Mutation is not the only way for an expression to evaluate to
different values over time.
Jun 21 '06 #158
Darren New <dn**@san.rr.com> writes:
[ ... ] As far as I know, LOTOS is the only
language that *actually* uses abstract data types - you have to use
the equivalent of #include to bring in the integers, for
example. Everything else uses informal rules to say how types work.


There are *tons* of languages that "actually" facilitate abstract data
types, and some of these languages are actually used by real people.
Jun 21 '06 #159
Matthias Blume wrote:
"Rob Thorpe" <ro***********@antenova.com> writes:
>> > No it doesn't. Casting reinterprets a value of one type as a value of
>> > another type.
>> > There is a difference. If I cast an unsigned integer 2000000000 to a
>> > signed integer in C on the machine I'm using then the result I will get
>> > will not make any sense.
>>
>> Which result are you getting? What does it mean to "make sense"?
>
> Well the right one actually, bad example.
>
> But, if I cast an unsigned int 2500000000 to signed I get -1794967296.

So, why do you think this "does not make sense"?
Well, it makes sense in terms of the C spec certainly.
It does not make sense in that it does not emit an error.


Why not?


Well it's really a matter of perspective isn't it. In some cases it
may even be what the programmer intends.
From my perspective I don't expect a large positive number to turn into
a negative one just because I've requested that it be signed. The
behaviour I would expect is that either the variable being set gets the
right number, or an error occurs. I don't care if the reporting is at
runtime or compile time, so long as it can be seen.

Anyway, this is a small point. Does the fact that you only disagreed
with me on this point indicate that you agreed with everything else?

... Only joking :) I think you and I have irreconcilable views on this
subject.
I think we can agree that values mostly have some sort of type in a
statically typed language. You may say that they directly have
types, but I can't see how that can be; as far as I can see they have
types only indirectly. This is only a minor point though.

The issue of typing of values vs variables is not practically
meaningless though.

The main problem I find with statically typed programs is that much of
the world outside the program is not statically typed. Library and API
writers in particular seem to treat static typing carelessly. As a
result it's not generally possible to write a program meaningfully
inside the static type system, as the type system's designers intended.
You're forced to break it all over the place with casts etc. anyway. And
at every place you break it, the possibility of error arises and flows
from that place into the rest of the program. So, you end up actually
checking the values of variables rather than relying on their types
anyway. And this can be difficult because their values have no
associated information telling you their types.

It's different writing a function deep inside a large program; in this
case data can be cleaned up beforehand, and then the static typing
works as intended and becomes more useful.
Anyway, I don't think it would help either of us to continue this
sub-thread any more. Feel free to reply to this post, but I might not
reply back.

Static typing is rather off topic on most of the newsgroups this thread
is in anyway, and I suspect the people in them are getting rather sick
of it.

Jun 21 '06 #160
Marshall wrote:

While we're on the topic of terminology, here's a pet peeve of
mine: "immutable variable."

immutable = can't change
vary-able = can change

Clearly a contradiction in terms.

If you have a named value that cannot be updated, it makes
no sense to call it "variable" since it isn't *able* to *vary.*
Let's call it a named constant.


The name of a function argument is a variable. Its denotation changes
between calls. Still it cannot be mutated. Likewise, local "constants"
depending on an argument are not constant.

- Andreas
Jun 21 '06 #161
Joachim Durchholz wrote:
It's worth noting, too, that (in some sense) the type of an object
can change over time[*].


No. Since a type expresses invariants, this is precisely what may
*not* happen.


No. A type is a set of allowable values, allowable operations, and
constraints on the operations (which are often called "invariants" but
they are invariant only as long as the type is invariant).


The purpose of a type system is to derive properties that are known to
hold in advance. A type is the encoding of these properties. A type
varying over time is an inherent contradiction (or another abuse of the
term "type").

- Andreas
Jun 21 '06 #162
David Hopwood wrote:
Rob Thorpe wrote:
Matthias Blume wrote:
"Rob Thorpe" <ro***********@antenova.com> writes:

I think we're discussing this at cross-purposes. In a language like C
or another statically typed language there is no information passed
with values indicating their type.

You seem to be confusing "does not have a type" with "no type
information is passed at runtime".

Have a look in a C compiler if you don't believe me.

Believe me, I have.


In a C compiler the compiler has no idea what the values are in the program.
It knows only their type in that it knows the type of the variable they
are contained within.
Would you agree with that?


No. In any language, it may be possible to statically infer that the
value of an expression will belong to a set of values smaller than that
allowed by the expression's type in that language's type system. For
example, all constants have a known value, but most constants have a
type which allows more than one value.

(This is an essential point, not just nitpicking.)


Yes, I agree. That does not apply in general though.
In general the value of the variable could be, for example, read from a
file, in which case the compiler may know its type, but nothing more.

I was talking about the general case.

Jun 21 '06 #163
In comp.lang.functional Anton van Straaten <an***@appsolutions.com> wrote:
[...]

This static vs dynamic type thing reminds me of one article written by
Bjarne Stroustrup where he notes that "Object-Oriented" has become a
synonym for "good". More precisely, it seems to me that both camps
(static & dynamic) think that "typed" is a synonym for having
"well-defined semantics" or being "safe" and therefore feel the need
to be able to speak of their language as "typed" whether or not it
makes sense.
Let me add another complex subtlety, then: the above description misses
an important point, which is that *automated* type checking is not the
whole story. I.e. that compile time/runtime distinction is a kind of
red herring.
I agree. I think that instead of "statically typed" we should say
"typed" and instead of "(dynamically|latently) typed" we should say
"untyped".
In a statically-checked language, people tend to confuse automated
static checking with the existence of types, because they're thinking in
a strictly formal sense: they're restricting their world view to what
they see "within" the language.
That is not unreasonable. You see, you can't have types unless you
have a type system. Types without a type system are like answers
without questions - it just doesn't make any sense.
Then they look at programs in a dynamically-checked language, and see
checks happening at runtime, and they assume that this means that the
program is "untyped".
Not in my experience. Either a *language* specifies a type system or
not. There is little room for confusion. Well, at least unless you
equate "typing" with being "well-defined" or "safe" and go to great
lengths to convince yourself that your program has "latent types" even
without specifying a type system.
It's certainly close enough to say that the *language* is untyped.
Indeed. Either a language has a type system and is typed or has no
type system and is untyped. I see very little room for confusion
here. In my experience, the people who confuse these things are
people from the dynamic/latent camp who wish to see types everywhere
because they confuse typing with safety or having well-defined
semantics.
But a program as seen by the programmer has types: the programmer
performs (static) type inference when reasoning about the program, and
debugs those inferences when debugging the program, finally ending up
with a program which has a perfectly good type scheme. It may be
messy compared to, say, an HM type scheme, and it's usually not proved to
be perfect, but that again is an orthogonal issue.
There is a huge hole in your argument above. Types really do not make
sense without a type system. To claim that a program has a type
scheme, you must first specify the type system. Otherwise it just
doesn't make any sense.
Mathematicians operated for thousands of years without automated
checking of proofs, so you can't argue that because a
dynamically-checked program hasn't had its type scheme proved correct,
that it somehow doesn't have types. That would be a bit like arguing
that we didn't have Math until automated theorem provers came along.
No - not at all. First of all, mathematics has matured quite a bit
since the early days. I'm sure you've heard of the axiomatic method.
However, what you are missing is that to prove that your program has
types, you first need to specify a type system. Similarly, to prove
something in math you start by specifying [fill in the rest].
1. "Untyped" is really quite a misleading term, unless you're talking
about something like the untyped lambda calculus. That, I will agree,
can reasonably be called untyped.
Untyped is not misleading. "Typed" is not a synonym for "safe" or
"having well-defined semantics".
So, will y'all just switch from using "dynamically typed" to "latently
typed"


I won't (use "latently typed"). At least not without further
qualification.

-Vesa Karvonen
Jun 21 '06 #164
George Neuner sez:
On Mon, 19 Jun 2006 22:02:55 +0000 (UTC), Dimitri Maziuk
<di**@127.0.0.1> wrote:
Yet Another Dan sez:

... Requiring an array index to be an integer is considered a typing
problem because it can be checked based on only the variable itself,
whereas checking whether it's in bounds requires knowledge about the array.


You mean like
subtype MyArrayIndexType is Integer range 7 .. 11;
type MyArrayType is array (MyArrayIndexType) of MyElementType;


If the index computation involves wider types it can still produce
illegal index values. The runtime computation of an illegal index
value is not prevented by narrowing subtypes and cannot be statically
checked.


My vague recollection is that no, it won't unless _you_ explicitly code an
unchecked type conversion. But it's been a while.

HTH
Dima
--
I have not been able to think of any way of describing Perl to [person]
"Hello, blind man? This is color." -- DPM
Jun 21 '06 #165
Rob Thorpe wrote:
Chris Smith wrote:
Torben Ægidius Mogensen <to*****@app-3.diku.dk> wrote:
That's not really the difference between static and dynamic typing.
Static typing means that there exist a typing at compile-time that
guarantess against run-time type violations. Dynamic typing means
that such violations are detected at run-time. This is orthogonal to
strong versus weak typing, which is about whether such violations are
detected at all. The archetypal weakly typed language is machine code
-- you can happily load a floating point value from memory, add it to
a string pointer and jump to the resulting value. ML and Scheme are
both strongly typed, but one is statically typed and the other
dynamically typed.


Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
interject my plea not to abuse the word "type" with a phrase like
"dynamically typed". If anyone considers "untyped" to be pejorative,
as some people apparently do, then I'll note that another common term is
"type-free," which is marketing-approved but doesn't carry the
misleading connotations of "dynamically typed." We are quickly losing
any rational meaning whatsoever to the word "type," and that's quite a
shame.


I don't think dynamic typing is that nebulous. I remember this being
discussed elsewhere some time ago, so I'll post the same reply I did
then:
A language is statically typed if a variable has a property - called
its type - attached to it, and given its type it can only represent
values defined by a certain class.

A language is latently typed if a value has a property - called its
type - attached to it, and given its type it can only represent values
defined by a certain class.

Some people use dynamic typing as a word for latent typing, others use
it to mean something slightly different. But for most purposes the
definition above works for dynamic typing also.

Untyped and type-free mean something else: they mean no type checking
is done.


Since people have found some holes in this definition, I'll have
another go:

Firstly, a definition: general expressions (gexprs) are variables
(mutable or immutable), expressions, and the entities functions return.

A statically typed language has a parameter associated with each gexpr
called its type. The code may test the type of a gexpr. The language
will check if the gexprs of an operator/function have types that match
what is required, to some criterion of sufficiency. It will emit an
error/warning when they don't. It will do so universally.

A latently typed language has a parameter associated with each value
called its type. The code may test the type of a value. The language
may check if the gexprs of an operator/function have types that match
what is required, to some criterion of sufficiency. It will not
necessarily do so universally.

An untyped language is one that does not possess either a static or
latent type system. In an untyped language gexprs possess no type
information, and neither do values.

--

These definitions still have problems; they don't say anything about
languages that sit between the various categories, for example. I don't
know where the HM type system would fit in them.

Jun 21 '06 #166
Darren New wrote:

As far as I know, LOTOS is the only
language that *actually* uses abstract data types
Maybe I don't understand what you mean with ADT here, but all languages
with a decent module system support ADTs in the sense it is usually
understood, see ML for a primary example. Classes in most OOPLs are
essentially beefed-up ADTs as well.
Indeed, the ability to declare a new type that has the exact same
underlying representation and isomorphically identical operations but
not be the same type is something I find myself often missing in
languages. It's nice to be able to say "this integer represents vertical
pixel count, and that represents horizontal pixel count, and you don't
get to add them together."


Not counting C/C++, I don't know when I last worked with a typed
language that does *not* have this ability... (which is slightly
different from ADTs, btw)

- Andreas
Jun 21 '06 #167
In comp.lang.functional Andreas Rossberg <ro******@ps.uni-sb.de> wrote:
Darren New wrote: [...]
Indeed, the ability to declare a new type that has the exact same
underlying representation and isomorphically identical operations but
not be the same type is something I find myself often missing in
languages. It's nice to be able to say "this integer represents vertical
pixel count, and that represents horizontal pixel count, and you don't
get to add them together."

Not counting C/C++, I don't know when I last worked with a typed
language that does *not* have this ability... (which is slightly
different from ADTs, btw)


Would Java count?

-Vesa Karvonen
Jun 21 '06 #168
Vesa Karvonen wrote:
In comp.lang.functional Anton van Straaten <an***@appsolutions.com> wrote:
Let me add another complex subtlety, then: the above description misses
an important point, which is that *automated* type checking is not the
whole story. I.e. that compile time/runtime distinction is a kind of
red herring.


I agree. I think that instead of "statically typed" we should say
"typed" and instead of "(dynamically|latently) typed" we should say
"untyped".
In a statically-checked language, people tend to confuse automated
static checking with the existence of types, because they're thinking in
a strictly formal sense: they're restricting their world view to what
they see "within" the language.


That is not unreasonable. You see, you can't have types unless you
have a type system. Types without a type system are like answers
without questions - it just doesn't make any sense.
Then they look at programs in a dynamically-checked language, and see
checks happening at runtime, and they assume that this means that the
program is "untyped".


Not in my experience. Either a *language* specifies a type system or
not. There is little room for confusion. Well, at least unless you
equate "typing" with being "well-defined" or "safe" and go to great
lengths to convince yourself that your program has "latent types" even
without specifying a type system.


The question is: What do you mean by "type system"?
Scheme and Lisp both define how types work in their specifications
clearly, others may do too, I don't know.
Of course you may not consider that a type system if you take "type
system" to mean a static type system.
It's certainly close enough to say that the *language* is untyped.


Indeed. Either a language has a type system and is typed or has no
type system and is untyped. I see very little room for confusion
here. In my experience, the people who confuse these things are
people from the dynamic/latent camp who wish to see types everywhere
because they confuse typing with safety or having well-defined
semantics.


No. It's because the things that we call latent types we use for the
same purpose that programmers of statically typed languages use static
types for.

Statically typed programmers ensure that the value of some expression
is of some type by having the compiler check it. Programmers of
latently typed languages check, if they think it's important, by asking
what the type of the result is.

The objection here is that advocates of statically typed languages seem
to be claiming the word "type" as their own, and asking that others use
their definitions of typing, which are really specific to their
subjects of interest. This doesn't help advocates of statically typed
languages, advocates of latently typed languages, or anyone else. It
doesn't help because
no-one else is likely to change their use of terms, there's no reason
why they would. All that may happen is that users of statically typed
languages change the words they use. This would confuse me, for one. I
would much rather understand what ML programmers, for example, are
saying and that's hard enough as it is.

There's also my other objection, if you consider latently typed
languages untyped, then what is assembly?

Jun 21 '06 #169
Vesa Karvonen wrote:
Indeed, the ability to declare a new type that has the exact same
underlying representation and isomorphically identical operations but
not be the same type is something I find myself often missing in
languages. It's nice to be able to say "this integer represents vertical
pixel count, and that represents horizontal pixel count, and you don't
get to add them together."

Not counting C/C++, I don't know when I last worked with a typed
language that does *not* have this ability... (which is slightly
different from ADTs, btw)


Would Java count?


Yes, you are right. And there certainly are more in the OO camp.

But honestly, I do not remember when I last had to actively work with
one of them, including Java... :-)

- Andreas
Jun 21 '06 #170
Matthias Blume wrote:
There are *tons* of languages that "actually" facilitate abstract data
types, and some of these languages are actually used by real people.


I don't know of any others in actual use. Could you name a couple?

Note that I don't consider things like usual OO languages (Eiffel,
Smalltalk, etc) to have "abstract data types".

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.
Jun 21 '06 #171
Andreas Rossberg wrote:
Maybe I don't understand what you mean with ADT here, but all languages
with a decent module system support ADTs in the sense it is usually
understood, see ML for a primary example.
OK. Maybe some things like ML and Haskell and such that I'm not
intimately familiar with do, now that you mention it, yes.
Classes in most OOPLs are essentially beefed-up ADTs as well.
Err, no. There's nothing really abstract about them. And their values
are mutable. So while one can think about them as an ADT, one actually
has to write code to (for example) calculate or keep track of how many
entries are on a stack, if you want that information.
Not counting C/C++, I don't know when I last worked with a typed
language that does *not* have this ability... (which is slightly
different from ADTs, btw)


Java? C#? Icon? Perl? (Hmmm... Pascal does, IIRC.) I guess you just work
with better languages than I do. :-)

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.
Jun 21 '06 #172
Marshall wrote:
Torben Ægidius Mogensen wrote:
That's not true. ML has variables in the mathematical sense of
variables -- symbols that can be associated with different values at
different times. What it doesn't have is mutable variables (though it
can get the effect of those by having variables be immutable
references to mutable memory locations).


While we're on the topic of terminology, here's a pet peeve of
mine: "immutable variable."

immutable = can't change
vary-able = can change

Clearly a contradiction in terms.


But one that is at least two hundred years old[*], and so unlikely to be
fixed now.

In any case, the intent of this usage (in both mathematics and programming)
is that different *instances* of a variable can be associated with different
values.
[*] introduced by Leibniz, according to <http://members.aol.com/jeff570/v.html>,
but that was presumably in Latin. The first use of "variable" as a noun
recorded by the OED in written English is in 1816.

--
David Hopwood <da******************@blueyonder.co.uk>
Jun 21 '06 #173
Dr.Ruud wrote:
Marshall wrote:
"dynamic types." I don't have a firm definition for
that term, but my working model is runtime type tags. In which
case, I would say that among statically typed languages,
Java does have dynamic types, but C does not. C++ is
somewhere in the middle.


C has union.


That's not the same thing. The value of a union in C can be any of a
set of specified types. But the program cannot find out which, and the
language doesn't know either.

With C++ and Java dynamic types the program can test to find the type.

Jun 21 '06 #174
Darren New wrote:
Maybe I don't understand what you mean with ADT here, but all
languages with a decent module system support ADTs in the sense it is
usually understood, see ML for a primary example.


OK. Maybe some things like ML and Haskell and such that I'm not
intimately familiar with do, now that you mention it, yes.


Well, Modula and CLU already had this, albeit in restricted form.
Classes in most OOPLs are essentially beefed-up ADTs as well.


Err, no. There's nothing really abstract about them. And their values
are mutable. So while one can think about them as an ADT, one actually
has to write code to (for example) calculate or keep track of how many
entries are on a stack, if you want that information.


Now you lost me completely. What has mutability to do with it? And the
stack?

AFAICT, ADT describes a type whose values can only be accessed by a
certain fixed set of operations. Classes qualify for that, as long as
they provide proper encapsulation.
Not counting C/C++, I don't know when I last worked with a typed
language that does *not* have this ability... (which is slightly
different from ADTs, btw)


Java? C#? Icon? Perl? (Hmmm... Pascal does, IIRC.) I guess you just work
with better languages than I do. :-)


OK, I admit that I exaggerated slightly. Although currently I'm indeed
able to mostly work with the more pleasant among languages. :-)

(Btw, Pascal did not have it either, AFAIK)

- Andreas
Jun 21 '06 #175
Andreas Rossberg wrote:
AFAICT, ADT describes a type whose values can only be accessed by a
certain fixed set of operations.
No. AFAIU, an ADT defines the type based on the operations. The stack
holding the integers 1 and 2 is the value (push(2, push(1, empty()))).
There's no "internal" representation. The values and operations are
defined by preconditions and postconditions.

Both a stack and a queue could be written in most languages as "values
that can only be accessed by a fixed set of operations" having the same
possible internal representations and the same function signatures.
They're far from the same type, because they're not abstract. The part
you can't see from outside the implementation is exactly the part that
defines how it works.

Granted, it's a common mistake to write that some piece of C++ code
implements a stack ADT.

For example, an actual ADT for a stack (and a set) is shown on
http://www.cs.unc.edu/~stotts/danish...man/umadt.html
Note that the "axioms" for the stack type completely define its operation
and semantics. There is no other "implementation" involved. And in LOTOS
(which uses ACT.1 or ACT.ONE (I forget which) as its type language), this is
actually how you'd write the code for a stack, and then the compiler
would crunch a while to figure out a corresponding implementation.
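
For reference, stack axioms of the kind being described (a standard equational presentation, not copied from the linked page) look like this:

```latex
\begin{align*}
\mathrm{pop}(\mathrm{push}(x, s))     &= s \\
\mathrm{top}(\mathrm{push}(x, s))     &= x \\
\mathrm{isEmpty}(\mathrm{empty}())    &= \mathrm{true} \\
\mathrm{isEmpty}(\mathrm{push}(x, s)) &= \mathrm{false}
\end{align*}
```

Any implementation satisfying these equations counts as a stack; the equations, not a hidden representation, define the type.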

I suspect "ADT" is by now so corrupted that it's useful to use a
different term, such as "algebraic type" or some such.
Classes qualify for that, as long as they provide proper encapsulation.
Nope.
OK, I admit that I exaggerated slightly. Although currently I'm indeed
able to mostly work with the more pleasant among languages. :-)
Yah. :-)
(Btw, Pascal did not have it either, AFAIK)


I'm pretty sure in Pascal you could say

Type Apple = Integer; Orange = Integer;
and then vars of type apple and orange were not interchangeable.
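
For comparison, C's typedef gives only an alias, not a new type; the hypothetical Apple and Orange below are freely interchangeable, which is exactly the weaker behaviour being contrasted:

```c
#include <assert.h>

typedef int Apple;
typedef int Orange;

/* typedef introduces an alias, not a distinct nominal type: Apple and
   Orange are the same type to the compiler, so mixing them is legal
   and draws no diagnostic. */
Apple add_fruit(Apple a, Orange o) {
    return a + o;   /* compiles without complaint */
}
```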

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.
Jun 21 '06 #176
Rob Thorpe schreef:
Dr.Ruud:
Marshall:
"dynamic types." I don't have a firm definition for
that term, but my working model is runtime type tags. In which
case, I would say that among statically typed languages,
Java does have dynamic types, but C does not. C++ is
somewhere in the middle.
C has union.


That's not the same thing.


That is your opinion. In the context of this discussion I see no
problem with putting C's union under "dynamic types".

The value of a union in C can be any of a
set of specified types. But the program cannot find out which, and
the language doesn't know either.

With C++ and Java dynamic types the program can test to find the type.


When such a test is needed, the program using the union carries it itself.

--
Affijn, Ruud

"Gewoon is een tijger."
Jun 21 '06 #177
Joachim Durchholz <jo@durchholz.org> wrote:
Assume a language that
a) defines that a program is "type-correct" iff HM inference establishes
that there are no type errors
b) compiles a type-incorrect program anyway, with an established
rigorous semantics for such programs (e.g. by throwing exceptions as
appropriate).
So the compiler now attempts to prove theorems about the program, but
once it has done so it uses the results merely to optimize its runtime
behavior and then throws the results away. I'd call that not a
statically typed language, then. The type-checking behavior is actually
rather irrelevant both to the set of valid programs of the language, and
to the language semantics (since the same could be accomplished without
the type checking). It is only relevant to performance. Obviously, the
language probably qualifies as dynamically typed for most common
definitions of that term, but I'm not ready to accept one definition and
claim to understand it, yet, so I'll be cautious about classifying the
language.
The compiler might actually refuse to compile type-incorrect programs,
depending on compiler flags and/or declarations in the code.
Then those compiler flags would cause the compiler to accept a different
language, and that different language would be a statically typed
language (by which I don't mean to exclude the possibility of its also
being dynamically typed).
Typed ("strongly typed") it is, but is it statically typed or
dynamically typed?


So my answer is that it's not statically typed in the first case, and is
statically typed in the second case, and it intuitively appears to be
dynamically typed at least in the first, and possibly in the second as
well.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 21 '06 #178
Marshall <ma*************@gmail.com> wrote:
I think what this highlights is the fact that our existing terminology
is not up to the task of representing all the possible design
choices we could make. Some parts of dynamic vs. static
are mutually exclusive; some parts are orthogonal.


Really? I can see that in a strong enough static type system, many
dynamic typing features would become unobservable and therefore would be
pragmatically excluded from any probable implementations... but I don't
see any other kind of mutual exclusion between the two.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 21 '06 #179

Marshall wrote:

That's really coming home to me in this thread: the terminology is *so*
bad. I have noticed this previously in the differences between structural
and nominal typing; many typing issues associated with this distinction
are falsely labeled as static-vs-dynamic issues, since so many statically
typed languages are nominally typed.

We need entirely new, finer grained terminology.


Agreed. That's why I've been biting my tongue and avoiding posting.
The discussion is going along the lines of the blind men and the
elephant. I've seen about seven different definitions of what a `type'
is, and most of the arguments seem to come from people misunderstanding
the other person's definition. I think that *most* of the people
arguing here would agree with each other (possibly in detail) if only
they understood each other.

Static type aficionados have a specialized jargon that has precise
meaning for a number of the terms being used. People that aren't into
that field of computer science use the same terms in a much looser
sense. But static type aficionados are definitely in the minority in
comp.lang.lisp, and probably in a few of the other comp.langs as well.

What we need is an FAQ entry for how to talk about types with people
who are technically adept, but non-specialists. Or alternatively, an
FAQ of how to explain the term `dynamic typing' to a type theorist.

Jun 21 '06 #180
Andreas Rossberg wrote:
Chris Uppal wrote:

I have never been very happy with relating type to sets of values (objects,
whatever).
Indeed, this view is much too narrow. In particular, it cannot explain
abstract types, which is *the* central aspect of decent type systems.


What prohibits us from describing an abstract type as a set of values?

There were papers observing this as early as 1970.
References?

(There are also theoretic problems with the types-as-sets view, because
sufficiently rich type systems can no longer be given direct models in
standard set theory. For example, first-class polymorphism would run
afoul of the axiom of foundation.)


There is no reason why we must limit ourselves to "standard set theory"
any more than we have to limit ourselves to standard type theory.
Both are progressing, and set theory seems to me to be a good
choice for a foundation. What else would you use?

(Agree with the rest.)
Marshall

Jun 21 '06 #181
Chris Uppal wrote:
David Hopwood wrote:
When people talk about "types" being associated with values in a "latently typed"
or "dynamically typed" language, they really mean *tag*, not type.
I don't think that's true. Maybe /some/ people do confuse the two, but I am
certainly a counter-example ;-)

The tag (if any) is part of the runtime machinery (or, if not, then I don't
understand what you mean by the word), and while that is certainly a reasonable
approximation to the type of the object/value, it is only an approximation,
and -- what's more -- is only an approximation to the type as yielded by one
specific (albeit abstract, maybe even hypothetical) type system.


Yes. I should perhaps have mentioned that people sometimes mean "protocol"
rather than "tag" or "type" (a protocol being the set of messages that an object
can respond to, roughly speaking).
If I send #someMessage to a proxy object which has not had its referent set
(and assuming the default value, presumably some variant of nil, does not
understand #someMessage), then that's just as much a type error as sending
#someMessage to a variable holding a nil value.
It's an error, certainly. People usually call it a type error. But does that
terminology actually make sense?

Typical programming languages have many kinds of semantic error that can occur
at run-time: null references, array index out of bounds, assertion failures,
failed casts, "message not understood", ArrayStoreExceptions in Java,
arithmetic overflow, divide by zero, etc.

Conventionally, some of these errors are called "type errors" and some are
not. But there seems to be little rhyme or reason to this categorization, as
far as I can see. If in a particular language, both array index bounds errors
and "message not understood" can occur at run-time, then there's no objective
reason to call one a type error and the other not. Both *could* potentially
be caught by a type-based analysis in some cases, and both *are not* caught
by such an analysis in that language.

A more consistent terminology would reserve "type error" for errors that
occur when a typechecking/inference algorithm fails, or when an explicit
type coercion or typecheck fails.

According to this view, the only instances where a run-time error should be
called a "type error" are:

- a failed cast, or no match for any branch of a 'typecase' construct.
Here the construct that fails is a coercion of a value to a specific type,
or a check that it conforms to that type, and so the term "type error"
makes sense.

- cases where a typechecking/inference algorithm fails at run-time (e.g.
in a language with staged compilation, or dynamic loading with link-time
typechecking).

In other cases, just say "run-time error".
If I then assign the referent
of the proxy to some object which does understand #someMessage, then it is not
a type error to send #someMessage to the proxy. So the type has changed, but
nothing in the tag system of the language implementation has changed.


In the terminology I'm suggesting, the object has no type in this language
(assuming we're talking about a Smalltalk-like language without any type system
extensions). So there is no type error, and no inconsistency.

Objects in this language do have protocols, so this situation can be described
as a change to the object's protocol, which changes whether a given message
causes a protocol error.

--
David Hopwood <da******************@blueyonder.co.uk>
Jun 21 '06 #182
Dr.Ruud wrote:
Marshall schreef:
"dynamic types." I don't have a firm definition for
that term, but my working model is runtime type tags. In which
case, I would say that among statically typed languages,
Java does have dynamic types, but C does not. C++ is
somewhere in the middle.


C has union.


But it does not have tagged unions, so my point is unaffected.
Marshall

Jun 21 '06 #183
Matthias Blume wrote:
"Marshall" <ma*************@gmail.com> writes:
Torben Ægidius Mogensen wrote:

That's not true. ML has variables in the mathematical sense of
variables -- symbols that can be associated with different values at
different times. What it doesn't have is mutable variables (though it
can get the effect of those by having variables be immutable
references to mutable memory locations).
While we're on the topic of terminology, here's a pet peeve of
mine: "immutable variable."

immutable = can't change
vary-able = can change

Clearly a contradiction in terms.


No, it is not a contradiction. See the mathematical usage of the word
"variable".


I am not saying that this kind of terminology isn't common; what
I'm saying is that it isn't good. And I think I gave a pretty clear
and solid definition of the two words, and those definitions are
decidedly contradictory.

Immutable variables can stand for different values at
different times, even without mutation involved, usually because they
are bound by some construct such as lambda.


Well, that's a good point actually. Parameters are variable
no matter how you look at it. I was speaking more in terms
of locals.

If you have a named value that cannot be updated, it makes
no sense to call it "variable" since it isn't *able* to *vary.*


Mutation is not the only way for an expression to evaluate to
different values over time.


I suppose the difficulty arises from the difference
between the connotation of "mutate" in a PLT context, which is
support for destructive assignment, and the meaning of the word
in the larger context.

Anyway, I don't see how your sentence above contradicts
my sentence. We do not use the term "update" to describe
the process of binding values to parameters upon function
invocation. (Am I wrong?)
Marshall

Jun 21 '06 #184
Chris Smith wrote:
Marshall <ma*************@gmail.com> wrote:
I think what this highlights is the fact that our existing terminology
is not up to the task of representing all the possible design
choices we could make. Some parts of dynamic vs. static
are mutually exclusive; some parts are orthogonal.


Really? I can see that in a strong enough static type system, many
dynamic typing features would become unobservable and therefore would be
pragmatically excluded from any probable implementations... but I don't
see any other kind of mutual exclusion between the two.


Well, it strikes me that some of what the dynamic camp likes
is the actual *absence* of declared types, or the necessity
of having them. At the very least, requiring types vs. not requiring
types is mutually exclusive.

But again, my dynamic kung fu is very weak, so I may not know
what I'm talking about when I represent the dynamic side.
Marshall

Jun 21 '06 #185
On Wed, 21 Jun 2006 16:12:48 +0000 (UTC), Dimitri Maziuk
<di**@127.0.0.1> wrote:
George Neuner sez:
On Mon, 19 Jun 2006 22:02:55 +0000 (UTC), Dimitri Maziuk
<di**@127.0.0.1> wrote:
Yet Another Dan sez:

... Requiring an array index to be an integer is considered a typing
problem because it can be checked based on only the variable itself,
whereas checking whether it's in bounds requires knowledge about the array.

You mean like
subtype MyArrayIndexType is INTEGER range 7 .. 11;
type MyArrayType is array (MyArrayIndexType) of MyElementType;


If the index computation involves wider types it can still produce
illegal index values. The runtime computation of an illegal index
value is not prevented by narrowing subtypes and cannot be statically
checked.


My vague recollection is that no, it won't unless _you_ explicitly code an
unchecked type conversion. But it's been a while.

You can't totally prevent it ... if the index computation involves
types having a wider range, frequently the solution is to compute a
wide index value and then narrow it. But if the wider value is out of
range for the narrow type you have a problem.

Using the illegal wide value in a checked narrowing conversion will
throw a CONSTRAINT_ERROR exception - it doesn't matter whether you
access the array directly using the wide value or try to assign the
value to a variable of the narrow index type. Using the wide value
unchecked will access memory beyond the array which is not what you
wanted and may cause a crash.

The point is really that the checks that prevent these things must be
performed at runtime and can't be prevented by any practical type
analysis performed at compile time. I'm not a type theorist but my
opinion is that a static type system that could, a priori, prevent the
problem is impossible.
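
A minimal C sketch of the run-time check being described (a hand-written approximation of Ada's checked narrowing conversion, which raises CONSTRAINT_ERROR on failure; names invented for illustration):

```c
#include <assert.h>

enum { LOW = 7, HIGH = 11 };   /* the narrow index range */

/* Narrow a wide computed index value to the legal range, checking at
   run time. Returns 1 and stores the narrowed index on success,
   returns 0 on a range error (the case Ada signals with an exception). */
int narrow_index(long wide, int *out) {
    if (wide < LOW || wide > HIGH)
        return 0;              /* the check that must happen at run time */
    *out = (int)wide;
    return 1;
}
```

The point matches the post: whatever the static types say, the comparison against LOW and HIGH is executed at run time, because the wide value is only known then.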

George
--
for email reply remove "/" from address
Jun 21 '06 #186
Rob Thorpe wrote:
Vesa Karvonen wrote:
In comp.lang.functional Anton van Straaten <an***@appsolutions.com> wrote:
Let me add another complex subtlety, then: the above description misses
an important point, which is that *automated* type checking is not the
whole story. I.e. that compile time/runtime distinction is a kind of
red herring.


I agree. I think that instead of "statically typed" we should say
"typed" and instead of "(dynamically|latently) typed" we should say
"untyped". [...]
It's certainly close enough to say that the *language* is untyped.


Indeed. Either a language has a type system and is typed or has no
type system and is untyped. I see very little room for confusion
here. In my experience, the people who confuse these things are
people from the dynamic/latent camp who wish to see types everywhere
because they confuse typing with safety or having well-defined
semantics.


No. It's because the things that we call latent types we use for the
same purpose that programmers of statically typed languages use static
types for.

Statically typed programmers ensure that the value of some expression
is of some type by having the compiler check it. Programmers of
latently typed languages check, if they think it's important, by asking
what the type of the result is.

The objection here is that advocates of statically typed language seem
to be claiming the "type" as their own word, and asking that others use
their definitions of typing, which are really specific to their
subjects of interest.


As far as I can tell, the people who advocate using "typed" and "untyped"
in this way are people who just want to be able to discuss all languages in
a unified terminological framework, and many of them are specifically not
advocates of statically typed languages.

--
David Hopwood <da******************@blueyonder.co.uk>
Jun 21 '06 #187
George Neuner wrote:
You can't totally prevent it ... if the index computation involves
types having a wider range, frequently the solution is to compute a
wide index value and then narrow it. But if the wider value is out of
range for the narrow type you have a problem.
....snip...
The point is really that the checks that prevent these things must be
performed at runtime and can't be prevented by any practical type
analysis performed at compile time. I'm not a type theorist but my
opinion is that a static type system that could, a priori, prevent the
problem is impossible.


I haven't been following this thread too closely, but I thought the
following article might be of interest...

Eliminating Array Bound Checking through Non-dependent types.
http://okmij.org/ftp/Haskell/types.html#branding

"There is a view that in order to gain static assurances such as an
array index being always in range or tail being applied to a non-empty
list, we must give up on something significant: on data structures such
as arrays (to be replaced with nested tuples), on general recursion, on
annotation-free programming, on clarity of code, on well-supported
programming languages. That does not have to be the case. The present
messages show non-trivial examples involving native Haskell arrays,
index computations, and general recursion. All arrays indexing
operations are statically guaranteed to be safe -- and so we can safely
use an efficient unsafeAt provided by GHC seemingly for that purpose.
The code is efficient; the static assurances cost us no run-time
overhead. The example uses only Haskell98 + higher-ranked types. No new
type classes are introduced. The safety is based on: Haskell type
system, quantified type variables, and a compact general-purpose
trusted kernel."

Jun 21 '06 #188
Chris F Clark wrote:
In that sense, a static type system is eliminating tags, because the
information is pre-computed and not explicitly stored as a part of the
computation. Now, you may not view the tag as being there, but in my
mind if there exists a way of performing the computation that requires
tags, the tag was there and that tag has been eliminated.
Joachim Durchholz replied:
On a semantic level, the tag is always there - it's the type (and
definitely part of an axiomatic definition of the language).
Tag elimination is "just" an optimization.


I agree the tag is always there in the abstract.

However, for the work I do the optimization of the tag at runtime is
important, and we specifically change things into types when we know
the system can do that optimization, because then we are getting the
system to do the work we would have to do and validating that the job
is done correctly. So, I care that the tag is eliminated in practice
(and remains in theory--I have to have both).

In the end, I'm trying to fit things which are "too big" and "too
slow" into much less space and time, using types to help me reliably
make my program smaller and faster is just one trick. It's a really
great and non-obvious one though, and one I'm glad I learned. Any
algebra I can learn that helps me solve my problems better is
appreciated.

However, I also know that my way of thinking about it is fringe. Most
people don't think that the purpose of types is to help one write
reliably tighter code.

Still, knowing about dynamic typing (tagging) and static typing helped
me understand this trick. Thus, conflating the two meanings may at
some level be confusing. However, for me, they aided understanding
something that I needed to learn.

-Chris
Jun 21 '06 #189
Marshall wrote:
Chris Smith wrote:
Marshall <ma*************@gmail.com> wrote:
I think what this highlights is the fact that our existing terminology
is not up to the task of representing all the possible design
choices we could make. Some parts of dynamic vs. static
are mutually exclusive; some parts are orthogonal.
Really? I can see that in a strong enough static type system, many
dynamic typing features would become unobservable and therefore would be
pragmatically excluded from any probable implementations... but I don't
see any other kind of mutual exclusion between the two.


Well, it strikes me that some of what the dynamic camp likes
is the actual *absence* of declared types, or the necessity
of having them.


So why aren't they happy with something like, say, Alice ML, which is
statically typed, but has a "dynamic" type and type inference? I mean
this as a serious question.
At the very least, requiring types vs. not requiring
types is mutually exclusive.


Right, but it's pretty well established that languages that don't
require type *declarations* can still be statically typed.

--
David Hopwood <da******************@blueyonder.co.uk>
Jun 21 '06 #190
On 21 Jun 2006 15:04:23 -0700, "Greg Buchholz"
<sl**************@yahoo.com> wrote:
I haven't been following this thread too closely, but I thought the
following article might be of interest...

Eliminating Array Bound Checking through Non-dependent types.
http://okmij.org/ftp/Haskell/types.html#branding

That was interesting, but the authors' method still involves runtime
checking of the array bounds. IMO, all they really succeeded in doing
was turning the original recursion into CPS and making the code a
little bit clearer.

George
--
for email reply remove "/" from address
Jun 21 '06 #191
Pascal Costanza wrote:
There is, of course, room for research on performing static type checks
in a running system, for example immediately after or before a software
update is applied, or maybe even on separate type checking on software
increments such that guarantees for their composition can be derived.
However, I am not aware of a lot of work in that area, maybe because the
static typing community is too focused on compile-time issues.


Not everyone is. For instance, Don Stewart has been enormously successful in
deploying such a system for Haskell (very much a statically typed language)
in a practically usable way. It is called hs-plugins (see
http://www.cse.unsw.edu.au/~dons/hs-plugins/), a framework for run-time
loading and re-loading of Haskell modules (in source form or already
compiled, giving different levels of security). Far from being a purely
academic exercise, there are interesting applications, including yi, an
extensible editor, and lambdabot, an IRC bot, both available from the above
same home page.

Cheers,
Ben
Jun 22 '06 #192
Marshall <ma*************@gmail.com> wrote:
Well, it strikes me that some of what the dynamic camp likes
is the actual *absence* of declared types, or the necessity
of having them. At the very least, requiring types vs. not requiring
types is mutually exclusive.


So you're saying, then, that while static typing and dynamic typing are
not themselves mutually exclusive, there are people whose concerns run
as much in the "not statically typed" direction as in the "dynamically
typed" direction? I agree that this is undoubtedly true. That (not
statically typed) seems to be what gets all the attention, as a matter
of fact. Most programmers in modern languages assume, though, that
there will be some kind of safeguard against writing bad code with
unpredictable consequences, so in practice "not statically typed"
correlates strongly with "dynamically typed".

Nevertheless, the existence of languages that are clearly "both"
suggests that they should be considered separately to at least some
extent.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 22 '06 #193
David Hopwood <da******************@blueyonder.co.uk> wrote:
Typical programming languages have many kinds of semantic error that can occur
at run-time: null references, array index out of bounds, assertion failures,
failed casts, "message not understood", ArrayStoreExceptions in Java,
arithmetic overflow, divide by zero, etc.

Conventionally, some of these errors are called "type errors" and some are
not. But there seems to be little rhyme or reason to this categorization, as
far as I can see. If in a particular language, both array index bounds errors
and "message not understood" can occur at run-time, then there's no objective
reason to call one a type error and the other not. Both *could* potentially
be caught by a type-based analysis in some cases, and both *are not* caught
by such an analysis in that language.
Incidentally, yes! Filtering out the terminology stuff [as hard as this
may be to believe on USENET where half the world seems to be trolls, I
really was not so aware when I originally posted of how some communities
use terminology and where the motivations come from], this was my
original point. In the static sense, there is no such thing as a type
error; only an error that's caught by a type system. I don't know if
the same can be said of dynamic types. Some people here seem to be
saying that there is a universal concept of "type error" in dynamic
typing, but I've still yet to see a good precise definition (nor a good
precise definition of dynamic typing at all).

In either case it doesn't make sense, then, to compare how static type
systems handle type errors versus how dynamic systems handle type
errors. That's akin to comparing how many courses are
offered at a five-star restaurant versus how many courses are offered by
the local university. (Yes, that's an exaggeration, of course. The
word "type" in the static/dynamic typing world at least has a closer to
common root.)
A more consistent terminology would reserve "type error" for errors that
occur when a typechecking/inference algorithm fails, or when an explicit
type coercion or typecheck fails.
I am concerned as to whether that actually would turn out to have any
meaning.

If we are considering array length bounds checking by type systems (and
just to confuse ourselves, by both static and dynamic type systems),
then is the error condition that gets raised by the dynamic system a
type error? Certainly, if the system keeps track of the fact that this
is an array of length 5, and uses that information to complain when
someone tries to treat the array as a different type (such as an array
of length >= 7, for example), certainly that's a type error, right?
Does the reference to the seventh index constitute an "explicit" type
coercion? I don't know. It seems pretty explicit to me, but I suspect
some may not agree.

The same thing would certainly be a type error in a static system, if
indeed the static system solved the array bounds problem at all.

While this effort to salvage the term "type error" in dynamic languages
is interesting, I fear it will fail. Either we'll all have to admit
that "type" in the dynamic sense is a psychological concept with no
precise technical definition (as was at least hinted by Anton's post
earlier, whether intentionally or not) or someone is going to have to
propose a technical meaning that makes sense, independently of what is
meant by "type" in a static system.
In the terminology I'm suggesting, the object has no type in this language
(assuming we're talking about a Smalltalk-like language without any type system
extensions).


I suspect you'll see the Smalltalk version of the objections raised in
response to my post earlier. In other words, whatever terminology you
think is consistent, you'll probably have a tough time convincing
Smalltalkers to stop saying "type" if they did before. If you exclude
"message not understood" as a type error, then I think you're excluding
type errors from Smalltalk entirely, which contradicts the psychological
understanding again.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 22 '06 #194
Rob Thorpe <ro***********@antenova.com> wrote:
+---------------
| > So, will y'all just switch from using "dynamically typed" to "latently
| > typed", and stop talking about any real programs in real programming
| > languages as being "untyped" or "type-free", unless you really are
| > talking about situations in which human reasoning doesn't come into play?
|
| I agree with most of what you say except regarding "untyped".
|
| In machine language or most assembly the type of a variable is
| something held only in the mind of the programmer writing it, and
| nowhere else. In latently typed languages though the programmer can
| ask what the type of a particular value is. There is a vast
| difference between writing code in the latter kind of language and
| writing code in assembly.
|
| I would suggest that at least assembly should be referred to as
| "untyped".
+---------------

Another language which has *neither* latent ("dynamic") nor
manifest ("static") types is (was?) BLISS[1], in which, like
assembler, variables are "just" addresses[2], and values are
"just" a machine word of bits.

However, while in BLISS neither variables nor values are typed,
operators *are* "typed"; that is, each operator specifies how
it will treat its input machine word(s) and how the machine word(s)
of bits it produces should be interpreted. So "+" is (mod 2^wordsize)
[unsigned?] integer addition, and "FADR" is floating-point addition
with rounding (as opposed to "FADD", which truncates), and so on.
So this (legal but non-sensical!) BLISS:

x := .y FMPR (.x - 13);

would, in C, have to be written roughly like this:

((void*)x) = (void*)((float)(*(void*)y) * (float)((int)(*(void*)x) - 13));

On the PDP-10, at least, both of them would generate this assembler code:

move t1, x
subi t1, 13
fmpr t1, y
movem t1, x

So is BLISS "typed" or not? And if so, what is that kind of typing called?
-Rob

[1] "Basic Language for the Implementation of Systems Software",
see <http://en.wikipedia.org/wiki/BLISS>. Created at CMU,
added-to by DEC, used by CMU, DEC, and a few others in
the '70s-'80s.

[2] Well, approximately. A BLISS variable is, conceptually at least,
really a "byte-pointer" -- a triple of a word address, a byte-size,
and a byte-position-within-word -- even on target architectures
other than the DEC PDP-10 [which had hardware byte-pointer types].
The compiler (even on the PDP-10) optimizes away LDB/DPB accesses
into natively-addressible load/store sizes, when possible.

-----
Rob Warnock <rp**@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
Jun 22 '06 #195
Marshall <ma*************@gmail.com> wrote:
+---------------
| Anton van Straaten wrote:
| > 3. A really natural term to refer to types which programmers reason
| > about, even if they are not statically checked, is "latent types". It
| > captures the situation very well intuitively, and it has plenty of
| > precedent -- e.g. it's mentioned in the Scheme reports, R5RS and its
| > predecessors, going back at least a decade or so (haven't dug to check
| > when it first appeared).
|
| Can you be more explicit about what "latent types" means?
| I'm sorry to say it's not at all natural or intuitive to me.
| Are you referring to the types in the programmers head,
| or the ones at runtime, or what?
+---------------

Here's what the Scheme Standard has to say:

http://www.schemers.org/Documents/St...5rs-Z-H-4.html
1.1 Semantics
...
Scheme has latent as opposed to manifest types. Types are
associated with values (also called objects) rather than with variables.
(Some authors refer to languages with latent types as weakly typed
or dynamically typed languages.) Other languages with latent types
are APL, Snobol, and other dialects of Lisp. Languages with manifest
types (sometimes referred to as strongly typed or statically typed
languages) include Algol 60, Pascal, and C.

To me, the word "latent" means that when handed a value of unknown type
at runtime, I can look at it or perform TYPE-OF on it or TYPECASE or
something and thereby discover its actual type at the moment[1], whereas
"manifest" means that types[2] are lexically apparent in the code.
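That "look at it at runtime" sense of latency can be sketched in Python, where (much like Lisp's TYPE-OF) the type travels with the value, not the variable:

```python
# A sketch of latent typing: ask a value of unknown type what it is
# at the moment, rather than declaring it in the source.
def type_of(value):
    return type(value).__name__

assert type_of(42) == "int"
assert type_of("foo") == "str"
assert type_of(3.5) == "float"
```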
-Rob

[1] I added "at the moment", since I remembered that in Common Lisp
one may change the type of a value at runtime, specifically, a
CLOS instance may change type "out from under you" if someone
performs a CHANGE-CLASS on it or redefines its CLASS definition.
[Though maybe the latter is more a change of the *type* itself
rather than a change of the *object's* type per se.]

[2] Usually of variables or locations, but sometimes of expressions.
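Footnote [1]'s "type changes out from under you" has a rough Python analogue: reassigning an instance's class at runtime (an analogy to CLOS's CHANGE-CLASS; the class names here are invented for illustration):

```python
class Circle:
    pass

class Square:
    def corners(self):
        return 4

shape = Circle()
assert type(shape).__name__ == "Circle"

shape.__class__ = Square  # the object's type changes at runtime
assert type(shape).__name__ == "Square"
assert shape.corners() == 4
```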

-----
Rob Warnock <rp**@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607

Jun 22 '06 #196
Rob Warnock <rp**@rpw3.org> wrote:
Another language which has *neither* latent ("dynamic") nor
manifest ("static") types is (was?) BLISS[1], in which, like
assembler, variables are "just" addresses[2], and values are
"just" a machine word of bits.


I'm unsure that it's correct to describe any language as having no
latent typing, in the sense that's being used in this thread. It might
be more correct to say "no specified latent typing" and/or "no latent
typing beyond what is provided by the execution environment, including
the CPU, virtual memory system, etc." as appropriate. I am aware of no
hardware environment that really accepts all possible values for all
possible operations without the potential of somehow signaling a type
violation.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 22 '06 #197
David Hopwood wrote:
Marshall wrote:
Chris Smith wrote:
Marshall <ma*************@gmail.com> wrote:

I think what this highlights is the fact that our existing terminology
is not up to the task of representing all the possible design
choices we could make. Some parts of dynamic vs. static
are mutually exclusive; some parts are orthogonal.
Really? I can see that in a strong enough static type system, many
dynamic typing features would become unobservable and therefore would be
pragmatically excluded from any probable implementations... but I don't
see any other kind of mutual exclusion between the two.

Well, it strikes me that some of what the dynamic camp likes
is the actual *absence* of declared types, or the necessity
of having them.


So why aren't they happy with something like, say, Alice ML, which is
statically typed, but has a "dynamic" type and type inference? I mean
this as a serious question.


Note: I haven't yet worked with such a language, but here is my take anyway.

A statically typed language requires you to think about two models of
your program at the same time: the static type model and the dynamic
behavioral model. A static type system ensures that these two
_different_ (that's important!) perspectives are always in sync. This is
especially valuable in settings where you know your domain well and want
to rely on feedback by your compiler that you haven't made any mistakes
in encoding your knowledge. (A static type system based on type
inferencing doesn't essentially change the requirement to think in two
models at the same time.)

A dynamically typed language is especially well suited when you don't
(yet) have a good idea about your domain and you want to use programming
especially to explore that domain. Some static typing advocates claim
that static typing is still suitable for exploring domains because of
the compiler's feedback about the preliminary encoding of your
incomplete knowledge, but the disadvantages are a) that you still have
to think about two models at the same time when you don't even have
_one_ model ready and b) that you cannot just run your incomplete
program to see what it does as part of your exploration.

A statically typed language with a dynamic type treats dynamic typing as
the exception, not as the general approach, so this doesn't help a lot
in the second setting (or so it seems to me).

A language like Common Lisp treats static typing as the exception, so
you can write a program without static types / type checks, but later on
add type declarations as soon as you get a better understanding of your
domain. Common Lisp implementations like CMUCL or SBCL even include
static type inference to aid you here, which gives you warnings but
still allows you to run a program even in the presence of static type
errors. I guess the feedback you get from such a system is probably not
"strong" enough to be appreciated by static typing advocates in the
first setting (where you have a good understanding of your domain).
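The Common Lisp workflow described above (start untyped, add declarations as the domain becomes clearer) has a rough analogue in Python's optional annotations, checked by an external tool such as mypy but ignored at runtime, so the program runs either way (a sketch of the workflow, not a claim of equivalence to CMUCL/SBCL inference):

```python
# Exploratory, undeclared version: runnable immediately.
def area(width, height):
    return width * height

# Later, annotated version: same runtime behavior; a static checker
# can now warn about misuse without preventing a run.
def area_typed(width: float, height: float) -> float:
    return width * height

assert area(3, 4) == area_typed(3, 4) == 12
```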
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 22 '06 #198
Chris Smith wrote:
While this effort to salvage the term "type error" in dynamic languages
is interesting, I fear it will fail. Either we'll all have to admit
that "type" in the dynamic sense is a psychological concept with no
precise technical definition (as was at least hinted by Anton's post
earlier, whether intentionally or not) or someone is going to have to
propose a technical meaning that makes sense, independently of what is
meant by "type" in a static system.


What about this: You get a type error when the program attempts to
invoke an operation on values that are not appropriate for this operation.

Examples: adding numbers to strings; determining the string-length of a
number; applying a function on the wrong number of parameters; applying
a non-function; accessing an array with out-of-bound indexes; etc.
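Several of the examples above can be demonstrated directly in a dynamically checked language; here is a sketch in Python showing each attempted operation being rejected at runtime:

```python
def raises(exc, thunk):
    # Returns True iff calling thunk() raises the given exception.
    try:
        thunk()
    except exc:
        return True
    return False

assert raises(TypeError, lambda: 1 + "two")    # adding a number to a string
assert raises(TypeError, lambda: len(42))      # "length" of a number
assert raises(TypeError, lambda: (42)())       # applying a non-function
assert raises(IndexError, lambda: [1, 2][9])   # out-of-bounds index
```

Note that Python files the out-of-bounds case under IndexError rather than TypeError, which itself illustrates how fuzzy the boundary of "type error" is.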
In the terminology I'm suggesting, the object has no type in this language
(assuming we're talking about a Smalltalk-like language without any type system
extensions).


I suspect you'll see the Smalltalk version of the objections raised in
response to my post earlier. In other words, whatever terminology you
think is consistent, you'll probably have a tough time convincing
Smalltalkers to stop saying "type" if they did before. If you exclude
"message not understood" as a type error, then I think you're excluding
type errors from Smalltalk entirely, which contradicts the psychological
understanding again.


Sending a message to an object that does not understand that message is
a type error. The "message not understood" machinery can be seen either
as a way to escape from this type error in case it occurs and allow the
program to still do something useful, or to actually remove (some)
potential type errors. Which view you take probably depends on what your
concrete implementation of "message not understood" does. For example,
if it simply forwards the message to another object that is known to be
able to respond to it, then you remove a potential type error; however,
if it pops up a dialog box to ask the user how to continue from here, it
is still a type error, but just gives you a way to deal with it at runtime.
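The "forward the message to another object" variant can be sketched in Python using `__getattr__` as the hook (an analogy to Smalltalk's doesNotUnderstand:, not its actual machinery):

```python
class Forwarder:
    def __init__(self, delegate):
        self._delegate = delegate

    def __getattr__(self, name):
        # Reached only when normal lookup fails -- the would-be
        # "message not understood" moment -- so forward the message.
        return getattr(self._delegate, name)

f = Forwarder([3, 1, 2])
assert f.count(1) == 1            # 'count' forwarded to the list
assert sorted(f.copy()) == [1, 2, 3]
```

Under Pascal's reading, this removes a class of potential type errors, since the delegate is known to respond to the forwarded messages.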
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 22 '06 #199
> A statically typed language requires you to think about two models of
> your program at the same time: the static type model and the dynamic
> behavioral model. A static type system ensures that these two
> _different_ (that's important!) perspectives are always in sync.

I have trouble understanding your use of the wording "model of a
program". If it is a system that behaves according to the rules of your
program, then statements about your program should consider *all*
possible models. If it is a formal system that makes statements about
properties of your program, then the static type system is a simplified
model that is suitable for automatic analysis, and your runtime model is
in most cases nonexistent. Can you give a definition of a "model of a
program"? Can you explain why Lisp doesn't have two (SBCL does do a lot
of typechecking and gives type errors)?

> This is especially valuable in settings where you know your domain well
> and want to rely on feedback by your compiler that you haven't made any
> mistakes in encoding your knowledge. (A static type system based on
> type inferencing doesn't essentially change the requirement to think in
> two models at the same time.)

It is also valuable when you don't know your domain very well and you
want to rely on feedback by your compiler that you haven't made any
mistakes in encoding your limited knowledge.

> A dynamically typed language is especially well suited when you don't
> (yet) have a good idea about your domain and you want to use
> programming especially to explore that domain.

In the sense that you can start writing code without the compiler
pointing out all but the most glaring holes in your program, I agree.
Most of your arguments aren't very convincing, and the truth is that I
have seen Lisp programmers using the debugger to find out that you can't
add a number and a hashtable. The static view was not there, and the
dynamic view must have been too complicated, so they had nothing to
think about.
Immanuel

Jun 22 '06 #200
