
What is Expressiveness in a Computer Language

In March, I posted an essay “What is Expressiveness in a Computer
Language”, archived at:
http://xahlee.org/perl-python/what_i...esiveness.html

I was informed then that there is an academic paper written on this
subject.

On the Expressive Power of Programming Languages, by Matthias
Felleisen, 1990.
http://www.ccs.neu.edu/home/cobbe/pl...ive-slides.pdf

Has anyone read this paper? And, would anyone be interested in giving a
summary?

thanks.

Xah
xa*@xahlee.org
http://xahlee.org/

Jun 9 '06
Pascal Bourguignon wrote:
Pascal Costanza <pc@p-cos.net> writes:
Andreas Rossberg wrote:
Pascal Costanza wrote:

Consider a simple expression like 'a + b': In a dynamically typed
language, all I need to have in mind is that the program will
attempt to add two numbers. In a statically typed language, I
additionally need to know that there must be a guarantee that a and b
will always hold numbers.

I'm confused. Are you telling that you just write a+b in your
programs without trying to ensure that a and b are in fact numbers??
Basically, yes.

Note that this is a simplistic example. Consider, instead, sending a
message to an object, or calling a generic function, without ensuring
that there will be applicable methods for all possible cases. When I
get a "message not understood" exception, I can then decide whether
that kind of object shouldn't be a receiver in the first place, or
else whether I should define an appropriate method. I don't want to be
forced to decide this upfront, because either I don't want to be
bothered, or maybe I simply can't because I don't understand the
domain well enough yet, or maybe I want to keep a hook to be able to
update the program appropriately while it is running.


Moreover, a good proportion of the program and a good number of
algorithms don't even need to know the type of the objects they
manipulate.

For example, sort doesn't need to know what type the objects it sorts
are. It only needs to be given a function that is able to compare the
objects.
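
(As a hedged illustration of Pascal's point, here is a minimal Scheme
sketch, assuming a sort procedure like SRFI-95's (sort list less?);
the calls and results are illustrative:)

(sort '(3 1 2) <)                  ; => (1 2 3)
(sort '("pear" "apple") string<?)  ; => ("apple" "pear")

; sort never inspects the element types itself; all type-specific
; knowledge lives in the comparison function passed to it.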


But this is true also in a statically typed language with parametric
polymorphism.

[...] Why should adding a few functions or methods, and providing input
values of a new type be rejected from a statically checked point of
view by a compiled program that would be mostly bit-for-bit the same
with or without this new type?


It usually wouldn't be -- adding methods in a typical statically typed
OO language is unlikely to cause type errors (unless there is a naming
conflict, in some cases). Nor would adding new types or new functions.

(*Using* new methods without declaring them would cause an error, yes.)

--
David Hopwood <da******************@blueyonder.co.uk>
Jun 22 '06 #251
Pascal Bourguignon wrote:
But it's always possible at run-time that new functions and new
function calls be generated such as:

(let ((x "two"))
  (eval `(defmethod g ((self ,(type-of x))) t))
  (eval `(defmethod h ((x ,(type-of x)) (y string))
           (,(intern (format nil "DO-SOMETHING-WITH-A-~A" (type-of x))) x)
           (do-something-with-a-string y)))
  (funcall (compile nil `(let ((x ,x)) (lambda () (f x "Hi!"))))))

Will you execute the whole type-inference on the whole program "black
box" everytime you define a new function? Will you recompile all the
"black box" functions to take into account the new type the arguments
can be now?
Yes, why not?
This wouldn't be too efficient.


It's rare, so it doesn't need to be efficient. 'eval' is inherently
inefficient, anyway.

--
David Hopwood <da******************@blueyonder.co.uk>
Jun 22 '06 #252
Andreas Rossberg wrote:
Marshall wrote:
What prohibits us from describing an abstract type as a set of values?

If you identify an abstract type with the set of underlying values then
it is equivalent to the underlying representation type, i.e. there is no
abstraction.


I don't follow. Are you saying that a set cannot be described
intensionally? How is "the set of all objects that implement the
Foo interface" not sufficiently abstract? Is it possible you are
mixing in implementation concerns?


"Values" refers to the concrete values existent in the semantics of a
programming language. This set is usually infinite, but basically fixed.
To describe the set of "values" of an abstract type you would need
"fresh" values that did not exist before (otherwise the abstract type
would be equivalent to some already existent type). So you'd need at
least a theory for name generation or something similar to describe
abstract types in a types-as-sets metaphor.


Set theory has no difficulty with this. It's common, for example, to see
"the set of strings representing propositions" used in treatments of
formal systems.

--
David Hopwood <da******************@blueyonder.co.uk>
Jun 22 '06 #253
Vesa Karvonen wrote:
In comp.lang.functional Anton van Straaten <an***@appsolutions.com> wrote:
[...]

This static vs dynamic type thing reminds me of one article written by
Bjarne Stroustrup where he notes that "Object-Oriented" has become a
synonym for "good". More precisely, it seems to me that both camps
(static & dynamic) think that "typed" is a synonym for having
"well-defined semantics" or being "safe" and therefore feel the need
to be able to speak of their language as "typed" whether or not it
makes sense.
I reject this comparison. There's much more to it than that. The point
is that the reasoning which programmers perform when working with a
program in a latently-typed language bears many close similarities to
the purpose and behavior of type systems.

This isn't an attempt to jump on any bandwagons, it's an attempt to
characterize what is actually happening in real programs and with real
programmers. I'm relating that activity to type systems because that is
what it most closely relates to.

Usually, formal type theory ignores such things, because of course
what's in the programmer's head is outside the domain of the formal
definition of an untyped language. But all that means is that formal
type theory can't account for the entirety of what's happening in the
case of programs in untyped languages.

Unless you can provide some alternate theory of the subject that's
better than what I'm offering, it's not sufficient to complain "but that
goes beyond (static/syntactic) type theory". Yes, it goes beyond
traditional type theory, because it's addressing something which
traditional type theory doesn't address. There are reasons to connect
it to type theory, and if you can't see those reasons, you need to be
more explicit about why.
I agree. I think that instead of "statically typed" we should say
"typed" and instead of "(dynamically|latently) typed" we should say
"untyped".


The problem with "untyped" is that there are obvious differences in
typing behavior between the untyped lambda calculus and, say, a language
like Scheme (and many others). Like all latently-typed languages,
Scheme includes, in the language, mechanisms to tag values in a way that
supports checks which help the programmer to ensure that the program's
behavior matches the latent type scheme that the programmer has in mind.
See my other recent reply to Marshall for a more detailed explanation
of what I mean.

I'm suggesting that if a language classifies and tags values in a way
that supports the programmer in static reasoning about the behavior of
terms, that calling it "untyped" does not capture the entire picture,
even if it's technically accurate in a restricted sense (i.e. in the
sense that terms don't have static types that are known within the
language).

Let me come at this from another direction: what do you call the
classifications into number, string, vector etc. that a language like
Scheme does? And when someone writes a program which includes the
following lines, how would you characterize the contents of the comment:

; third : integer -> integer
(define (third n) (quotient n 3))

In my experience, answering these questions without using the word
"type" results in a lot of silliness. And if you do use "type", then
you're going to have to adjust the rest of your position significantly.
In a statically-checked language, people tend to confuse automated
static checking with the existence of types, because they're thinking in
a strictly formal sense: they're restricting their world view to what
they see "within" the language.

That is not unreasonable. You see, you can't have types unless you
have a type system. Types without a type system are like answers
without questions - it just doesn't make any sense.


The first point I was making is that *automated* checking has very
little to do with anything, and conflating static types with automated
checking tends to lead to a lot of confusion on both sides of the
static/dynamic fence.

But as to your point, latently typed languages have informal type
systems. Show me a latently typed language or program, and I can tell
you a lot about its type system or type scheme.

Soft type inferencers demonstrate this by actually defining a type
system and inferring type schemes for programs. That's a challenging
thing for an automated tool to do, but programmers routinely perform the
same sort of activity on an informal basis.
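
(A hedged sketch of what this looks like in practice: given an
unannotated Scheme definition, a soft type inferencer in the style of
Soft Scheme might recover a scheme like the one in the trailing
comment. The notation is illustrative, not the output of any
particular tool:)

(define (len lst)
  (if (null? lst)
      0
      (+ 1 (len (cdr lst)))))

; a soft typer might infer: len : (listof a) -> integer

No types appear in the source, yet a static type scheme can be
recovered from it -- the sense in which the types are "latent".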
But a program as seen by the programmer has types: the programmer
performs (static) type inference when reasoning about the program, and
debugs those inferences when debugging the program, finally ending up
with a program which has a perfectly good type scheme. It may be
messy compared to, say, an HM type scheme, and it's usually not proved to
be perfect, but that again is an orthogonal issue.

There is a huge hole in your argument above. Types really do not make
sense without a type system. To claim that a program has a type
scheme, you must first specify the type system. Otherwise it just
doesn't make any sense.


Again, the type system is informal. What you're essentially saying is
that only things that are formally defined make sense. But you can't
wish dynamically-checked languages out of existence. So again, how
would you characterize these issues in dynamically-checked languages?

Saying that it's just a matter of well-defined semantics doesn't do
anything to address the details of what's going on. I'm asking for a
more specific account than that.
Mathematicians operated for thousands of years without automated
checking of proofs, so you can't argue that because a
dynamically-checked program hasn't had its type scheme proved correct,
that it somehow doesn't have types. That would be a bit like arguing
that we didn't have Math until automated theorem provers came along.

No - not at all. First of all, mathematics has matured quite a bit
since the early days. I'm sure you've heard of the axiomatic method.
However, what you are missing is that to prove that your program has
types, you first need to specify a type system. Similarly, to prove
something in math you start by specifying [fill in the rest].


I agree, to make the comparison perfect, you'd need to define a type
system. But that's been done in various cases. So is your complaint
simply that most programmers are working with informal type systems?
I've already stipulated that.

However, I think that you want to suggest that those programmers are not
working with type systems at all.

This reminds me of a comedy skit which parodied the transparency of
Superman's secret identity: Clark Kent is standing in front of Lois Lane
and removes his glasses for some reason. Lois looks confused and says
"where did Clark go?" Clark quickly puts his glasses back on, and Lois
breathes a sigh of relief, "Oh, there you are, Clark".

The problem we're dealing with in this case is that anything that's not
formally defined is essentially claimed to not exist. It's lucky that
this isn't really the case, otherwise much of the world around us would
vanish in a puff of formalist skepticism.

We're discussing systems that operate on an informal basis: in this
case, the reasoning about the classification of values which flow
through terms in a dynamically-checked language. If you can produce a
useful formal model of those systems that doesn't omit significant
information, that's great, and I'm all ears.

However, claiming that e.g. using a universal type is a complete model
of what's happening misses the point: it doesn't account at all for the
reasoning process I've just described.
1. "Untyped" is really quite a misleading term, unless you're talking
about something like the untyped lambda calculus. That, I will agree,
can reasonably be called untyped.

Untyped is not misleading. "Typed" is not a synonym for "safe" or
"having well-defined semantics".


Again, your two suggested replacements don't come close to capturing
what I'm talking about. Without better alternatives, "type" is the
closest appropriate term. I'm qualifying it with the term "latent",
precisely to indicate that I'm not talking about formally-defined types.

I'm open to alternative terminology or ways of characterizing this, but
they need to address issues that exist outside the boundaries of formal
type systems, so simply applying terms from formal type theory is not
usually sufficient.
So, will y'all just switch from using "dynamically typed" to "latently
typed"

I won't (use "latently typed"). At least not without further
qualification.


This and my other recent post give a fair amount of qualification, so
let me know if you need anything else to be convinced. :)

But to be fair, I'll start using "untyped" if you can come up with a
satisfactory answer to the two questions I asked above, just before I
used the word "silliness".

Anton
Jun 22 '06 #254
Rob Thorpe wrote:
So, will y'all just switch from using "dynamically typed" to "latently
typed", and stop talking about any real programs in real programming
languages as being "untyped" or "type-free", unless you really are
talking about situations in which human reasoning doesn't come into
play? I think you'll find it'll help to reason more clearly about this
whole issue.

I agree with most of what you say except regarding "untyped".

In machine language or most assembly the type of a variable is
something held only in the mind of the programmer writing it, and
nowhere else. In latently typed languages, though, the programmer can
ask what the type of a particular value is. There is a vast
difference between writing code in the latter kind of language and
writing code in assembly.


The distinction you describe is pretty much exactly what I was getting
at. I may have put it poorly.
I would suggest that at least assembly should be referred to as
"untyped".


Yes, I agree.

While we're talking about "untyped", I want to expand on something I
wrote in a recent reply to Vesa Karvonen: I accept that "untyped" has a
technical definition which means that a language doesn't statically
assign types to terms. But this doesn't capture the distinction between
"truly" untyped languages, and languages which tag their values to
support the programmer's ability to think in terms of latent types.

The point of avoiding the "untyped" label is not because it's not
technically accurate (in a limited sense) -- it's because in the absence
of other information, it misses out on important aspects of the nature
of latently typed languages. My reply to Vesa goes into more detail
about this.

Anton
Jun 22 '06 #255
Andreas Rossberg wrote:
Rob Warnock wrote:

Here's what the Scheme Standard has to say:

http://www.schemers.org/Documents/St...5rs-Z-H-4.html
1.1 Semantics
...
Scheme has latent as opposed to manifest types. Types are associated
with values (also called objects) rather than with variables.
(Some authors refer to languages with latent types as weakly typed
or dynamically typed languages.) Other languages with latent types
are APL, Snobol, and other dialects of Lisp. Languages with manifest
types (sometimes referred to as strongly typed or statically typed
languages) include Algol 60, Pascal, and C.

Maybe this is the original source of the myth that static typing is all
about assigning types to variables...

With all my respect to the Scheme people, I'm afraid this paragraph is
pretty off, no matter where you stand. Besides the issue just mentioned
it equates "manifest" with static types. I understand "manifest" to mean
"explicit in code", which of course is nonsense - static typing does not
require explicit types. Also, I never heard "weakly typed" used in the
way they suggest - in my book, C is a weakly typed language (= typed,
but grossly unsound).


That text goes back at least 20 years, to R3RS in 1986, and possibly
earlier, so part of what it represents is simply changing use of
terminology, combined with an attempt to put Scheme in context relative
to multiple languages and terminology usages. The fact that we're still
discussing this now, and haven't settled on terminology acceptable to
all sides, illustrates the problem.

The Scheme report doesn't actually say anything about how latent types
relate to static types, in the way that I've recently characterized the
relationship here. However, I thought this was a good place to explain
how I got to where I am on this subject.

There's been a lot of work done on soft type inference in Scheme, and
other kinds of type analysis. The Scheme report talks about latent
types as meaning "types are associated with values". While this might
cause some teeth-grinding amongst type theorists, it's meaningful enough
when you look at simple cases: cases where the type of a term is an
exact match for the type tags of the values that flow through that term.

When you get to more complex cases, though, most type inferencers for
Scheme assign traditional static-style types to terms. If you think
about this in conjunction with the term "latent types", it's an obvious
connection to make that what the inferencer is doing is recovering types
that are latent in the source.

Once that connection is made, it's obvious that the tags associated with
values are not the whole story: that the conformance of one or more
values to a "latent type" may be checked by a series of tag checks, in
different parts of a program (i.e. before, during and after the
expression in question is evaluated). I gave a more detailed
description of how latent types relate to tags in an earlier reply to
Marshall (Spight, not Joe).

Anton
Jun 23 '06 #256
Joe Marshall wrote:

That's the important point: I want to run broken code.
I want to make sure I understand. I can think of several things
you might mean by this. It could be:
1) I want to run my program, even though I know parts of it
are broken, because I think there are parts that are not broken
and I want to try them out.
2) I want to run my program, even though it is broken, and I
want to run right up to a broken part and trap there, so I can
use the runtime facilities of the language to inspect what's
going on.

I want to run
as much of the working fragments as I can, and I want a `safety net' to
prevent me from performing undefined operations, but I want the safety
net to catch me at the *last* possible moment.


This statement is interesting, because the conventional wisdom (at
least as I'm used to hearing it) is that it is best to catch bugs
at the *first* possible moment. But I think maybe we're talking
about different continua here. The last last last possible moment
is after the software has shipped to the customer, and I'm pretty
sure that's not what you mean. I think maybe you mean something
more like 2) above.
Marshall

Jun 23 '06 #257
Joachim Durchholz wrote:
Marshall schrieb:
immutable = can't change
vary-able = can change

Clearly a contradiction in terms.
Not in mathematics.


So stipulated.

However, it *is* a contradiction in English. Now, we often redefine
or narrow terms for technical fields. However, when we find
ourselves in a situation where we've inverted the English
meaning with the technical meaning, or ended up with a
contradiction, it ought to give us pause.

The understanding there is that a "variable" varies - not over time, but
according to the whim of the usage. (E.g. if a function is displayed in
a graph, the parameter varies along the X axis. If it's used within a
term, the parameter varies depending on how it's used. Etc.)

Similarly for computer programs.
Of course, if values are immutable, the value associated with a
parameter name cannot vary within the same invocation - but it can still
vary from one invocation to the next.


Again, stipulated. In fact I conceded this was a good point when it
was first mentioned. However, we generally refer to parameters
to functions as "parameters" -- you did it yourself above.

What we generally (in programming) call variables are locals
and globals. If the languages supports an update operation
on those variables, then calling them variables makes sense.
But "variable" has become such a catch-all term that we call

public static final int FOO = 7;

a variable, even though it can never, ever vary.

That doesn't make any sense.

If we care about precision in our terminology, we ought to
distinguish among parameters, named values that support
destructive update, named values that don't support
destructive update, and possibly other things.
Marshall

Jun 23 '06 #258
Timo Stamm wrote:

This is actually one of the most interesting threads I have read in a
long time. If you ignore the evangelism, there is a lot of high-quality
information and first-hand experience you couldn't find in a dozen books.


Hear hear! This is an *excellent* thread. The evangelism is at
rock-bottom and the open exploration of other people's way of thinking
is at what looks to me like an all-time high.
Marshall

Jun 23 '06 #259
Anton van Straaten wrote:
Vesa Karvonen wrote:

This static vs dynamic type thing reminds me of one article written by
Bjarne Stroustrup where he notes that "Object-Oriented" has become a
synonym for "good". More precisely, it seems to me that both camps
(static & dynamic) think that "typed" is a synonym for having
"well-defined semantics" or being "safe" and therefore feel the need
to be able to speak of their language as "typed" whether or not it
makes sense.
I reject this comparison. There's much more to it than that.


I agree that there's more to it than that. I also agree, however, that
Vesa's observation is true, and is a big part of the reason why it's
difficult to discuss this topic. I don't recall who said what at this
point, but earlier today someone else posted -- in this same thread --
the idea that static type "advocates" want to classify some languages as
untyped in order to put them in the same category as assembly language
programming. That's something I've never seen, and I think it's far
from the goal of pretty much anyone; but clearly, *someone* was
concerned about it. I don't know if much can be done to clarify this
rhetorical problem, but it does exist.

The *other* bit that's been brought up in this thread is that the word
"type" is just familiar and comfortable for programmers working in
dynamically typed languages, and that they don't want to change their
vocabulary.

The *third* thing that's brought up is that there is a (to me, somewhat
vague) conception going around that the two really ARE varieties of the
same thing. I'd like to pin this down more, and I hope we get there,
but for the time being I believe that this impression is incorrect. At
the very least, I haven't seen a good way to state any kind of common
definition that withstands scrutiny. There is always an intuitive word
involved somewhere which serves as an escape hatch for the author to
retain some ability to make a judgement call, and that of course
sabotages the definition. So far, that word has varied through all of
"type", "type error", "verify", and perhaps others... but I've never
seen anything that allows someone to identify some universal concept of
typing (or even the phrase "dynamic typing" in the first place) in a way
that doesn't appeal to intuition.
The point
is that the reasoning which programmers perform when working with a
program in a latently-typed language bears many close similarities to
the purpose and behavior of type systems.
Undoubtedly, some programmers sometimes perform reasoning about their
programs which could also be performed by a static type system. This is
fairly natural, since static type systems specifically perform tractable
analyses of programs (Pierce even uses the word "tractable" in the
definition of a type system), and human beings are often (though not
always) best-served by trying to solve tractable problems as well.
There are reasons to connect
it to type theory, and if you can't see those reasons, you need to be
more explicit about why.
Let me pipe up, then, as saying that I can't see those reasons; or at
least, if I am indeed seeing the same reasons that everyone else is,
then I am unconvinced by them that there's any kind of rigorous
connection at all.
I'm suggesting that if a language classifies and tags values in a way
that supports the programmer in static reasoning about the behavior of
terms, that calling it "untyped" does not capture the entire picture,
even if it's technically accurate in a restricted sense (i.e. in the
sense that terms don't have static types that are known within the
language).
It is, nevertheless, quite appropriate to call the language "untyped" if
I am considering static type systems. I seriously doubt that this usage
in any way misleads anyone into assuming the absence of any mental
processes on the part of the programmer. I hope you agree. If not,
then I think you significantly underestimate a large category of people.
The first point I was making is that *automated* checking has very
little to do with anything, and conflating static types with automated
checking tends to lead to a lot of confusion on both sides of the
static/dynamic fence.
I couldn't disagree more. Rather, when you're talking about static
types (or just "types" in most research literature that I've seen), then
the realm of discussion is specifically defined to be the very set of
errors that are automatically caught and flagged by the language
translator. I suppose that it is possible to have an unimplemented type
system, but it would be unimplemented only because someone hasn't felt
the need nor gotten around to it. Being implementABLE is a crucial part
of the definition of a static type system.

I am beginning to suspect that you're making the converse of the error I
made earlier in the thread. That is, you may be saying things regarding
the psychological processes of programmers and such that make sense when
discussing dynamic types, and in any case I haven't seen any kind of
definition of dynamic types that is more acceptable yet; but it's
completely irrelevant to static types. Static types are not fuzzy -- if
they were fuzzy, they would cease to be static types -- and they are not
a phenomenon of psychology. To try to redefine static types in this way
not only ignores the very widely accepted basis of an entire field of
existing literature, but also leads to false ideas such as that there is
some specific definable set of problems that type systems are meant to
solve.

Skipping ahead a bit...
I agree, to make the comparison perfect, you'd need to define a type
system. But that's been done in various cases.
I don't think that has been done, in the case of dynamic types. It has
been done for static types, but much of what you're saying here is in
contradiction to the definition of a type system in that sense of the
word.
The problem we're dealing with in this case is that anything that's not
formally defined is essentially claimed to not exist.


I see it as quite reasonable when there's an effort by several
participants in this thread to either imply or say outright that static
type systems and dynamic type systems are variations of something
generally called a "type system", and given that static type systems are
quite formally defined, that we'd want to see a formal definition for a
dynamic type system before accepting the proposition that they are of a
kind with each other. So far, all the attempts I've seen to define a
dynamic type system seem to reduce to just saying that there is a
well-defined semantics for the language.

I believe that's unacceptable for several reasons, but the most
significant of them is this. It's not reasonable to ask anyone to
accept that static type systems gain their essential "type system-ness"
from the idea of having well-defined semantics. From the perspective of
a statically typed language, this looks like a group of people getting
together and deciding that the real "essence" of what it means to be a
type system is... and then naming something that's so completely
non-essential that we don't generally even mention it in lists of the
benefits of static types, because we have already assumed that it's true
of all languages except C, C++, and assembly language.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 23 '06 #260
Anton van Straaten wrote:
Marshall wrote:
Can you be more explicit about what "latent types" means?
I'm sorry to say it's not at all natural or intuitive to me.
Are you referring to the types in the programmers head,
or the ones at runtime, or what?
Sorry, that was a huge omission. (What I get for posting at 3:30am.)


Thanks for the excellent followup.

The short answer is that I'm most directly referring to "the types in
the programmer's head".
In the database theory world, we speak of three levels: conceptual,
logical, physical. In a dbms, these might roughly be compared to
business entities described in a requirements doc, (or in the
business owners' heads), the (logical) schema encoded in the
dbms, and the b-tree indices the dbms uses for performance.

So when you say "latent types", I think "conceptual types."

The thing about this is, both the Lisp and the Haskell programmers
are using conceptual types (as best I can tell.) And also, the
conceptual/latent types are not actually a property of the
program; they are a property of the programmer's mental
model of the program.

It seems we have languages:
with or without static analysis
with or without runtime type information (RTTI or "tags")
with or without (runtime) safety
with or without explicit type annotations
with or without type inference

Wow. And I don't think that's a complete list, either.

I would be happy to abandon "strong/weak" as terminology
because I can't pin those terms down. (It's not clear what
they would add anyway.)

A more complete informal summary is as follows:

Languages with latent type systems typically don't include type
declarations in the source code of programs. The "static" type scheme
of a given program in such a language is thus latent, in the English
dictionary sense of the word, of something that is present but
undeveloped. Terms in the program may be considered as having static
types, and it is possible to infer those types, but it isn't necessarily
easy to do so automatically, and there are usually many possible static
type schemes that can be assigned to a given program.

Programmers infer and reason about these latent types while they're
writing or reading programs. Latent types become manifest when a
programmer reasons about them, or documents them e.g. in comments.
Uh, oh, a new term, "manifest." Should I worry about that?

(As has already been noted, this definition may seem at odds with the
definition given in the Scheme report, R5RS, but I'll address that in a
separate post.)

There's a close connection between latent types in the sense I've
described, and the "tagged values" present at runtime. However, as type
theorists will tell you, the tags used to tag values at runtime, as e.g.
a number or a string or a FooBar object, are not the same thing as the
sort of types which statically-typed languages have.

A simple example of the distinction can be seen in the type of a
function. Using Javascript as a lingua franca:

function timestwo(x) { return x * 2 }

In a statically-typed language, the type of a function like this might
be something like "number -> number", which tells us three things: that
timestwo is a function; that it accepts a number argument; and that it
returns a number result.

But if we ask Javascript what it thinks the type of timestwo is, by
evaluating "typeof timestwo", it returns "function". That's because the
value bound to timestwo has a tag associated with it which says, in
effect, "this value is a function".
Well, darn. It strikes me that that's just a decision the language
designers made, *not* to record complete RTTI. (Is it going to be
claimed that
there is an *advantage* to having only incomplete RTTI? It is a
serious question.)

But "function" is not a useful type. Why not? Because if all you know
is that timestwo is a function, then you have no idea what an expression
like "timestwo(foo)" means. You couldn't write working programs, or
read them, if all you knew about functions was that they were functions.
As a type, "function" is incomplete.
Yes, function is a parameterized type, and they've left out the
parameter values.

By my definition, though, the latent type of timestwo is "number ->
number". Any programmer looking at the function can figure out that
this is its type, and programmers do exactly that when reasoning about a
program.
Gotcha.

(Aside: technically, you can pass timestwo something other than a
number, but then you get NaN back, which is usually not much use except
to generate errors. I'll ignore that here; latent typing requires being
less rigorous about some of these issues.)

So, where do tagged values fit into this? Tags help to check types at
runtime, but that doesn't mean that there's a 1:1 correspondence between
tags and types. For example, when an expression such as "timestwo(5) *
3" is evaluated, three checks occur that are relevant to the type of
timestwo:

1. Before the function call takes place, a check ensures that timestwo
is a function.

2. Before the multiplication in "x * 2", a check ensures that x is a number.

3. When timestwo returns, before the subsequent multiplication by 3, a
check ensures that the return type of timestwo is a number.

These three checks correspond to the three pieces of information
contained in the function type signature "number -> number".
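
(A minimal sketch making those checks explicit, in Scheme rather than
Javascript; the predicate calls in the comments stand in for what an
implementation does internally, and are an illustration, not any
particular implementation's behavior:)

(define (timestwo x) (* x 2))

; Evaluating (* (timestwo 5) 3) implicitly involves:
;   1. (procedure? timestwo)   -- before the call
;   2. (number? x)             -- inside timestwo, before (* x 2)
;   3. (number? (timestwo 5))  -- before the outer multiplication
(* (timestwo 5) 3)  ; => 30

Each check corresponds to one piece of the latent signature "number ->
number", but no single check ever sees that signature as a whole.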

However, these dynamic checks still don't actually tell us the type of a
function. All they do is check that in a particular case, the values
involved are compatible with the type of the function. In many cases,
the checks may infer a signature that's either more or less specific
than the function's type, or they may infer an incomplete signature --
e.g., the return type doesn't need to be checked when evaluating "arr[i]
= timestwo(5)".

I used a function just as an example. There are many other cases where
a value's tag doesn't match the static (or latent) type of the terms
through which it flows. A simple example is an expression such as:

(flag ? 5 : "foo")

Here, the latent type of this expression could be described as "number |
string". There won't be a runtime tag anywhere which represents that
type, though, since the language implementation never deals with the
actual type of expressions, except in those degenerate cases where the
type is so simple that it happens to be a 1:1 match to the corresponding
tag. It's only the programmer that "knows" that this expression has
that type.


Hmmm. Another place where the static type isn't the same thing as
the runtime type occurs in languages with subtyping.

Question: if a language *does* record complete RTTI, and the
languages does *not* have subtyping, could we then say that
the runtime type information *is* the same as the static type?
Marshall

Jun 23 '06 #261
Chris Smith <cd*****@twu.net> wrote:
I see it as quite reasonable when there's an effort by several
participants in this thread to either imply or say outright that static
type systems and dynamic type systems are variations of something
generally called a "type system" [...]


I didn't say that right. Obviously, no one is making all that great an
EFFORT to say anything. Typing is not too difficult, after all. What I
meant is that there's an argument being made to that effect.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 23 '06 #262
Chris Smith wrote:
I don't recall who said what at this
point, but earlier today someone else posted -- in this same thread --
the idea that static type "advocates" want to classify some languages as
untyped in order to put them in the same category as assembly language
programming. That's something I've never seen, and I think it's far
from the goal of pretty much anyone; but clearly, *someone* was
concerned about it. I don't know if much can be done to clarify this
rhetorical problem, but it does exist.
For the record, I'm not concerned about that problem as such. However,
I do think that characterizations of dynamically typed languages from a
static type perspective tend to oversimplify, usually because they
ignore the informal aspects which static type systems don't capture.

Terminology is a big part of this, so there are valid reasons to be
careful about how static type terminology and concepts are applied to
languages which lack formally defined static type systems.
The *other* bit that's been brought up in this thread is that the word
"type" is just familiar and comfortable for programmers working in
dynamically typed languages, and that they don't want to change their
vocabulary.
What I'm suggesting is actually a kind of bridge between the two
positions. The dynamically typed programmer tends to think in terms of
values having types, rather than variables. What I'm pointing out is
that even those programmers reason about something much more like static
types than they might realize; and that there's a connection between
that reasoning and static types, and also a connection to the tags
associated with values.

If you wanted to take the word "type" and have it mean something
reasonably consistent between the static and dynamic camps, what I'm
suggesting at least points in that direction. Obviously, nothing in the
dynamic camp is perfectly isomorphic to a real static type, which is why
I'm qualifying the term as "latent type", and attempting to connect it
to both static types and to tags.
The *third* thing that's brought up is that there is a (to me, somewhat
vague) conception going around that the two really ARE varieties of the
same thing. I'd like to pin this down more, and I hope we get there,
but for the time being I believe that this impression is incorrect. At
the very least, I haven't seen a good way to state any kind of common
definition that withstands scrutiny. There is always an intuitive word
involved somewhere which serves as an escape hatch for the author to
retain some ability to make a judgement call, and that of course
sabotages the definition. So far, that word has varied through all of
"type", "type error", "verify", and perhaps others... but I've never
seen anything that allows someone to identify some universal concept of
typing (or even the phrase "dynamic typing" in the first place) in a way
that doesn't appeal to intuition.
It's obviously going to be difficult to formally pin down something that
is fundamentally informal. It's fundamentally informal because if
reasoning about the static properties of terms in DT languages were
formalized, it would essentially be something like a formal type system.

However, there are some pretty concrete things we can look at. One of
them, which as I've mentioned elsewhere is part of what led me to my
position, is to look at what a soft type inferencer does. It takes a
program in a dynamically typed language, and infers a static type scheme
for it (obviously, it also defines an appropriate type system for the
language.) This has been done in both Scheme and Erlang, for example.

How do you account for such a feat in the absence of something like
latent types? If there's no static type-like information already
present in the program, how is it possible to assign a static type
scheme to a program without dramatically modifying its source?

I think it's reasonable to look at a situation like that and conclude
that even DT programs contain information that corresponds to types.
Sure, it's informal, and sure, it's usually messy compared to an
explicitly defined equivalent. But the point is that there is
"something" there that looks so much like static types that it can be
identified and formalized.
Undoubtedly, some programmers sometimes perform reasoning about their
programs which could also be performed by a static type system.
I think that's a severe understatement. Programmers always reason about
things like the types of arguments, the types of variables, the return
types of functions, and the types of expressions. They may not do
whole-program inference and proof in the way that a static type system
does, but they do it locally, all the time, every time.

BTW, I notice you didn't answer any of the significant questions which I
posed to Vesa. So let me pose one directly to you: how should I rewrite
the first sentence in the preceding paragraph to avoid appealing to an
admittedly informal notion of type? Note, also, that I'm using the word
in the sense of static properties, not in the sense of tagged values.
There are reasons to connect
it to type theory, and if you can't see those reasons, you need to be
more explicit about why.

Let me pipe up, then, as saying that I can't see those reasons; or at
least, if I am indeed seeing the same reasons that everyone else is,
then I am unconvinced by them that there's any kind of rigorous
connection at all.


For now, I'll stand on what I've written above. When I see if or how
that doesn't convince you, I can go further.
It is, nevertheless, quite appropriate to call the language "untyped" if
I am considering static type systems.
I agree that in the narrow context of considering to what extent a
dynamically typed language has a formal static type system, you can call
it untyped. However, what that essentially says is that formal type
theory doesn't have the tools to deal with that language, and you can't
go much further than that. As long as that's what you mean by untyped,
I'm OK with it.
I seriously doubt that this usage
in any way misleads anyone into assuming the absence of any mental
processes on the part of the programmer. I hope you agree.
I didn't suggest otherwise (or didn't mean to). However, the term
"untyped" does tend to lead to confusion, to a lack of recognition of
the significance of all the static information in a DT program that is
outside the bounds of a formal type system, and the way that runtime tag
checks relate to that static information.

One misconception that occurs is the assumption that all or most of the
static type information in a statically-typed program is essentially
nonexistent in a dynamically-typed program, or at least is no longer
statically present. That can easily be demonstrated to be false, of
course, and I'm not arguing that experts usually make this mistake.
If not,
then I think you significantly underestimate a large category of people.
If you think there's no issue here, I think you significantly
overestimate a large category of people. Let's declare that line of
argument a draw.
The first point I was making is that *automated* checking has very
little to do with anything, and conflating static types with automated
checking tends to lead to a lot of confusion on both sides of the
static/dynamic fence.

I couldn't disagree more. Rather, when you're talking about static
types (or just "types" in most research literature that I've seen), then
the realm of discussion is specifically defined to be the very set of
errors that are automatically caught and flagged by the language
translator. I suppose that it is possible to have an unimplemented type
system, but it would be unimplemented only because someone hasn't felt
the need nor gotten around to it. Being implementABLE is a crucial part
of the definition of a static type system.


I agree with the latter sentence. However, it's nevertheless the case
that it's common to confuse "type system" with "compile-time checking".
This doesn't help reasoning in debates like this, where the existence
of type systems in languages that don't have automated static checking
is being examined.
I am beginning to suspect that you're making the converse of the error I
made earlier in the thread. That is, you may be saying things regarding
the psychological processes of programmers and such that make sense when
discussing dynamic types, and in any case I haven't seen any kind of
definition of dynamic types that is more acceptable yet; but it's
completely irrelevant to static types. Static types are not fuzzy -- if
they were fuzzy, they would cease to be static types -- and they are not
a phenomenon of psychology. To try to redefine static types in this way
not only ignores the very widely accepted basis of an entire field of
existing literature, but also leads to false ideas such as that there is
some specific definable set of problems that type systems are meant to
solve.
I'm not trying to redefine static types. I'm observing that there's a
connection between the static properties of untyped programs, and static
types; and attempting to characterize that connection.

You need to be careful about being overly formalist, considering that in
real programming languages, the type system does have a purpose which
has a connection to informal, fuzzy things in the real world. If you
were a pure mathematician, you might get away with claiming that type
systems are just a self-contained symbolic game which doesn't need any
connections beyond its formal ruleset.

Think of it like this: the more ambitious a language's type system is,
the fewer uncaptured static properties remain in the code of programs in
that language. However, there are plenty of languages with rather weak
static type systems. In those cases, code has more static properties
that aren't captured by the type system. I'm pointing out that in many
of these cases, those properties resemble types, to the point that it
can make sense to think of them and reason about them as such, applying
the same sort of reasoning that an automated type inferencer applies.

If you disagree, then I'd be interested to hear your answers to the two
questions I posed to Vesa, and the related one I posed to you above,
about what else to call these things.
I agree, to make the comparison perfect, you'd need to define a type
system. But that's been done in various cases.

I don't think that has been done, in the case of dynamic types.


I was thinking of the type systems designed for soft type inferencers;
as well as those cases where e.g. a statically-typed subset of an
untyped language is defined, as in the case of PreScheme.

But in such cases, you end up with a situation where a program in these
systems, while in some sense statically typed, is also a valid untyped
program. There's
also nothing to stop someone familiar with such things programming in a
type-aware style - in fact, books like Felleisen et al's "How to Design
Programs" encourage that, recommending that functions be annotated with
comments expressing their type. Examples:

;; product : (listof number) -> number

;; copy : N X -> (listof X)

You also see something similar in e.g. many Erlang programs. In these
cases, reasoning about types is done explicitly by the programmer, and
documented.
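
(For concreteness, here is what such an annotated definition might look
like in full; the body is a hypothetical filling-in of the HtDP-style
example above, not code taken from the book:)

;; product : (listof number) -> number
;; compute the product of a list of numbers
(define (product lst)
  (if (null? lst)
      1
      (* (car lst) (product (cdr lst)))))

(product '(2 3 4))  ; => 24

The comment carries exactly the information a static signature would;
the language simply never checks it.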

What would you call the descriptions in those comments? Once you tell
me what I should call them other than "type" (or some qualified variant
such as "latent type"), then we can compare terminology and decide which
is more appropriate.
It has
been done for static types, but much of what you're saying here is in
contradiction to the definition of a type system in that sense of the
word.
Which is why I'm using a qualified version of the term.
The problem we're dealing with in this case is that anything that's not
formally defined is essentially claimed to not exist.

I see it as quite reasonable when there's an effort by several
participants in this thread to either imply or say outright that static
type systems and dynamic type systems are variations of something
generally called a "type system", and given that static type systems are
quite formally defined, that we'd want to see a formal definition for a
dynamic type system before accepting the proposition that they are of a
kind with each other.


A complete formal definition of what I'm talking about may be impossible
in principle, because if you could completely formally define it, you'd
have a static type system.

If that makes you throw up your hands, then all you're saying is that
you're unwilling to deal with a very real phenomenon that has obvious
connections to type theory, examples of which I've given above. That's
your choice, but at the same time, you have to give up exclusive claim
to any variation of the word "type".

Terms are used in a context, and it's perfectly reasonable to call
something a "latent type" or even a "dynamic type" in a certain context
and point out connections between those terms and their cousins (or if
you insist, their completely unrelated namesakes) static types.
So far, all the attempts I've seen to define a
dynamic type system seem to reduce to just saying that there is a
well-defined semantics for the language.
That's a pretty strong claim, considering you have so far ducked the
most important questions I raised in the post you replied to.
I believe that's unacceptable for several reasons, but the most
significant of them is this. It's not reasonable to ask anyone to
accept that static type systems gain their essential "type system-ness"
from the idea of having well-defined semantics.
The definition of static type system is not in question. However,
realistically, as I pointed out above, you have to acknowledge that type
systems exist in, and are inextricably connected to, a larger, less
formal context. (At least, you have to acknowledge that if you're
interested in programs that do anything related to the real world.) And
outside the formally defined borders of static type systems, there are
static properties that bear a pretty clear relationship to types.

Closing your eyes to this and refusing to acknowledge any connection
doesn't achieve anything. In the absence of some other account of the
phenomena in question, (an informal version of) types turn out to be a
pretty convenient way to deal with the situation.
From the perspective of
a statically typed language, this looks like a group of people getting
together and deciding that the real "essence" of what it means to be a
type system is...
There's a sense in which one can say that yes, the informal types I'm
referring to have an interesting degree of overlap with static types;
and also that static types do, loosely speaking, have the "purpose" of
formalizing the informal properties I'm talking about.

But I hardly see why such an informal characterization should bother
you. It doesn't affect the definition of static type. It's not being
held up as a foundation for type theory. It's simply a way of dealing
with the realities of programming in a dynamically-checked language.

There are some interesting philosophical issues there, to be sure
(although only if you're willing to stray outside the formal), but you
don't have to worry about those unless you want to.
and then naming something that's so completely
non-essential that we don't generally even mention it in lists of the
benefits of static types, because we have already assumed that it's true
of all languages except C, C++, and assembly language.


This is based on the assumption that all we're talking about is
"well-defined semantics". However, there's much more to it than that.
I need to hear your characterization of the properties I've described
before I can respond.

Anton
Jun 23 '06 #263
Marshall wrote:
The short answer is that I'm most directly referring to "the types in
the programmer's head".

In the database theory world, we speak of three levels: conceptual,
logical, physical. In a dbms, these might roughly be compared to
business entities described in a requirements doc, (or in the
business owners' heads), the (logical) schema encoded in the
dbms, and the b-tree indices the dbms uses for performance.

So when you say "latent types", I think "conceptual types."


That sounds plausible, but of course at some point we need to pick a
term and attempt to define it. What I'm attempting to do with "latent
types" is to point out and emphasize their relationship to static types,
which do have a very clear, formal definition. Despite some people's
skepticism, that definition gives us a lot of useful stuff that can be
applied to what I'm calling latent types.
The thing about this is, both the Lisp and the Haskell programmers
are using conceptual types (as best I can tell.)
Well, a big difference is that the Haskell programmers have language
implementations that are clever enough to tell them, statically, a very
precise type for every term in their program. Lisp programmers usually
don't have that luxury. And in the Haskell case, the conceptual type
and the static type match very closely.

There can be differences, e.g. a programmer might have knowledge about
the input to the program or some programmed constraint that Haskell
isn't capable of inferring, that allows them to figure out a more
precise type than Haskell can. (Whether that type can be expressed in
Haskell's type system is a separate question.)

In that case, you could say that the conceptual type is different than
the inferred static type. But most of the time, the human is reasoning
about pretty much the same types as the static types that Haskell
infers. Things would get a bit confusing otherwise.
And also, the
conceptual/latent types are not actually a property of the
program;
That's not exactly true in the Haskell case (or other languages with
static type inference), assuming you accept the static/conceptual
equivalence I've drawn above.

Although static types are often not explicitly written in the program in
such languages, they are unambiguously defined and automatically
inferrable. They are a static property of the program, even if in many
cases those properties are only implicit with respect to the source
code. You can ask the language to tell you what the type of any given
term is.
they are a property of the programmer's mental
model of the program.
That's more accurate. In languages with type inference, the programmer
still has to figure out what the implicit types are (or ask the language
to tell her).

You won't get any argument from me that this figuring out of implicit
types in a Haskell program is quite similar to what a Lisp, Scheme, or
Python programmer does. That's one of the places the connection between
static types and what I call latent types is the strongest.

(However, I think I just heard a sound as though a million type
theorists howled in unison.[*])
[*] most obscure inadvertent pun ever.
It seems we have languages:
with or without static analysis
with or without runtime type information (RTTI or "tags")
with or without (runtime) safety
with or without explicit type annotations
with or without type inference

Wow. And I don't think that's a complete list, either.
Yup.
I would be happy to abandon "strong/weak" as terminology
because I can't pin those terms down. (It's not clear what
they would add anyway.)
I wasn't following the discussion earlier, but I agree that strong/weak
don't have strong and unambiguous definitions.
Uh, oh, a new term, "manifest." Should I worry about that?
Well, people have used the term "manifest type" to refer to a type
that's explicitly apparent in the source, but I wouldn't worry about it.
I just used the term to imply that at some point, the idea of "latent
type" has to be converted to something less latent. Once you explicitly
identify a type, it's no longer latent to the entity doing the identifying.
But if we ask Javascript what it thinks the type of timestwo is, by
evaluating "typeof timestwo", it returns "function". That's because the
value bound to timestwo has a tag associated with it which says, in
effect, "this value is a function".

Well, darn. It strikes me that that's just a decision the language
designers made, *not* to record complete RTTI.


No, there's more to it. There's no way for a dynamically-typed language
to figure out that something like the timestwo function I gave has the
type "number -> number" without doing type inference, by examining the
source of the function, at which point it pretty much crosses the line
into being statically typed.
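
To make that concrete with a Python analogue (Python used only as a
neutral example; the body is my guess at the obvious doubling function
from the earlier timestwo example):

def timestwo(x):
    return x * 2    # latent type, in the programmer's head: number -> number

print(type(timestwo))     # <class 'function'> -- the tag says only "function"
print(type(timestwo(3)))  # <class 'int'> -- a tag for one particular result
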
(Is it going to be claimed that
there is an *advantage* to having only incomplete RTTI? It is a
serious question.)
More than an advantage, it's difficult to do it any other way. Tags are
associated with values. Types in the type theory sense are associated
with terms in a program. All sorts of values can flow through a given
term, which means that types can get complicated (even in a nice clean
statically typed language). The only way to reason about types in that
sense is statically - associating tags with values at runtime doesn't
get you there.

This is the sense in which the static type folk object to the term
"dynamic type" - because the tags/RTTI are not "types" in the type
theory sense.

Latent types as I'm describing them are intended to more closely
correspond to static types, in terms of the way in which they apply to
terms in a program.
Hmmm. Another place where the static type isn't the same thing as
the runtime type occurs in languages with subtyping.
Yes.
Question: if a language *does* record complete RTTI, and the
languages does *not* have subtyping, could we then say that
the runtime type information *is* the same as the static type?


You'd still have a problem with function types, as I mentioned above,
as well as expressions like the conditional one I gave:

(flag ? 5 : "foo")

The problem is that as the program is running, all it can ever normally
do is tag a value with the tags obtained from the path the program
followed on that run. So RTTI isn't, by itself, going to determine
anything similar to a static type for such a term, such as "string |
number".

One way to think about "runtime" types is as being equivalent to certain
kinds of leaves on a statically-typed program syntax tree. In an
expression like "x = 3", an inferring compiler might determine that 3 is
of type "integer". The tagged values which RTTI uses operate at this
level: the level of individual values.

However, as soon as you start dealing with other kinds of nodes in the
syntax tree -- nodes that don't represent literal values, or compound
nodes (that have children) -- the possibility arises that the type of
the overall expression will be more complex than that of a single value.
At that point, RTTI can't do what static types do.

Even in the simple "x = 3" case, a hypothetical inferring compiler might
notice that elsewhere in the same function, x is treated as a floating
point number, perhaps via an expression like "x = x / 2.0". According
to the rules of its type system, our language might determine that x has
type "float", as opposed to "number" or "integer". It might then either
treat the original "3" as a float, or supply a conversion when assigning
it to "x". (Of course, some languages might just give an error and
force you to be more explicit, but bear with me for the example - it's
after 3:30am again.)

Compare this to the dynamically-typed language: it sees "3" and,
depending on the language, might decide it's a "number" or perhaps an
"integer", and tag it as such. Either way, x ends up referring to that
tagged value. So what type is x? All you can really say, in the RTTI
case, is that x is a number or an integer, depending on the tag the
value has. There's no way it can figure out that x should be a float at
that point.

Of course, further down when it runs into the code "x = x / 2.0", x
might end up holding a value that's tagged as a float. But that only
tells you the value of x at some point in time, it doesn't help you with
the static type of x, i.e. a type that either encompasses all possible
values x could have during its lifetime, or alternatively determines how
values assigned to x should be treated (e.g. cast to float).
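
In runnable Python terms (with Python's tags standing in for the
hypothetical language's):

x = 3
print(type(x))  # <class 'int'> -- the tag describes the current value only
x = x / 2.0
print(type(x))  # <class 'float'> -- a different tag at a later point in time
# Neither observation yields a static type for x covering its whole lifetime.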

BTW, it's worth noting at this point, since it's implied by the last
paragraph, that a static type is only an approximation to the values
that a term can have during the execution of a program. Static types
can (often!) be too conservative, describing possible values that a
particular term couldn't actually ever have. This gives another hint as
to why RTTI can't be equivalent to static types. It's only ever dealing
with the concrete values right in front of it, it can't see the bigger
static picture.

Anton
Jun 23 '06 #264
David Hopwood wrote:
Rob Thorpe wrote:
David Hopwood wrote:
As far as I can tell, the people who advocate using "typed" and "untyped"
in this way are people who just want to be able to discuss all languages in
a unified terminological framework, and many of them are specifically not
advocates of statically typed languages.


It's easy to create a reasonable framework. My earlier posts show simple
ways of looking at it that could be further refined; I'm sure there are
others who have already done this.

The real objection to this was that latently/dynamically typed
languages have a place in it.


You seem to be very keen to attribute motives to people that are not apparent
from what they have said.


The term "dynamically typed" is well used and understood. The term
untyped is generally associated with languages that as you put it "have
no memory safety", it is a pejorative term. "Latently typed" is not
well used unfortunately, but more descriptive.

Most of the arguments above describe a static type system, then say
that this is what "type system" should mean, and finish by saying that
everything else should be considered untyped. This seems to me to be an
effort to associate dynamically typed languages with this pejorative
term.
But some of the advocates of statically
typed languages wish to lump these languages together with assembly
language a "untyped" in an attempt to label them as unsafe.


A common term for languages which have defined behaviour at run-time is
"memory safe". For example, "Smalltalk is untyped and memory safe."
That's not too objectionable, is it?


Memory safety isn't the whole point; it's only half of it. Typing
itself is the point. Regardless of memory safety, if you do a
calculation in a latently typed language, you can find the type of the
resulting object.

Jun 23 '06 #265
Marshall wrote:

What we generally (in programming) call variables are locals
and globals. If the languages supports an update operation
on those variables, then calling them variables makes sense.
But "variable" has become such a catch-all term that we call

public static final int FOO = 7;

a variable, even though it can never, ever vary.

That doesn't make any sense.


It does, because it is only a degenerate case. In general, you can have
something like

void f(int x)
{
    const int foo = x+1;
    //...
}

Now, foo is still immutable, it is a local, but it clearly also varies.

- Andreas
Jun 23 '06 #266
In comp.lang.functional Anton van Straaten <an***@appsolutions.com> wrote:
[...]
I reject this comparison. There's much more to it than that. The point
is that the reasoning which programmers perform when working with a
program in a latently-typed language bears many close similarities to
the purpose and behavior of type systems.

This isn't an attempt to jump on any bandwagons, it's an attempt to
characterize what is actually happening in real programs and with real
programmers. I'm relating that activity to type systems because that is
what it most closely relates to.

[...]

I think that we're finally getting to the bottom of things. While reading
your responses something became very clear to me: latent-typing and
latent-types are not a property of languages. Latent-typing, also known as
informal reasoning, is something that all programmers do as a normal part
of programming. To say that a language is latently-typed is to make a
category mistake, because latent-typing is not a property of languages.

A programmer, working in any language, whether typed or not, performs
informal reasoning. I think it is fair to say that there is a
correspondence between type theory and such informal reasoning. The
correspondence is like the correspondence between informal and formal
math. *But*, informal reasoning (latent-typing) is not a property of
languages.

An example of a form of informal reasoning that (practically) every
programmer does daily is termination analysis. There are type systems
that guarantee termination, but I think that is fair to say that it is not
yet understood how to make a practical general purpose language, whose
type system would guarantee termination (or at least I'm not aware of such
a language). It should also be clear that termination analysis need not
be done informally. Given a program, it may be possible to formally prove
that it terminates.
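
For instance, here is the everyday informal argument, sketched in Python;
a termination-checking type system would make precisely this
decreasing-measure reasoning formal:

def sum_to(n):
    # Terminates because a non-negative integer argument strictly
    # decreases on each recursive call, so the base case must be reached.
    if n <= 0:
        return 0
    return n + sum_to(n - 1)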

I'm now more convinced than ever that "(latently|dynamically)-typed
language" is an oxymoron. The terminology really needs to be fixed.

-Vesa Karvonen
Jun 23 '06 #267
Anton van Straaten wrote:

Languages with latent type systems typically don't include type
declarations in the source code of programs. The "static" type scheme
of a given program in such a language is thus latent, in the English
dictionary sense of the word, of something that is present but
undeveloped. Terms in the program may be considered as having static
types, and it is possible to infer those types, but it isn't necessarily
easy to do so automatically, and there are usually many possible static
type schemes that can be assigned to a given program.

Programmers infer and reason about these latent types while they're
writing or reading programs. Latent types become manifest when a
programmer reasons about them, or documents them e.g. in comments.


I very much agree with the observation that every programmer performs
"latent typing" in his head (although Pascal Constanza's seems to have
the opposite opinion).

But I also think that "latently typed language" is not a meaningful
characterisation. And for the very same reason! Since any programming
activity involves latent typing - naturally, even in assembler! - it
cannot be attributed to any language in particular, and is hence
useless to distinguish between them. (Even untyped lambda calculus
would not be a counter-example. If you really were to program in it,
you certainly would think along lines like "this function takes two
Church numerals and produces a third one".)

I hear you when you define latently typed languages as those that
support the programmer's latently typed thinking by providing dynamic
tag checks. But in the very same post (and others) you also explain in
length why these tags are far from being actual types. This seems a bit
contradictory to me.

As Chris Smith points out, these dynamic checks are basically a
necessity for a well-defined operational semantics. You need them
whenever you have different syntactic classes of values, but lack a
type system to preclude interference. They are just an encoding for
differentiating these syntactic classes. Their connection to types is
rather coincidental.

- Andreas

Jun 23 '06 #268
David Hopwood wrote:

"Values" refers to the concrete values existent in the semantics of a
programming language. This set is usually infinite, but basically fixed.
To describe the set of "values" of an abstract type you would need
"fresh" values that did not exist before (otherwise the abstract type
would be equivalent to some already existent type). So you'd need at
least a theory for name generation or something similar to describe
abstract types in a types-as-sets metaphor.


Set theory has no difficulty with this. It's common, for example, to see
"the set of strings representing propositions" used in treatments of
formal systems.


Oh, I was not saying that this particular aspect cannot be described in
set theory (that was a different argument, about different issues). Just
that you cannot naively equate types with a set of underlying values,
which is what is usually meant by the types-are-sets metaphor - to
capture something like type abstraction you need to do more. (Even then
it might be arguable if it really describes the same thing.)

- Andreas
Jun 23 '06 #269
Vesa Karvonen wrote:
....
An example of a form of informal reasoning that (practically) every
programmer does daily is termination analysis. There are type systems
that guarantee termination, but I think it is fair to say that it is not
yet understood how to make a practical general-purpose language whose
type system would guarantee termination (or at least I'm not aware of such
a language). It should also be clear that termination analysis need not
be done informally. Given a program, it may be possible to formally prove
that it terminates.


To make the halting problem decidable one would have to do one of two
things: Depend on memory size limits, or have a language that really is
less expressive, at a very deep level, than any of the languages
mentioned in the newsgroups header for this message.

A language for which the halting problem is decidable must also be a
language in which it is impossible to simulate an arbitrary Turing
machine (TM). Otherwise, one could decide the notoriously undecidable TM
halting problem by generating a language X program that simulates the TM
and deciding whether the language X program halts.
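
The reduction can be sketched as follows (in Python only for
concreteness; both helpers are hypothetical, and x_halts cannot actually
exist for such a language X, which is exactly the point):

def translate_tm_to_x(tm, tape):
    raise NotImplementedError("hypothetical TM-to-X compiler")

def x_halts(prog):
    raise NotImplementedError("hypothetical halting decider for language X")

def tm_halts(tm, tape):
    # If x_halts existed, this would decide the undecidable TM halting problem.
    return x_halts(translate_tm_to_x(tm, tape))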

One way out might be to depend on the boundedness of physical memory. A
language with a fixed maximum memory size cannot simulate an arbitrary
TM. However, the number of states for a program is so great that a
method that depends on its finiteness, but would not work for an
infinite memory model, is unlikely to be practical.

Patricia
Jun 23 '06 #270
Matthias Blume wrote:
Pascal Costanza <pc@p-cos.net> writes:
Chris Smith wrote:
While this effort to salvage the term "type error" in dynamic
languages is interesting, I fear it will fail. Either we'll all
have to admit that "type" in the dynamic sense is a psychological
concept with no precise technical definition (as was at least hinted
by Anton's post earlier, whether intentionally or not) or someone is
going to have to propose a technical meaning that makes sense,
independently of what is meant by "type" in a static system.

What about this: You get a type error when the program attempts to
invoke an operation on values that are not appropriate for this
operation.

Examples: adding numbers to strings; determining the string-length of
a number; applying a function on the wrong number of parameters;
applying a non-function; accessing an array with out-of-bound indexes;
etc.


Yes, the phrase "runtime type error" is actually a misnomer. What one
usually means by that is a situation where the operational semantics
is "stuck", i.e., where the program, while not yet arrived at what's
considered a "result", cannot make any progress because the current
configuration does not match any of the rules of the dynamic
semantics.

The reason why we call this a "type error" is that such situations are
precisely the ones we want to statically rule out using sound static
type systems. So it is a "type error" in the sense that the static
semantics was not strong enough to rule it out.
Sending a message to an object that does not understand that message
is a type error. The "message not understood" machinery can be seen
either as a way to escape from this type error in case it occurs and
allow the program to still do something useful, or to actually remove
(some) potential type errors.


I disagree with this. If the program keeps running in a defined way,
then it is not what I would call a type error. It definitely is not
an error in the sense I described above.


If your view of a running program is that it is a "closed" system, then
you're right. However, maybe there are several layers involved, so what
appears to be a well-defined behavior from the outside may still be
regarded as a type error internally.

A very obvious example of this is when you run a program in a debugger.
There are two levels involved here: the program signals a type error,
but that doesn't mean that the system as a whole is stuck. Instead, the
debugger takes over and offers ways to deal with the type error. The
very same program run without debugging support would indeed simply be
stuck in the same situation.

So to rephrase: It depends on whether you use the "message not
understood" machinery as a way to provide well-defined behavior for the
base level, or rather as a means to deal with an otherwise unanticipated
situation. In the former case it extends the language to remove certain
type errors, in the latter case it provides a kind of debugging facility
(and it indeed may be a good idea to deploy programs with debugging
facilities, and not only use debugging tools at development time).
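
To make the first case concrete, here is a rough Python analogue of
base-level "message not understood" machinery (the class and message
names are invented for illustration, with __getattr__ standing in for
the hook):

class Forgiving:
    def __getattr__(self, name):
        # Instead of the program getting stuck on an unknown message,
        # answer with a handler that reports the message and continues.
        def handler(*args, **kwargs):
            print("message not understood:", name)
        return handler

Forgiving().frobnicate(42)   # prints a note; execution remains well-defined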

This is actually related to the notion of reflection, as coined by Brian
C. Smith. In a reflective architecture, you distinguish between various
interpreters, each of which interprets the program at the next level. A
debugger is a program that runs at a different level than a base program
that it debugs. However, the reflective system as a whole is "just" a
single program seen from the outside (with one interpreter that runs the
whole reflective tower). This distinction between the internal and the
external view of a reflective system was already made by Brian Smith.
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 23 '06 #271
Chris Smith wrote:
Pascal Costanza <pc@p-cos.net> wrote:
What about this: You get a type error when the program attempts to
invoke an operation on values that are not appropriate for this operation.

Examples: adding numbers to strings; determining the string-length of a
number; applying a function on the wrong number of parameters; applying
a non-function; accessing an array with out-of-bound indexes; etc.
Hmm. I'm afraid I'm going to be picky here. I think you need to
clarify what is meant by "appropriate".


No, I cannot be a lot clearer here. What operations are appropriate for
what values largely depends on the intentions of a programmer. Adding a
number to a string is inappropriate, no matter how a program behaves
when this actually occurs (whether it continues to execute the operation
blindly, throws a continuable exception, or just gets stuck).
If you mean "the operation will
not complete successfully" as I suspect you do, then we're closer...


No, we're not. You're giving a purely technical definition here, that
may or may not relate to the programmer's (or "designer's")
understanding of the domain.
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 23 '06 #272
Marshall wrote:
I am sceptical of the idea that when programming in a dynamically
typed language one doesn't have to think about both models as well.
I don't have a good model of the mental process of working
in a dynamically typed language, but how could that be the case?
(I'm not asking rhetorically.) Do you then run your program over
and over, mechanically correcting the code each time you discover
a type error? In other words, if you're not thinking of the type model,
are you using the runtime behavior of the program as an assistant,
the way I use the static analysis of the program as an assistant?
Yes.
I don't accept the idea about pairing the appropriateness of each
system according to whether one is doing exploratory programming.
I do exploratory programming all the time, and I use the static type
system as an aide in doing so. Rather I think this is just another
manifestation of the differences in the mental processes between
static typed programmers and dynamic type programmers, which
we are beginning to glimpse but which is still mostly unknown.
Probably.
Oh, and I also want to say that of all the cross-posted mega threads
on static vs. dynamic typing, this is the best one ever. Most info;
least flames. Yay us!


Yay! :)
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 23 '06 #273
Marshall wrote:
Pascal Costanza wrote:
Consider a simple expression like 'a + b': In a dynamically typed
language, all I need to have in mind is that the program will attempt to
add two numbers. In a statically typed language, I additionally need to
know that there must be a guarantee that a and b will always hold numbers.
I still don't really see the difference.

I would not expect that the dynamic programmer will be
thinking that this code will have two numbers most of the
time but sometimes not, and fail. I would expect that in both
static and dynamic, the thought is that that code is adding
two numbers, with the difference being the static context
gives one a proof that this is so.


There is a third option: it may be that at the point where I am writing
this code, I simply don't care yet whether a and b will always be
numbers. In case something other than numbers pops up, I can then make a
decision how to proceed from there.
In this simple example,
the static case is better, but this is not free, and the cost
of the static case is evident elsewhere, but maybe not
illuminated by this example.


Yes, maybe the example is not the best one. This kind of example,
however, occurs quite often when programming in an object-oriented
style, where you don't know yet what objects will and will not respond
to a message / generic function. Even in the example above, it could be
that you can give an appropriate definition for + for objects other than
numbers.
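
For instance (a made-up Python sketch; the Money class is purely
illustrative), an 'a + b' originally written with numbers in mind can
later be given meaning for new kinds of objects:

class Money:
    def __init__(self, cents):
        self.cents = cents
    def __add__(self, other):
        # an after-the-fact definition of + for a non-number type
        return Money(self.cents + other.cents)

print((Money(150) + Money(250)).cents)   # 400
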
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 23 '06 #274
>>>>> "Marshall" <ma*************@gmail.com> (M) wrote:
M> Torben Ægidius Mogensen wrote:

That's not true. ML has variables in the mathematical sense of
variables -- symbols that can be associated with different values at
different times. What it doesn't have is mutable variables (though it
can get the effect of those by having variables be immutable
references to mutable memory locations).
M> While we're on the topic of terminology, here's a pet peeve of
M> mine: "immutable variable."
M> immutable = can't change
M> vary-able = can change
M> Clearly a contradiction in terms.


I would say that immutable = 'can't *be* changed' rather than 'can't
change'. But I am not a native English speaker.

Compare with this: the distance of the Earth to Mars is variable (it
varies), but it is also immutable (we can't change it).
--
Piet van Oostrum <pi**@cs.uu.nl>
URL: http://www.cs.uu.nl/~piet [PGP 8DAE142BE17999C4]
Private email: pi**@vanoostrum.org
Jun 23 '06 #275
Vesa Karvonen wrote:
I think that we're finally getting to the bottom of things. While reading
your responses something became very clear to me: latent-typing and
latent-types are not a property of languages. Latent-typing, also known as
informal reasoning, is something that all programmers do as a normal part
of programming. To say that a language is latently-typed is to make a
category mistake, because latent-typing is not a property of languages.

A programmer, working in any language, whether typed or not, performs
informal reasoning. I think it is fair to say that there is a
correspondence between type theory and such informal reasoning. The
correspondence is like the correspondence between informal and formal
math. *But*, informal reasoning (latent-typing) is not a property of
languages.
Well, it's obviously the case that latent types as I've described them
are not part of the usual semantics of dynamically typed languages. In
other messages, I've mentioned types like "number -> number" which have
no meaning in a dynamically typed language. You can only write them in
comments (unless you implement some kind of type handling system), and
language implementations aren't aware of such types.

OTOH, a programmer reasoning explicitly about such types, writing them
in comments, and perhaps using assertions to check them has, in a sense,
defined a language. Having done that, and reasoned about the types
in his program, he manually erases them, "leaving" code written in the
original dynamically-typed language. You can think of it as though it
were generated code, complete with comments describing types, injected
during the erasure process.
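
A small Python illustration of that style (the comment carries the
latent type; the assertion checks it at run time):

def double(x):
    # latent type: number -> number
    assert isinstance(x, (int, float)), "latent type violated: not a number"
    return x * 2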

So, to address your category error objection, I would say that latent
typing is a property of latently-typed languages, which are typically
informally-defined supersets of what we know as dynamically-typed languages.

I bet that doesn't make you happy, though. :D

Still, if that sounds a bit far-fetched, let me start again at ground
level, with a Haskell vs. Scheme example:

let double x = x * 2

vs.:

(define (double x) (* x 2))

Programmers in both languages do informal reasoning to figure out the
type of 'double'. I'm assuming that the average Haskell programmer
doesn't write out a proof whenever he wants to know the type of a term,
and doesn't have a compiler handy.

But the Haskell programmer's informal reasoning takes place in the
context of a well-defined formal type system. He knows what the "type
of double" means: the language defines that for him.

The type-aware Scheme programmer doesn't have that luxury: before he can
talk about types, he has to invent a type system, something to give
meaning to an expression such as "number -> number". Performing that
invention gives him types -- albeit informal types, a.k.a. latent types.

In the Haskell case, the types are a property of the language. If
you're willing to acknowledge the existence of something like latent
types, what are they a property of? Just the amorphous informal cloud
which surrounds dynamically-typed languages? Is that a satisfactory
explanation of these two quite similar examples?

I want to mention two other senses in which latent types become
connected to real languages. That doesn't make them properties of the
formal semantics of the language, but the connection is a real one at a
different level.

The first is that in a language without a rich formal type system,
informal reasoning outside of the formal type system becomes much more
important. Conversely, in Haskell, even if you accept the existence of
latent types, they're close enough to static types that it's hardly
necessary to consider them. This is why latent types are primarily
associated with languages without rich formal type systems.

The second connection is via tags: these are part of the definition of a
dynamically-typed language, and if the programmer is reasoning
explicitly about latent types, tags are a useful tool to help ensure
that assumptions about types aren't violated. So this is a connection
between a feature in the semantics of the language, and these
extra-linguistic latent types.
An example of a form of informal reasoning that (practically) every
programmer does daily is termination analysis. There are type systems
that guarantee termination, but I think it is fair to say that it is not
yet understood how to make a practical general-purpose language whose
type system would guarantee termination (or at least I'm not aware of such
a language). It should also be clear that termination analysis need not
be done informally. Given a program, it may be possible to formally prove
that it terminates.
Right. And this is partly why talking about latent types, as opposed to
the more general "informal reasoning", makes sense: because latent types
are addressing the same kinds of things that static types can capture.
Type-like things.
I'm now more convinced than ever that "(latently|dynamically)-typed
language" is an oxymoron. The terminology really needs to be fixed.


I agree that fixing is needed. The challenge is to do it in a way that
accounts for, rather than simply ignores, the many informal correlations
to formal type concepts that exist in dynamically-typed languages.
Otherwise, the situation won't improve.

Anton
Jun 23 '06 #276
Marshall wrote:
Joe Marshall wrote:
That's the important point: I want to run broken code.


I want to make sure I understand. I can think of several things
you might mean by this. It could be:
1) I want to run my program, even though I know parts of it
are broken, because I think there are parts that are not broken
and I want to try them out.
2) I want to run my program, even though it is broken, and I
want to run right up to a broken part and trap there, so I can
use the runtime facilities of the language to inspect what's
going on.

I want to run
as much of the working fragments as I can, and I want a `safety net' to
prevent me from performing undefined operations, but I want the safety
net to catch me at the *last* possible moment.


This statement is interesting, because the conventional wisdom (at
least as I'm used to hearing it) is that it is best to catch bugs
at the *first* possible moment. But I think maybe we're talking
about different continua here. The last last last possible moment
is after the software has shipped to the customer, and I'm pretty
sure that's not what you mean. I think maybe you mean something
more like 2) above.


Nowadays, we have more options wrt what it means to "ship" code. It
could be that your program simply runs as a (web) service to which you
have access even after the customer has started to use the program. See
http://www.paulgraham.com/road.html for a good essay on this idea.
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 23 '06 #277
Vesa Karvonen wrote:
In comp.lang.functional Anton van Straaten <an***@appsolutions.com> wrote:
[...]
I reject this comparison. There's much more to it than that. The point
is that the reasoning which programmers perform when working with a
program in a latently-typed language bears many close similarities to
the purpose and behavior of type systems.

This isn't an attempt to jump on any bandwagons, it's an attempt to
characterize what is actually happening in real programs and with real
programmers. I'm relating that activity to type systems because that is
what it most closely relates to.

[...]

I think that we're finally getting to the bottom of things. While reading
your responses something became very clear to me: latent-typing and
latent-types are not a property of languages. Latent-typing, also known as
informal reasoning, is something that all programmers do as a normal part
of programming. To say that a language is latently-typed is to make a
category mistake, because latent-typing is not a property of languages.


I disagree with you and agree with Anton. Here, it is helpful to
understand the history of Scheme a bit: parts of its design are a
reaction to what Schemers perceived as having failed in Common Lisp (and
other previous Lisp dialects).

One particularly illuminating example is the treatment of nil in Common
Lisp. That value is a very strange beast in Common Lisp because it
stands for several concepts at the same time: most importantly the empty
list and the boolean false value. Its type is also "interesting": it is
both a list and a symbol at the same time. It is also "interesting" that
its quoted value is equivalent to the value nil itself. This means that
the following two forms are equivalent:

(if nil 42 4711)
(if 'nil 42 4711)

Both forms evaluate to 4711.

It's also the case that taking the car or cdr (first or rest) of nil
doesn't give you an error, but simply returns nil as well.

The advantage of this design is that it allows you to express a lot of
code in a very compact way. See
http://www.apl.jhu.edu/~hall/lisp/Scheme-Ballad.text for a nice
illustration.

The disadvantage is that it is mostly impossible to have a typed view of
nil, at least one that clearly disambiguates all the cases. There are
also other examples where Common Lisp conflates different types, and
sometimes only for special cases. [1]

Now compare this with the Scheme specification, especially this section:
http://www.schemers.org/Documents/St...ml#%25_sec_3.2

This clearly deviates strongly from Common Lisp (and other Lisp
dialects). The emphasis here is on a clear separation of all the types
specified in the Scheme standard, without any exception. This is exactly
what makes it straightforward in Scheme to have a latently typed view of
programs, in the sense that Anton describes. So latent typing is a
property that can at least be enabled / supported by a programming
language, so it is reasonable to talk about this as a property of some
dynamically typed languages.

Pascal

[1] Yet Common Lisp allows you to write beautiful code, more often than
not especially _because_ of these "weird" conflations, but that's a
different topic.

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 23 '06 #278
Patricia Shanahan wrote:
Vesa Karvonen wrote:
...
An example of a form of informal reasoning that (practically) every
programmer does daily is termination analysis. There are type systems
that guarantee termination, but I think it is fair to say that it is not
yet understood how to make a practical general-purpose language whose
type system would guarantee termination (or at least I'm not aware of
such a language). It should also be clear that termination analysis need not
be done informally. Given a program, it may be possible to formally
prove
that it terminates.


To make the halting problem decidable one would have to do one of two
things: Depend on memory size limits, or have a language that really is
less expressive, at a very deep level, than any of the languages
mentioned in the newsgroups header for this message.


Not quite. See http://en.wikipedia.org/wiki/ACL2
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 23 '06 #279
Andreas Rossberg wrote:
Chris Uppal wrote:
> It's worth noting, too, that (in some sense) the type of an object
> can change over time[*].

No. Since a type expresses invariants, this is precisely what may
*not* happen. If certain properties of an object may change then the
type of
the object has to reflect that possibility. Otherwise you cannot
legitimately call it a type.


Well, it seems to me that you are /assuming/ a notion of what kinds of
logic can be called type (theories), and I don't share your
assumptions. No offence intended.


OK, but can you point me to any literature on type theory that makes a
different assumption?


'Fraid not. (I'm not a type theorist -- for all I know there may be lots, but
my suspicion is that they are rare at best.)

But perhaps I shouldn't have used the word theory at all. What I mean is that
there is one or more logic of type (informal or not -- probably informal) with
respect to which the object in question has changed its categorisation. If no
existing type /theory/ (as devised by type theorists) can handle that case,
then that is a deficiency in the set of existing theories -- we need newer and
better ones.

But, as a sort of half-way, semi-formal, example: consider the type environment
in a Java runtime. The JVM does formal type-checking of classfiles as it loads
them. In most ways that checking is static -- it's treating the bytecode as
program text and doing a static analysis on it before allowing it to run (and
rejecting what it can't prove to be acceptable by its criteria). However, it
isn't /entirely/ static because the collection of classes varies at runtime in
a (potentially) highly dynamic way. So it can't really examine the "whole"
text of the program -- indeed there is no such thing. So it ends up with a
hybrid static/dynamic type system -- it records any assumptions it had to make
in order to find a proof of the acceptability of the new code, and if (sometime
in the future) another class is proposed which violates those assumptions, then
that second class is rejected.

I see no reason,
even in practice, why a static analysis should not be able to see that
the set of acceptable operations (for some definition of acceptable)
for some object/value/variable can be different at different times in
the execution.


Neither do I. But what is wrong with a mutable reference-to-union type,
as I suggested? It expresses this perfectly well.


Maybe I misunderstood what you meant by union type. I took it to mean that the
type analysis didn't "know" which of the two types was applicable, and so would
reject both (or maybe accept both?). E.g. if at instant A some object, obj,
was in a state where it responds to #aMessage, but not #anotherMessage; and
at instant B it is in a state where it responds to #anotherMessage but not
#aMessage. In my (internal and informal) type logic, I make the following
judgements:

In code which will be executed at instant A
obj aMessage. "type correct"
obj anotherMessage. "type incorrect"

In code which will be executed at instant B
obj aMessage. "type incorrect"
obj anotherMessage. "type correct"

I don't see how a logic with no temporal element can arrive at all four of
those judgements, whatever it means by a union type.

-- chris
Jun 23 '06 #280
Eliot Miranda wrote:

[me:]
Taking Smalltalk /specifically/, there is a definite sense in which it
is typeless -- or trivially typed -- in that in that language there are
no[*] operations which are forbidden[**],


Come on, Chris U. One has to distinguish an attempt to invoke an
operation from its being carried out. There is nothing in Smalltalk to
stop one attempting to invoke any "operation" on any object. But one
can only actually carry out operations on objects that implement them.
So, apart from the argument about inadvertent operation name overloading
(which is important, but avoidable), Smalltalk is in fact
strongly-typed, but not statically strongly-typed.


What are you doing /here/, Eliot? This is Javaland! Smalltalk is thatta
way ->

;-)
But this discussion has been all about /whether/ it is ok to apply the notion
of (strong) typing to what runtime-checked languages do. We all agree that
the checks happen, but the question is whether it is
reasonable/helpful/legitimate to extend the language of static checking to the
dynamic case. (I'm on the side which says yes, but good points have been made
against it).

The paragraph you quoted doesn't represent most of what I have been saying --
it was just a side-note looking at one small aspect of the issue from a
different angle.

-- chris
Jun 23 '06 #281
Anton van Straaten wrote:
In that case, you could say that the conceptual type is different than
the inferred static type. But most of the time, the human is reasoning
about pretty much the same types as the static types that Haskell
infers. Things would get a bit confusing otherwise.


Or any mechanised or formalised type system, for any language. If a system
doesn't match pretty closely with at least part of the latent type systems (in
your sense) used by the programmers, then that type system is useless.

(I gather that it took, or maybe is still taking, theorists a while to get to
grips with the informal type logics which were obvious to working OO
programmers.)

-- chris
Jun 23 '06 #282
David Hopwood wrote:
But some of the advocates of statically
typed languages wish to lump these languages together with assembly
language a "untyped" in an attempt to label them as unsafe.


A common term for languages which have defined behaviour at run-time is
"memory safe". For example, "Smalltalk is untyped and memory safe."
That's not too objectionable, is it?


I find it too weak, as if to say: "well, ok, it can't actually corrupt memory
as such, but the program logic is still apt to go all over the shop"...

-- chris
Jun 23 '06 #283
Chris Smith wrote:

[me:]
I think we're agreed (you and I anyway, if not everyone in this thread)
that we don't want to talk of "the" type system for a given language.
We want to allow a variety of verification logics. So a static type
system is a logic which can be implemented based purely on the program
text without making assumptions about runtime events (or making
maximally pessimistic assumptions -- which comes to the same thing
really). I suggest that a "dynamic type system" is a verification
logic which (in principle) has available as input not only the program
text, but also the entire history of the program execution up to the
moment when the to-be-checked operation is invoked.
I am trying to understand how the above statement about dynamic types
actually says anything at all. So a dynamic type system is a system of
logic by which, given a program and a path of program execution up to
this point, verifies something. We still haven't defined "something",
though.


That was the point of my first sentence (quoted above). I take it, and I
assumed that you shared my view, that there is no single "the" type system --
that /a/ type system will yield judgements on matters which it has been
designed to judge. So unless we either nominate a specific type system or
choose what judgements we want to make (today) any discussion of types is
necessarily parameterised on the class(es) of <Judgements>. So, I don't --
can't -- say /which/ judgements my "dynamic type systems" will make. They may
be about nullablity, they may be about traditional "type", they may be about
mutability...

When we look at a specific language (and its implementation), then we can
induce the logic(s) that whatever dynamic checks it applies define.
Alternatively we can consider other "dynamic type systems" which we would like
to formalise and mechanise, but which are not built into our language of
choice.

We also haven't defined what happens if that verification
fails.
True, and a good point. But note that it only applies to systems which are
actually implemented in code (or which are intended to be so).

As a first thought, I suggest that a "dynamic type system" should specify a
replacement action (which includes, but is not limited to, terminating
execution). That action is taken /instead/ of the rejected one. E.g. we don't
actually read off the end of the array, but instead a signal is raised. (An
implementation might, in some cases, find it easier to implement the checks by
allowing the operation to fail, and then backtracking to "pretend" that it had
never happened, but that's irrelevant here). The replacement action must -- of
course -- be valid according to the rules of the type system.
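
For what it's worth, Python's list indexing behaves exactly this way:

xs = [1, 2, 3]
try:
    xs[10]            # the out-of-bounds read itself never takes place...
except IndexError:
    print("handled")  # ...raising IndexError is the replacement action taken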

Incidentally, using that idea, we get a fairly close analogy to the difference
between strong and weak static type systems. If the "dynamic type system"
doesn't specify a valid replacement action, or if that action is not guaranteed
to be taken, then it seems that the type system or the language implementation
is "weak" in very much the same sort of way as -- say -- the 'C' type system is
weak and/or weakly enforced.

I wonder whether that way of looking at it -- the "error" never happens since
it is replaced by a valid operation -- puts what I want to call dynamic type
systems onto a closer footing with static type systems. Neither allows the
error to occur.

(Of course, /I/ -- the programmer -- have made a type error, but that's
different thing.)

In other words, I think that everything so far is essentially just
defining a dynamic type system as equivalent to a formal semantics for a
programming language, in different words that connote some bias toward
certain ways of looking at possibilities that are likely to lead to
incorrect program behavior. I doubt that will be an attractive
definition to very many people.
My only objections to this are:

a) I /want/ to use the word "type" to describe the kind of reasoning I do (and
some of the mistakes I make)

b) I want to separate the systems of reasoning (whether formal or informal,
static or dynamic, implemented or merely conceptual, and /whatever/ we call 'em
;-) from the language semantics. I have no objection to <some type system>
being used as part of a language specification, but I don't want to restrict
types to that.

Note that not all errors that I would want to call type errors are
necessarily caught by the runtime -- it might go happily ahead never
realising that it had just violated one of the constraints of one of the
logics I use to reason about the program. What's known as an
undetected bug -- but just because the runtime doesn't see it, doesn't
mean that I wouldn't say I'd made a type error. (The same applies to
any specific static type system too, of course.)


In static type system terminology, this quite emphatically does NOT
apply. There may, of course, be undetected bugs, but they are not type
errors. If they were type errors, then they would have been detected,
unless the compiler is broken.


Sorry, I wasn't clear. I was thinking here of my internal analysis (which I
want to call a type system too). Most of what I was trying to say is that I
don't expect a "dynamic type system" to be complete, any more than a static
one. I also wanted to emphasise that I am happy to talk about type systems
(dynamic or not) which have not been implemented as code (so they yield
judgements, and provide a framework for understanding the program, but nothing
in the computer actually checks them for me).

If you are trying to identify a set of dynamic type errors, in a way
that also applies to statically typed languages, then I will read on.


Have I answered that now ? /Given/ a set of operations which I want to forbid,
I would like to say that a "dynamic type system" which prevents them executing
is doing very much the same job as a static type system which stops the code
compiling.

Of course, we can talk about what kinds of operations we want to forbid, but
that seems (to me) to be orthogonal to this discussion. Indeed, the question
of dynamic/static is largely irrelevant to a discussion of what operations we
want to police, except insofar as certain checks might require information
which isn't available to a (feasible) static theorem prover.

-- chris


Jun 23 '06 #284
ro******@ps.uni-sb.de wrote:
I very much agree with the observation that every programmer performs
"latent typing" in his head
Great!
(although Pascal Costanza seems to have
the opposite opinion).
I'll have to catch up on that.
But I also think that "latently typed language" is not a meaningful
characterisation. And for the very same reason! Since any programming
activity involves latent typing - naturally, even in assembler! - it
cannot be attributed to any language in particular, and is hence
useless to distinguish between them. (Even untyped lambda calculus
would not be a counter-example. If you really were to program in it,
you certainly would think along lines like "this function takes two
Church numerals and produces a third one".)
Vesa raised a similar objection in his post 'Saying "latently typed
language" is making a category mistake'. I've made some relevant
responses to that post.

I agree that there's a valid point in the sense that latent types are
not a property of the semantics of the languages they're associated with.

But to take your example, I've had the pleasure of programming a little
in untyped lambda calculus. I can tell you that not having tags is
quite problematic. You certainly do perform latent typing in your head,
but without tags, the language doesn't provide any built-in support for
it. You're on your own.

I'd characterize this as being similar to other situations in which a
language has explicit support for some programming style (e.g. FP or
OO), vs. not having such support, so that you have to take steps to fake
it. The style may be possible to some degree in either case, but it's
much easier if the language has explicit support for it.
Dynamically-typed languages provide support for latent typing. Untyped
lambda calculus and assembler don't provide such support, built-in.

However, I certainly don't want to add to confusion about things like
this. I'm open to other ways to describe these things. Part of where
"latently-typed language" comes from is as an alternative to
"dynamically-typed language" -- an alternative with connotations that
I'm suggesting should be more compatible with type theory.
I hear you when you define latently typed languages as those that
support the programmer's latently typed thinking by providing dynamic
tag checks. But in the very same post (and others) you also explain in
length why these tags are far from being actual types. This seems a bit
contradictory to me.
I don't see it as contradictory. Tags are a mechanism. An individual
tag is not a type. But together, tags help check types. More below:
As Chris Smith points out, these dynamic checks are basically a
necessity for a well-defined operational semantics. You need them
whenever you have different syntactic classes of values, but lack a
type system to preclude interference. They are just an encoding for
differentiating these syntactic classes. Their connection to types is
rather coincidental.


Oooh, I think I'd want to examine that "coincidence" very closely. For
example, *why* do we have different syntactic classes of values in the
first place? I'd say that we have them precisely to support type-like
reasoning about programs, even in the absence of a formal static type
system.

If you're going to talk about "syntactic classes of values" in the
context of latent types, you need to consider how they relate to latent
types. Syntactic classes of values relate to latent types in a very
similar way to that in which static types do.

One difference is that since latent types are not automatically
statically checked, checks have to happen in some other way. That's
what checks of tags are for.

Chris Smith's observation was made in a kind of deliberate void: one in
which the existence of anything like latent types is discounted, on the
grounds that they're informal. If you're willing to acknowledge the
idea of latent types, then you can reinterpret the meaning of tags in
the context of latent types. In that context, tags are part of the
mechanism which checks latent types.

Anton
Jun 23 '06 #285
Pascal Costanza <pc@p-cos.net> writes:
Patricia Shanahan wrote:
Vesa Karvonen wrote:
...
An example of a form of informal reasoning that (practically) every
programmer does daily is termination analysis. There are type systems
that guarantee termination, but I think it is fair to say that it is not
yet understood how to make a practical general-purpose language whose
type system would guarantee termination (or at least I'm not aware
of such a language). It should also be clear that termination analysis need not
be done informally. Given a program, it may be possible to
formally prove
that it terminates.

To make the halting problem decidable one would have to do one of
two
things: Depend on memory size limits, or have a language that really is
less expressive, at a very deep level, than any of the languages
mentioned in the newsgroups header for this message.


Not quite. See http://en.wikipedia.org/wiki/ACL2


What do you mean "not quite"? Of course, Patricia is absolutely
right. Termination-guaranteeing languages are fundamentally less
expressive than Turing-complete languages. ACL2 was not mentioned in
the newsgroup header.
Jun 23 '06 #286
Chris Uppal wrote:

Well, it seems to me that you are /assuming/ a notion of what kinds of
logic can be called type (theories), and I don't share your
assumptions. No offence intended.
OK, but can you point me to any literature on type theory that makes a
different assumption?


'Fraid not. (I'm not a type theorist -- for all I know there may be lots, but
my suspicion is that they are rare at best.)


I would suspect the same :-). And the reason is that "type" has a
well-established use in theory. It is not just my "assumption"; it has
been established practice for 80 or so years. So far, this discussion has
not revealed the existence of any formal work that would provide a
theory of "dynamic types" in the sense it is used to characterise
"dynamically typed" languages.

So what you are suggesting may be an interesting notion, but it's not
what is called "type" in a technical sense. Overloading the same term
for something different is not a good idea if you want to avoid
confusion and misinterpretations.
But, as a sort of half-way, semi-formal, example: consider the type environment
in a Java runtime. The JVM does formal type-checking of classfiles as it loads
them. In most ways that checking is static -- it's treating the bytecode as
program text and doing a static analysis on it before allowing it to run (and
rejecting what it can't prove to be acceptable by its criteria). However, it
isn't /entirely/ static because the collection of classes varies at runtime in
a (potentially) highly dynamic way. So it can't really examine the "whole"
text of the program -- indeed there is no such thing. So it ends up with a
hybrid static/dynamic type system -- it records any assumptions it had to make
in order to find a proof of the acceptability of the new code, and if (sometime
in the future) another class is proposed which violates those assumptions, then
that second class is rejected.
Incidentally, I know this scenario very well, because that's what I'm
looking at in my thesis :-). All of this can easily be handled
coherently with well-established type machinery and terminology. No need
to bend existing concepts and language, no need to resort to "dynamic
typing" in the way Java does it either.
In code which will be executed at instant A
obj aMessage. "type correct"
obj anotherMessage. "type incorrect"

In code which will be executed at instant B
obj aMessage. "type incorrect"
obj anotherMessage. "type correct"

I don't see how a logic with no temporal element can arrive at all four of
those judgements, whatever it means by a union type.


I didn't say that the type system cannot have temporal elements. I only
said that a type itself cannot change over time. A type system states
propositions about a program; a type assignment *is* a proposition. A
proposition is either true or false (or undecidable), but it is so
persistently, considered under the same context. So if you want a type
system to capture temporal elements, then these must be part of a type
itself. You can introduce types with implications like "in context A,
this is T, in context B this is U". But the whole quoted part then is
the type, and it is itself invariant over time.
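
A tiny Python rendering of that idea, using typing.Union purely as
notation (the annotation documents the invariant type; Python does not
enforce it):

from typing import Union

ref: Union[int, str] = 5   # at instant A the reference holds an int
ref = "five"               # at instant B it holds a str
# The type Union[int, str] itself never changed; only the tagged contents did.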

- Andreas
Jun 23 '06 #287
Matthias Blume wrote:
Pascal Costanza <pc@p-cos.net> writes:
Patricia Shanahan wrote:
Vesa Karvonen wrote:
...
An example of a form of informal reasoning that (practically) every
programmer does daily is termination analysis. There are type systems
that guarantee termination, but I think it is fair to say that it is not
yet understood how to make a practical general-purpose language whose
type system would guarantee termination (or at least I'm not aware
of such a language). It should also be clear that termination analysis need not
be done informally. Given a program, it may be possible to
formally prove
that it terminates.
To make the halting problem decidable one would have to do one of
two
things: Depend on memory size limits, or have a language that really is
less expressive, at a very deep level, than any of the languages
mentioned in the newsgroups header for this message.

Not quite. See http://en.wikipedia.org/wiki/ACL2


What do you mean "not quite"? Of course, Patricia is absolutely
right. Termination-guaranteeing languages are fundamentally less
expressive than Turing-complete languages. ACL2 was not mentioned in
the newsgroup header.


ACL2 is a subset of Common Lisp, and programs written in ACL2 are
executable in Common Lisp. comp.lang.lisp is not only about Common Lisp,
but even if it were, ACL2 would fit.
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 23 '06 #288
Vesa Karvonen wrote:
In comp.lang.functional Anton van Straaten <an***@appsolutions.com>
wrote: [...]
I reject this comparison. There's much more to it than that. The


I think that we're finally getting to the bottom of things. While
reading your responses something became very clear to me:
latent-typing and latent-types are not a property of languages.
Latent-typing, also known as informal reasoning, is something that


And what does this have to do with Perl? Are you an alter ego of Xah Lee who
spills his 'wisdom' across any newsgroup he can find?

jue
Jun 23 '06 #289
Pascal Costanza wrote:
Matthias Blume wrote:
Pascal Costanza <pc@p-cos.net> writes:
Patricia Shanahan wrote:
Vesa Karvonen wrote:
...
> An example of a form of informal reasoning that (practically) every
> programmer does daily is termination analysis. There are type systems
> that guarantee termination, but I think it is fair to say that it is not
> yet understood how to make a practical general-purpose language whose
> type system would guarantee termination (or at least I'm not aware
> of such a language). It should also be clear that termination analysis
> need not
> be done informally. Given a program, it may be possible to
> formally prove
> that it terminates.
To make the halting problem decidable one would have to do one of
two
things: Depend on memory size limits, or have a language that really is
less expressive, at a very deep level, than any of the languages
mentioned in the newsgroups header for this message.
Not quite. See http://en.wikipedia.org/wiki/ACL2


What do you mean "not quite"? Of course, Patricia is absolutely
right. Termination-guaranteeing languages are fundamentally less
expressive than Turing-complete languages. ACL2 was not mentioned in
the newsgroup header.


ACL2 is a subset of Common Lisp, and programs written in ACL2 are
executable in Common Lisp. comp.lang.lisp is not only about Common Lisp,
but even if it were, ACL2 would fit.


To prove Turing-completeness of ACL2 from Turing-completeness of Common
Lisp you would need to run the reduction the other way round, showing
that any Common Lisp program can be converted to, or emulated by, an
ACL2 program.

Patricia
Jun 23 '06 #290
Pascal Costanza <pc@p-cos.net> writes:
Matthias Blume wrote:
Pascal Costanza <pc@p-cos.net> writes:
Patricia Shanahan wrote:
Vesa Karvonen wrote:
...
> An example of a form of informal reasoning that (practically) every
> programmer does daily is termination analysis. There are type systems
> that guarantee termination, but I think it is fair to say that it
> is not
> yet understood how to make a practical general purpose language, whose
> type system would guarantee termination (or at least I'm not aware
> of such
> a language). It should also be clear that termination analysis need not
> be done informally. Given a program, it may be possible to
> formally prove
> that it terminates.
To make the halting problem decidable one would have to do one of
two
things: Depend on memory size limits, or have a language that really is
less expressive, at a very deep level, than any of the languages
mentioned in the newsgroups header for this message.
Not quite. See http://en.wikipedia.org/wiki/ACL2

What do you mean "not quite"? Of course, Patricia is absolutely
right. Termination-guaranteeing languages are fundamentally less
expressive than Turing-complete languages. ACL2 was not mentioned in
the newsgroup header.


ACL2 is a subset of Common Lisp, and programs written in ACL2 are
executable in Common Lisp. comp.lang.lisp is not only about Common
Lisp, but even if it were, ACL2 would fit.


So what?

Patricia said "less expressive", you said "subset". Where is the
contradiction?

Perhaps you could also bring up the subset of Common Lisp where the
only legal program has the form:

nil

Obviously, this is a subset of Common Lisp and, therefore, fits
comp.lang.lisp. Right?

Yes, if you restrict the language to make it less expressive, you can
guarantee termination. Some restrictions that fit the bill already
exist. IMHO, it is still wrong to say that "Lisp guarantees
termination", just because there is a subset of Lisp that does.

Matthias

PS: I'm not sure if this language guarantees termination, though. :-)
Jun 23 '06 #291
Patricia Shanahan wrote:
Pascal Costanza wrote:
Matthias Blume wrote:
Pascal Costanza <pc@p-cos.net> writes:

Patricia Shanahan wrote:
> Vesa Karvonen wrote:
> ...
>> An example of a form of informal reasoning that (practically) every
>> programmer does daily is termination analysis. There are type
>> systems
>> that guarantee termination, but I think it is fair to say that it
>> is not
>> yet understood how to make a practical general purpose language,
>> whose
>> type system would guarantee termination (or at least I'm not aware
>> of such
>> a language). It should also be clear that termination analysis
>> need not
>> be done informally. Given a program, it may be possible to
>> formally prove
>> that it terminates.
> To make the halting problem decidable one would have to do one of
> two
> things: Depend on memory size limits, or have a language that
> really is
> less expressive, at a very deep level, than any of the languages
> mentioned in the newsgroups header for this message.
Not quite. See http://en.wikipedia.org/wiki/ACL2

What do you mean "not quite"? Of course, Patricia is absolutely
right. Termination-guaranteeing languages are fundamentally less
expressive than Turing-complete languages. ACL2 was not mentioned in
the newsgroup header.


ACL2 is a subset of Common Lisp, and programs written in ACL2 are
executable in Common Lisp. comp.lang.lisp is not only about Common
Lisp, but even if it were, ACL2 would fit.


To prove Turing-completeness of ACL2 from Turing-completeness of Common
Lisp you would need to run the reduction the other way round, showing
that any Common Lisp program can be converted to, or emulated by, an
ACL2 program.


Sorry, obviously I was far from being clear. ACL2 is not
Turing-complete. All iterations must be expressed in terms of
well-founded recursion.
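
To give a flavour of what that means (a sketch; the function name is
mine): ACL2 admits a DEFUN only after proving that it terminates, and
for plain structural recursion it can guess the measure (ACL2-COUNT XS),
which strictly decreases at each recursive call. Because ACL2 is a
subset of Common Lisp, the admitted definition runs unchanged in CL:

(defun my-len (xs)
  (if (consp xs)
      (+ 1 (my-len (cdr xs)))  ; (CDR XS) is smaller under ACL2-COUNT
      0))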
Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
Jun 23 '06 #292
Patricia Shanahan <pa**@acm.org> wrote:
Vesa Karvonen wrote:
...
An example of a form of informal reasoning that (practically) every
programmer does daily is termination analysis. There are type systems
that guarantee termination, but I think it is fair to say that it is not
yet understood how to make a practical general purpose language, whose
type system would guarantee termination (or at least I'm not aware of such
a language). It should also be clear that termination analysis need not
be done informally. Given a program, it may be possible to formally prove
that it terminates.


To make the halting problem decidable one would have to do one of two
things: Depend on memory size limits, or have a language that really is
less expressive, at a very deep level, than any of the languages
mentioned in the newsgroups header for this message.


Patricia, perhaps I'm misunderstanding you here. To make the halting
problem decidable is quite a different thing from saying "given a
program, I may be able to prove that it terminates" (so long as you also
acknowledge the implicit other possibility: I may also be unable to
prove it in any finite, bounded amount of time, even if it does
terminate!).

The fact that the halting problem is undecidable is not a sufficient
reason to reject the kind of analysis that is performed by programmers
to convince themselves that their programs are guaranteed to terminate.
Indeed, proofs that algorithms are guaranteed to terminate even in
pathological cases are quite common. Indeed, every assignment of a
worst-case time bound to an algorithm is also a proof of termination.
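
Euclid's algorithm is the textbook case (a sketch; GCD* is named so as
not to shadow CL:GCD). For nonnegative integers the second argument
strictly decreases at each call and is bounded below by zero, so the
recursion must bottom out; the familiar O(log(min(a,b))) running-time
bound is just a sharper statement of the same termination fact:

(defun gcd* (a b)
  (if (zerop b)
      a
      (gcd* b (mod a b))))  ; (MOD A B) < B: a strictly decreasing measure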

This is, of course, a very good reason to reject the idea that the
static type system for any Turing-complete language could ever perform
this same kind of analysis.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 23 '06 #293
Vesa Karvonen <ve***********@cs.helsinki.fi> wrote:
I think that we're finally getting to the bottom of things. While reading
your responses something became very clear to me: latent-typing and
latent types are not a property of languages. Latent-typing, also known as
informal reasoning, is something that all programmers do as a normal part
of programming. To say that a language is latently-typed is to make a
category mistake, because latent-typing is not a property of languages.
It's pretty clear to me that there are two different things here. There
is the set of reasoning that is done about programs by programmers.
This may or may not have some subset called "type reasoning", depending
on whether there's a real difference between type reasoning and any
other reasoning about program behavior, or it's just a word that we've
learned from early experience to use in certain rather arbitrary
contexts. In either case, then there are language features that look
sort of type-like which can support that same reasoning about program
behavior.

It is, to me, an interesting question whether those language features
are really intrinsically connected to that form of reasoning about
program behavior, or whether they are just one way of doing it where
others would suffice as well. Clearly, there is some variation in these
supporting language features between languages that have them... for
example, whether there are only a fixed set of type tags, or an infinite
(or logically infinite, anyway) set, or whether the type tags may have
other type tags as parameters, or whether they even exist as tags at all
or as sets of possible behaviors. Across all these varieties, the
phrase "latently typed language" seems to be a reasonable shorthand for
"language that supports the programmer in doing latent-type reasoning".
Of course, there are still problems here, but really only if you share
my skepticism that there's any difference between type reasoning and any
other reasoning that's done by programmers on their programs.

Perhaps, if you wanted to be quite careful about your distinctions, you
might want to choose different words for the two. However, then you run
into that same pesky "you can't choose other people's terminology" block
again.
A programmer, working in any language, whether typed or not, performs
informal reasoning. I think it is fair to say that there is a
correspondence between type theory and such informal reasoning. The
correspondence is like the correspondence between informal and formal
math.
I'd be more careful in the analogy, there. Since type theory (as
applied to programming languages, anyway) is really just a set of
methods of proving arbitrary statements about program behaviors, a
corresponding "informal type theory" would just be the entire set of
informal reasoning by programmers about their programs. That would
agree with my skepticism, but probably not with too many other people.
An example of a form of informal reasoning that (practically) every
programmer does daily is termination analysis.


Or perhaps you agree with my skepticism, here. This last paragraph
sounds like you're even more convinced of it than I am.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation
Jun 23 '06 #294

Marshall wrote:
Joe Marshall wrote:

That's the important point: I want to run broken code.
I want to make sure I understand. I can think of several things
you might mean by this. It could be:
1) I want to run my program, even though I know parts of it
are broken, because I think there are parts that are not broken
and I want to try them out.


I think I mean a little of each of these.

Nearly *every* program I have ever used is buggy, so this is actually a
normal state of affairs even when I'm not debugging or developing.
2) I want to run my program, even though it is broken, and I
want to run right up to a broken part and trap there, so I can
use the runtime facilities of the language to inspect what's
going on.


I do this quite often. Sometimes I'll develop `in the debugger'. I'll
change some piece of code and run the program until it traps. Then,
without exiting the debugger, I'll fix the immediate problem and
restart the program at the point it trapped. This technique takes a
bit of practice, but if you are working on something that's complex and
has a lot of state, it saves a lot of time because you don't have to
reconstruct the state every time you make a change.
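
For anyone who hasn't worked this way, here is a minimal Common Lisp
sketch of that style (the function and messages are mine): RESTART-CASE
leaves the error live in the debugger, where you can supply a correction
and continue from the point of the trap instead of reconstructing the
state from scratch:

(defun process (x)
  (restart-case
      (if (numberp x)
          (* 2 x)
          (error "~S is not a number" x))
    (use-value (v)  ; offered in the debugger at the moment of the trap
      :report "Supply a value to use instead."
      (* 2 v))))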
I want to run
as much of the working fragments as I can, and I want a `safety net' to
prevent me from performing undefined operations, but I want the safety
net to catch me at the *last* possible moment.


This statement is interesting, because the conventional wisdom (at
least as I'm used to hearing it) is that it is best to catch bugs
at the *first* possible moment. But I think maybe we're talking
about different continua here. The last last last possible moment
is after the software has shipped to the customer, and I'm pretty
sure that's not what you mean. I think maybe you mean something
more like 2) above.


It is often the case that the proximate cause of a bug is nowhere near
where the fix needs to be applied. This is especially true with the
really hard bugs. A static type system will tell me that there (may)
exist a path through the code that results in an error, but the runtime
check that fails will provide a witness.
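
A tiny illustration of the witness (a sketch; the names are mine): the
failed dynamic check traps holding the actual offending value, which is
often the first real clue to where the fix belongs. The exact message
wording is implementation-dependent:

(defun half (x)
  (check-type x number)  ; on failure: a correctable TYPE-ERROR naming X and its value
  (/ x 2))

(half "six")  ; traps with the witness "six" in hand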

Jun 23 '06 #295
Rob Thorpe wrote:
David Hopwood wrote:

The term "dynamically typed" is well used and understood. The term
untyped is generally associated with languages that as you put it "have
no memory safety", it is a pejorative term. "Latently typed" is not
well used unfortunately, but more descriptive.

Most of the arguments above describe a static type system, then follow
by saying that this is what "type system" should mean, and finish by
saying everything else should be considered untyped. This seems to me
to be an effort to associate dynamically typed languages with this
pejorative term.


I can believe that someone, somewhere out there might have done
this, but I can't recall ever having seen it. In any event, the
standard
term for memory safety is "safety", not "untyped." Further, anyone
who was interested in actually understanding the issues wouldn't
be doing what you describe.

And if you did find someone who was actively doing this I would
say "never attribute to malice what can adequately be explained
by stupidity."
Marshall

Jun 23 '06 #296
Andreas Rossberg wrote:
Marshall wrote:

What we generally (in programming) call variables are locals
and globals. If the languages supports an update operation
on those variables, then calling them variables makes sense.
But "variable" has become such a catch-all term that we call

public static final int FOO = 7;

a variable, even though it can never, ever vary.

That doesn't make any sense.


It does, because it is only a degenerate case. In general, you can have
something like

void f(int x)
{
const int foo = x+1;
//...
}

Now, foo is still immutable and local, but it clearly also varies.


So what then is to be the definition of "variable" if it includes
things that can never vary, because they are only a degenerate
case? Shall it be simply a named value?
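
Andreas's point renders directly into Lisp as well (a sketch): FOO below
is never assigned to, yet each call instantiates the binding with a
different value, so "immutable" and "varying" are not in conflict:

(defun f (x)
  (let ((foo (+ x 1)))  ; no assignment anywhere, yet FOO differs per call
    foo))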
Marshall

Jun 23 '06 #297

Marshall wrote:
Timo Stamm wrote:

This is actually one of the most interesting threads I have read in a
long time. If you ignore the evangelism, there is a lot of high-quality
information and first-hand experience you couldn't find in a dozen books.


Hear hear! This is an *excellent* thread. The evangelism is at
rock-bottom
and the open exploration of other people's way of thinking is at what
looks to me like an all-time high.


What are you, some sort of neo-static pseudo-relational
object-disoriented fascist?

Jun 23 '06 #298
Marshall schreef:
It seems we have languages:
with or without static analysis
with or without runtime type information (RTTI or "tags")
with or without (runtime) safety
with or without explicit type annotations
with or without type inference

Wow. And I don't think that's a complete list, either.


Right. And don't forget that some languages are hybrids in any of those
areas.

--
Affijn, Ruud

"Gewoon is een tijger."
Jun 23 '06 #299
Rob Thorpe schreef:
I would suggest that at least assembly should be referred to as
"untyped".


There are many different languages under the umbrella of "assembly", so
your suggestion is bound to be false.

--
Affijn, Ruud

"Gewoon is een tijger."
Jun 23 '06 #300
