THE GOOD:
1. pickle
2. simplicity and uniformity
3. big library (bigger would be even better)
THE BAD:
1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
90% of the code is function applications. Why not make it convenient?
2. Statements vs Expressions business is very dumb. Try writing
a = if x:
        y
    else: z
3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.
4. splintering of the language: you have the inefficient main language,
and you have a different dialect being developed that needs type
declarations. Why not allow type declarations in the main language
instead, as an option? (Lisp does it.)
5. Why do you need "def" ? In Haskell, you'd write
square x = x * x
6. Requiring "return" is also dumb (see #5)
7. Syntax and semantics of "lambda" should be identical to
function definitions (for simplicity and uniformity)
8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)
9. Syntax for arrays is also bad. [a (b c d) e f] would be better
than [a, b(c,d), e, f]
P.S. If someone can forward this to python-dev, you can probably save some
people a lot of soul-searching
In article <36*************************@posting.google.com>, ra**@cs.mu.oz.au (Ralph Becket) wrote: Let me put it like this. Say I have a statically, expressively, strongly typed language L. And I have another language L' that is identical to L except it lacks the type system. Now, any program in L that has the type declarations removed is also a program in L'. The difference is that a program P rejected by the compiler for L can be converted to a program P' in L' which *may even appear to run fine for most cases*. However, and this is the really important point, P' is *still* a *broken* program. Simply ignoring the type problems does not make them go away: P' still contains all the bugs that program P did.
No. The fallacy in this reasoning is that you assume that "type error"
and "bug" are the same thing. They are not. Some bugs are not type
errors, and some type errors are not bugs. In the latter circumstance
simply ignoring them can be exactly the right thing to do.
(On the other hand, many, perhaps most, type errors are bugs, and so
having a type system provide warnings can be a very useful thing IMO.)
E.
Pascal Costanza wrote: Ken Rose wrote:
Pascal Costanza wrote:
Joachim Durchholz wrote:
Pascal Costanza wrote:
For example, static type systems are incompatible with dynamic metaprogramming. This is objectively a reduction of expressive power, because programs that don't allow for dynamic metaprogramming can't be extended in certain ways at runtime, by definition.
What is dynamic metaprogramming?
Writing programs that inspect and change themselves at runtime.
Ah. I used to do that in assembler. I always felt like I was aiming a shotgun between my toes.
When did self-modifying code get rehabilitated?
I think this was in the late 70's.
Have you got a good reference for the uninitiated?
Thanks
- ken
Pascal Costanza wrote: Matthias Blume wrote:
Pascal Costanza <co******@web.de> writes:
Matthias Blume wrote:
PS: When I say "untyped" I mean it as in "the _untyped_ lambda calculus".
What terms would you use to describe the difference between dynamically and weakly typed languages, then?
For example, Smalltalk is clearly "more" typed than C is. Describing both as "untyped" seems a little bit unfair to me. Safe and unsafe.
BTW, C is typed, Smalltalk is untyped. C's type system just happens to be unsound (in the sense that, as you observed, well-typed programs can still be unsafe).
Can you give me a reference to a paper, or some other literature, that defines the terminology that you use?
I have tried to find a consistent set of terms for this topic, and have only found the paper "Type Systems" by Luca Cardelli (http://www.luca.demon.co.uk/Bibliography.htm#Type systems )
He uses the terms of static vs. dynamic typing and strong vs. weak typing, and these are described as orthogonal classifications. I find this terminology very clear, consistent and useful. But I am open to a different terminology.
My copy, http://research.microsoft.com/Users/...Systems.A4.pdf
on page 3 defines safety as orthogonal to typing in the way Matthias
suggested.
--
Andreas Rossberg, ro******@ps.uni-sb.de
"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.
Ken Rose wrote: Pascal Costanza wrote:
Ken Rose wrote:
Pascal Costanza wrote:
Joachim Durchholz wrote:
Pascal Costanza wrote: For example, static type systems are incompatible with dynamic metaprogramming. This is objectively a reduction of expressive power, because programs that don't allow for dynamic metaprogramming can't be extended in certain ways at runtime, by definition. What is dynamic metaprogramming? Writing programs that inspect and change themselves at runtime. Ah. I used to do that in assembler. I always felt like I was aiming a shotgun between my toes.
When did self-modifying code get rehabilitated? I think this was in the late 70's.
Have you got a good reference for the uninitiated? http://www.laputan.org/ref89/ref89.html and http://www.laputan.org/brant/brant.html are probably good starting
points. http://www-db.stanford.edu/~paepcke/...ts/mopintro.ps
is an excellent paper, but not for the faint of heart. ;)
Pascal
--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
Andreas Rossberg wrote: Can you give me a reference to a paper, or some other literature, that defines the terminology that you use?
I have tried to find a consistent set of terms for this topic, and have only found the paper "Type Systems" by Luca Cardelli (http://www.luca.demon.co.uk/Bibliography.htm#Type systems )
He uses the terms of static vs. dynamic typing and strong vs. weak typing, and these are described as orthogonal classifications. I find this terminology very clear, consistent and useful. But I am open to a different terminology.
My copy,
http://research.microsoft.com/Users/...Systems.A4.pdf
on page 3 defines safety as orthogonal to typing in the way Matthias suggested.
Yes, but it says dynamically typed vs statically typed where Matthias
says untyped vs typed.
Pascal
--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
Pascal Costanza wrote: My copy,
http://research.microsoft.com/Users/...Systems.A4.pdf
on page 3 defines safety as orthogonal to typing in the way Matthias suggested.
Yes, but it says dynamically typed vs statically typed where Matthias says untyped vs typed.
Huh? On page 2 Cardelli defines typed vs. untyped. Table 1 on page 5
clearly identifies Lisp as an untyped (but safe) language. He also
speaks of static vs. dynamic _checking_ wrt safety, but where do you
find a definition of dynamic typing?
- Andreas
--
Andreas Rossberg, ro******@ps.uni-sb.de
"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.
Matthias Blume <fi**@my.address.elsewhere> writes: A 100,000 line program in an untyped language is useless to me if I am trying to make modifications -- unless it is written in a highly stylized way which is extensively documented (and which usually means that you could have captured this style in static types).
The only untyped languages I know are assemblers. (ISTR that even
intercal can't be labelled "untyped" per se).
Are we speaking about assembler here?
--
__Pascal_Bourguignon__ http://www.informatimago.com/
Pascal Bourguignon <sp**@thalassa.informatimago.com> writes: The only untyped languages I know are assemblers. (ISTR that even intercal can't be labelled "untyped" per se).
Are we speaking about assembler here?
No, we are speaking different definitions of "typed" and "untyped"
here. Even assembler is typed if you look at it the right way.
As I said before, I mean "untyped" as in "The Untyped Lambda
Calculus" which is a well-established term.
Matthias
Pascal Costanza <co******@web.de> wrote: Unless the static type system takes away the expressive power that I need.
Even within a static type system, you can always revert to "dynamic
typing" by introducing a sufficiently universal datatype (say,
s-expressions).
Usually the need for real runtime flexibility is quite localized (but
of course this depends on the application). Unless you really need runtime
flexibility nearly everywhere (and I cannot think of an example where
this is the case), the universal datatype approach works quite well
(though you lose the advantages of static typing in these places, of
course, and you have to compensate with more unit tests).
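For illustration, such a universal datatype might look like this in
Haskell (Univ and uAdd are invented names for the sketch, not from any
library):

data Univ = UNum Double
          | USym String
          | UList [Univ]
          deriving (Show, Eq)

-- Every operation has to inspect the tag at runtime, exactly as a
-- dynamically typed language would; a mismatch is a runtime error
-- rather than a compile-time type error.
uAdd :: Univ -> Univ -> Univ
uAdd (UNum x) (UNum y) = UNum (x + y)
uAdd _        _        = error "uAdd: arguments are not numbers"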
I have given reasons when not to use a static type system in this thread.
Nobody forces you to use a static type system. Languages, with their
associated type systems, are *tools*, and not religions. You use
what is best for the job.
But it's a bit stupid to frown upon everything else but one's favorite
way of doing things. There are other ways. They may work a bit
differently, and it might be not obvious how to do it if you're used
to doing it differently, but that doesn't mean other ways are
completely stupid. And you might actually learn something once
you know how to do it both ways :-)
Please take a look at the Smalltalk MOP or the CLOS MOP and tell me what a static type system should look like for these languages!
You cannot take an arbitrary language and attach a good static type
system to it. Type inference will be much too difficult, for example.
There's a fine balance between language design and a good type system
that works well with it.
If you want to use Smalltalk or CLOS with dynamic typing and unit
tests, use them. If you want to use Haskell or OCaml with static typing
and type inference, use them. None is really "better" than the other.
Both have their advantages and disadvantages. But don't dismiss
one of them just because you don't know better.
- Dirk
Joachim Durchholz wrote: Not quite - that was a loss of 500 million dollars. I don't know what the software development costs were, so I'm just guessing here, but I think it's relatively safe to assume a doubly redundant system would already have paid off if it had caught the problem.
Since the Mars rover mission a few years ago cost only about $250 million,
I'm going to assume that you included payload cost. Here are some
relevant references I found [after sig], which suggest a price per rocket
of well under $100 million, and that the cost of the "four uninsured scientific
satellites" was about $500 million.
It used to be that rockets needed a lot of real-world tests before
people would stick expensive payloads on them. For a while, dead
weight was used, but people like amateur hams got permission to
put "cheap" satellites in its place, and as the reliability increased,
more and more people were willing to take chances with unproven
rockets.
So there's an interesting tradeoff here between time spent on live
testing and the chance it will blow up. Suppose Ariane decided
to launch with just bricks as a payload. Then they would have
been out ~$75 million. But suppose they could convince someone
to take a 10% chance of failure to launch a $100 million satellite
for half price, at $40 million. Statistically speaking, that's a good
deal. As long as it really is a 10% chance.
(The satellites were uninsured, which suggests that this was
indeed the case.)
However, it seems that 4 of the first 14 missions failed, making
about a 30% failure rate. It also doesn't appear that all of those
were caused by software failures; the 4th was in a "cooling
circuit."
The point is that no amount of software technology would have caught the problem if the specifications are wrong.
I agree.
Andrew
da***@dalkescientific.com

http://www.namibian.com.na/2002/june...2651A180A.html
] The Ariane 44L rocket equipped with four liquid strap-on boosters --
] the most powerful in the Ariane-4 series --
...
] Specialists estimated the cost of the satellite, launch and insurance at
] more than $250 million.

http://www.cnn.com/2000/TECH/space/1...t.ariane.reut/
] Western Europe's new generation Ariane-5 rocket has placed three
] satellites into space
...
] Experts have estimated the cost of the [ASTRA 2D] satellite,
] launch and insurance at over $85 million
...
] The estimated cost of the GE-8 satellite, launch and insurance is
] over $125 million.
] But Ariane-5's career began with a spectacular failure during its
] maiden test launch in June 1996, exploding 37 seconds after
] lift-off and sending four uninsured scientific satellites worth $500
] million plunging into mangrove swamps on French Guiana's coast.

http://www.centennialofflight.gov/es...riane/SP42.htm
] After Arianespace engineers rewrote the rocket's control software,
] the second Ariane-5 launch successfully took place on October 30,
] 1997. More launches followed and the rocket soon entered full commercial
] service, although it suffered another failure on its tenth launch in July 2001.
] Ariane-5 joined the Russian Proton, American Titan IV and Japanese
] H-IIA as the most powerful rockets in service. Ariane-5 initially had a very
] high vehicle cost, but Arianespace mounted an aggressive campaign to
] significantly reduce this cost and make the rocket more cost-effective. The
] company also planned further upgrades to the Ariane-5 to enable it to remain
] competitive against a growing number of competitors.

http://www.chron.com/cgi-bin/auth/st...e/space/news/99/990824.html
] Each launch of Japan's flagship H-2 rocket to place a satellite into
] geostationary orbit costs close to 19 billion yen, about double the cost
] of competitors such as the European Space Agency's Ariane rocket.
(19 billion yen ~ $190 million => ~$100 million for geostationary orbit on Ariane)
] Part of the six-billion European-Currency-Unit ($6.28 billion U.S.) cost of
] the Ariane 5 project went toward construction of new facilities at ESA's Kourou,
] French Guiana launch complex

http://www.rte.ie/news/2002/1212/satellite.html
] It is the fourth failure of an Ariane-5 in its 14-mission history, and is
] being seen as a major setback for the European space programme.

See also http://www.dw-world.de/english/0,336...713425,00.html
] the problem occurred in the cooling circuit of one of the rocket's main
] engines. A change in engine speed around 180 seconds after take-off
] caused the launcher to "demonstrate erratic behaviour".
Pascal Costanza wrote: Adrian Hey wrote: I've been using statically typed FPL's for a good few years now, and I can only think of one occasion where I had "good" code rejected by the type checker (and even then the work around was trivial). All other occasions it was telling me my programs were broken (and where they were broken), without me having to test it.
This is good thing. Maybe you haven't written the kind of programs yet that a static type system can't handle.
You're right, I haven't. I would say the overwhelming majority of programs
"out there" fall into this category. I am aware that some situations are
difficult to handle in a statically typed language. An obvious example
in Haskell would be trying to type a function which interpreted strings
representing arbitrary Haskell expressions and returned their value...
eval :: String -> ??
If this situation is to be dealt with at all, some kind of dynamic
type system seems necessary. I don't think anybody is denying that
(certainly not me). As for dynamics, I don't think anybody would deny the usefulness of a dynamic type system as a *supplement to* the static type system.
I don't deny that static type systems can be a useful supplement to a dynamic type system in certain contexts.
I don't think anybody who read your posts would get that impression :-)
There is an important class of programs - those that can reason about themselves and can change themselves at runtime - that cannot be statically checked.
Yes indeed. Even your common or garden OS falls into this category I
think, but that doesn't mean you can't statically type check individual
fragments of code (programs) that run under that OS. It just means
you can't statically type check the entire system (OS + application
programs).
Your claim implies that such code should not be written,
What claim? I guess you mean the one about dynamic typing being a
useful supplement to, but not a substitute for, static typing.
If so, I don't think it implies that at all.
at least not "most of the time" (whatever that means).
Dunno who you're quoting there, but it isn't me.
Why? Maybe I am missing an important insight about such programs that you have.
Possibly, but it seems more likely that you are simply misrepresenting
what I (and others) have written in order to create a straw man to demolish.
Regards
--
Adrian Hey
Dirk Thierbach wrote: Pascal Costanza <co******@web.de> wrote: I have given reasons when not to use a static type system in this thread.
Nobody forces you to use a static type system. Languages, with their associated type systems, are *tools*, and not religions. You use what is best for the job.
_exactly!_
That's all I have been trying to say in this whole thread.
Marshall Spight asked http://groups.google.com/groups?selm...38%40sccrnsc01
why one would not want to use a static type system, and I have tried to
give some reasons.
I am not trying to force anyone to use a dynamically checked language. I
am not even trying to convince anyone. I am just trying to say that
someone might have very good reasons if they didn't want to use a static
type system. Please take a look at the Smalltalk MOP or the CLOS MOP and tell me what a static type system should look like for these languages!
You cannot take an arbitrary language and attach a good static type system to it. Type inference will be much too difficult, for example. There's a fine balance between language design and a good type system that works well with it.
Right. As I said before, you need to reduce the expressive power of the
language.
If you want to use Smalltalk or CLOS with dynamic typing and unit tests, use them. If you want to use Haskell or OCaml with static typing and type inference, use them. None is really "better" than the other. Both have their advantages and disadvantages. But don't dismiss one of them just because you don't know better.
ditto
Thank you for rephrasing this in a probably better understandable way.
Pascal
Matthias Blume <fi**@my.address.elsewhere> writes: In fact, you should never need to "solve the halting problem" in order to statically check your program. After all, the programmer *already has a proof* in her mind when she writes the code! All that's needed (:-) is for her to provide enough hints as to what that proof is so that the compiler can verify it. (The smiley is there because, as we are all painfully aware, this is much easier said than done.)
I'm having trouble proving that MYSTERY returns T for lists of finite
length. I had an idea that it would, but now I'm not sure. Can the
compiler verify it?
(defun kernel (s i)
(list (not (car s))
(if (car s)
(cadr s)
(cons i (cadr s)))
(cons 'y (cons i (cons 'z (caddr s))))))
(defconstant k0 '(t () (x)))
(defun mystery (list)
(let ((result (reduce #'kernel list :initial-value k0)))
(cond ((null (cadr result)))
((car result) (mystery (cadr result)))
(t (mystery (caddr result))))))
Andreas Rossberg wrote: Pascal Costanza wrote:
My copy,
http://research.microsoft.com/Users/...Systems.A4.pdf
on page 3 defines safety as orthogonal to typing in the way Matthias suggested.
Yes, but it says dynamically typed vs statically typed where Matthias says untyped vs typed.
Huh? On page 2 Cardelli defines typed vs. untyped. Table 1 on page 5 clearly identifies Lisp as an untyped (but safe) language. He also speaks of static vs. dynamic _checking_ wrt safety, but where do you find a definition of dynamic typing?
Hmm, maybe I was wrong. I will need to check that again - it was some
time ago that I read the paper. Oh dear, I am getting old. ;)
Thanks for pointing this out.
Pascal
Adrian Hey wrote: You're right, I haven't. I would say the overwhelming majority of programs "out there" fall into this category.
Do you have empirical evidence for this statement? Maybe your sample set
is not representative? As for dynamics, I don't think anybody would deny the usefulness of a dynamic type system as a *supplement to* the static type system.
I don't deny that static type systems can be a useful supplement to a dynamic type system in certain contexts.
I don't think anybody who read your posts would get that impression :-)
Well, then they don't read closely enough. In my very first posting wrt this
topic, I have suggested soft typing as a good compromise. See http://groups.google.com/groups?selm...rz.uni-bonn.de
Yes, you can certainly tell that I am a fan of dynamic type systems. So
what? Someone has asked why one would want to get rid of a static type
system, and I am responding.
(Thanks for the smiley. ;) Your claim implies that such code should not be written,
What claim?
"Most code [...] should be [...] checked for type errors at compile time." at least not "most of the time" (whatever that means).
Dunno who you're quoting there, but it isn't me.
Pascal
my********************@jpl.nasa.gov (Erann Gat) wrote in message news:<my*************************************@192.168.1.51>... No. The fallacy in this reasoning is that you assume that "type error" and "bug" are the same thing. They are not. Some bugs are not type errors, and some type errors are not bugs. In the latter circumstance simply ignoring them can be exactly the right thing to do.
Just to be clear, I do not believe "bug" => "type error". However, I do
claim that "type error" (in reachable code) => "bug". If at some point
a program P' (in L') may eventually abort with an exception due to an
ill typed function application then I would insist that P' is buggy.
Here's the way I see it:
(1) type errors are extremely common;
(2) an expressive, statically checked type system (ESCTS) will identify
almost all of these errors at compile time;
(3) type errors flagged by a compiler for an ESCTS can pinpoint the source
of the problem whereas ad hoc assertions in code will only identify a
symptom of a type error;
(4) the programmer does not have to litter type assertions in a program
written in a language with an ESCTS;
(5) an ESCTS provides optimization opportunities that would otherwise
be unavailable to the compiler;
(6) there will be cases where the ESCTS requires one to code around a
constraint that is hard/impossible to express in the ESCTS (the more
expressive the type system, the smaller the set of such cases will be.)
The question is whether the benefits of (2), (3), (4) and (5) outweigh
the occasional costs of (6).
-- Ralph
Hi Matthias Blume, Pascal Costanza <co******@web.de> writes:
Well, to say this once more, there are programs out there that have a consistent design, that don't have "problems", and that cannot be statically checked.
Care to give an example?
(setf *debugger-hook*
(lambda (condition value)
(declare (ignorable condition value))
(invoke-restart (psychic))))
(defun psychic ()
(let* ((*read-eval* nil)
(input (ignore-errors (read))))
(format t "Input ~S is of type ~S.~%" input (type-of input))))
(loop (psychic))
This can only be statically compiled in the most trivial sense where every
input object type is permissible (i.e. every object is of type T).
Regards,
Adam
Pascal Costanza wrote: Joachim Durchholz wrote:
Pascal Costanza wrote:
For example, static type systems are incompatible with dynamic metaprogramming. This is objectively a reduction of expressive power, because programs that don't allow for dynamic metaprogramming can't be extended in certain ways at runtime, by definition.
What is dynamic metaprogramming?
Writing programs that inspect and change themselves at runtime.
That's just the first part of the answer, so I have to make the second
part of the question explicit:
What is dynamic metaprogramming good for?
I looked into the papers whose URLs you gave later, but I'm still
missing a compelling reason to use a MOP. As far as I can see from the
papers, a MOP is a bit like pointers: very powerful, very dangerous, and
it's difficult to envision a system that does the same without the power
and danger - but such systems do indeed exist.
(For a summary, scroll to the end of this post.)
Just to enumerate the possibilities in the various URLs given:
- Prioritized forwarding to components
(I think that's a non-recommended technique, as it makes the
compound object highly dependent on the details of its constituents,
particularly if a message is understood by many constituents - but
anyway, here goes:) Any language that has good support for higher-order
functions can do this directly.
- Dynamic fields
Frankly, I don't understand why on earth one would want to have objects
with a variant set of fields. I could do the same easily by adding a
dictionary to the objects, and be done with it (and get the additional
benefit that the dictionary entries will never collide with a field name).
Conflating the name spaces of field names and dictionary keys might
offer some syntactic advantages (callers don't need to differentiate
between static and dynamic fields), but I fail to imagine any good use
for this all... (which may, of course, be lack of imagination on my
side, so I'd be happy to see anybody explain a scenario that needs
exactly this - and then I'll try to see how this can be done without MOP
*g*).
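For illustration, here is a minimal sketch of that dictionary approach
in Haskell (Obj, extras, setExtra and so on are invented names):

import qualified Data.Map as Map

-- An object with fixed, statically known fields plus a dictionary for
-- "dynamic" fields. The dictionary keys live in their own namespace,
-- so they can never collide with a declared field name.
data Obj = Obj
  { name   :: String
  , extras :: Map.Map String String
  }

setExtra :: String -> String -> Obj -> Obj
setExtra k v o = o { extras = Map.insert k v (extras o) }

getExtra :: String -> Obj -> Maybe String
getExtra k o = Map.lookup k (extras o)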
- Dynamic protection (based on sender's class/type)
This is a special case of "multiple views" (implement protection by
handing out a view with a restricted subset of functions to those
classes - other research areas have called this "capability-based
programming").
- Multiple views
Again, in a language with proper handling for higher-order functions
(HOFs), this is easy: a view is just a record of accessor functions, and
a hidden reference to the record for which the view holds. (If you
really need that.)
Note that in a language with good HOF support, calls that go through
such records are syntactically indistinguishable from normal function
calls. (Such languages do exist; I know for sure that this works with
Haskell.)
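For illustration, a minimal sketch of such a view in Haskell
(CounterView and mkCounter are invented names):

import Data.IORef

-- A restricted view of a counter: just a record of functions closing
-- over hidden state. Holders of the view can increment and read the
-- counter; the reset capability is handed out separately.
data CounterView = CounterView
  { increment :: IO ()
  , current   :: IO Int
  }

mkCounter :: IO (CounterView, IO ())  -- the view, plus a privileged reset
mkCounter = do
  r <- newIORef 0
  let view = CounterView { increment = modifyIORef r (+ 1)
                         , current   = readIORef r }
  return (view, writeIORef r 0)

Note that "increment view" and "current view" look exactly like
ordinary function calls, which is the point made above.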
- Protocol matching
I simply don't understand what's the point with this: yes of course this
can be done using MOP, but where's the problem that's being simplified
with that approach?
- Collection of performance data
That's nonportable anyway, so it can be built right into the runtime,
and with less gotchas (if measurement mechanisms are integrated into the
runtime, they will rather break than produce bogus data - and I prefer a
broken instrument to one that will silently give me nonsense readings,
thank you).
- Result caching
Languages with good HOF support usually have a "memo" or "memoize"
function that does exactly this.
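For illustration, such a memoizing combinator needs nothing beyond
higher-order functions and a map (a sketch only; library versions are
more elaborate):

import Data.IORef
import qualified Data.Map as Map

-- Wrap a pure function so each argument is computed at most once;
-- later calls with the same argument are answered from the cache.
memoize :: Ord a => (a -> b) -> IO (a -> IO b)
memoize f = do
  cache <- newIORef Map.empty
  return $ \x -> do
    m <- readIORef cache
    case Map.lookup x m of
      Just y  -> return y
      Nothing -> do
        let y = f x
        writeIORef cache (Map.insert x y m)
        return y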
- Coercion
Well, of all things, this really doesn't need MOP to work well.
- Persistency
(and, as the original author forgot: network proxies - the issues are
similar)
Now here's a thing that indeed cannot be retrofitted to a language
without MOP.
(Well, performance counting can't be retrofitted as well, but that's
just a programmer's tool that I'd /expect/ to be part of the development
system. I have no qualms about MOP in the developer system, but IMHO it
should not be part of production code, and persistence and proxying for
remote objects are needed for running productive systems.)
For the first paper, this leaves me with a single valid application for
a MOP. At which point I can say that I can require that "any decent
language should have this built in": not in the sense that every
run-time system should include a working TCP/IP stack, but that every
run-time system should include mechanisms for marshalling and
unmarshalling objects (and quite many do).
On to the second paper (Brant/Foote/Johnson/Roberts).
- Image stripping
I.e. finding out which functions might be called by a given application.
While this isn't Smalltalk-specific, it's specific to dynamic languages,
so this doesn't count: finding the set of called functions is /trivial/
in a static language, since statically-typed languages don't usually
offer ways to construct function calls from lexical elements as typical
dynamic languages do.
- Class collaboration, interaction diagrams
Useful and interesting tools.
Of course, if the compiler is properly modularized, it's easy to write
them based on the string representation, instead of using reflective
capabilities.
- Synchronized methods, pre/postcondition checking
Here, the sole advantage of having an implementation in source code
instead of in the run-time system seems to be that no recompilation is
necessary if one wishes to change the status (method is synchronized or
not, assertions are checked or not).
Interestingly, this is not a difference between MOP and no MOP, it's a
difference between static and dynamic languages.
Even that isn't too interesting. For example, I have worked with Eiffel
compilers, and at least two of them do not require any recompilation if
you want to enable or disable assertion checking (plus, at least for one
compiler, it's possible to switch checking on and off on a per-program,
per-class, or even per-function basis), so this isn't the exclusive
domain of dynamic languages.
Of course, such things are easier to add as an afterthought if the
system is dynamic and such changes can be done with user code - but
since language and run-time system design are as much about giving power
as guarantees to the developer, and giving guarantees necessarily
entails restricting what a developer can do, I'm entirely unconvinced
that a dynamic language is the better way to do that.
- Multimethods
Well, I don't see much value in them anyway...
.... On to Andreas Paepcke's paper.
I found it more interesting than the other two because it clearly spells
out what MOPs are intended to be good for.
One of the main purposes, in Paepcke's view, is making it easier to
write tools. In fact reflective systems make this easier, because all
the tricky details of converting source code into an internal data
object have already been handled by the compiler.
On the other hand, I don't quite see why this should be more difficult
for a static language.
Of course, if the language designer "just wanted to get it to compile",
anybody who wants to write tools for the language has to rewrite the
parser and decorator, simply because the original tools are not built
for separating these phases (to phrase it in a polite manner). However,
in the languages where it's easy to "get it to compile" without
compromising modularity, I have seen lots of user-written tools, too. I
think the main difference is that when designing a run-time system for
introspection, designers are forced to do a very modular compiler design
- which is a Good Thing, but you can do a good design for a
non-introspective language just as well :-)
In other words, I don't think that writing tools provides enough reason
for introspection: the goals can be attained in other ways, too.
The other main purpose in his book is the ability to /extend/ the
language (and, as should go without saying, without affecting code that
doesn't use the extensions).
He claims it's good for experimentation (to which I agree, but I
wouldn't want or need code for language experimentation in production code).
Oh, I see that's already enough of reasons by his book... not by mine.
Summary:
========
Most reasons given for the usefulness of a MOP are irrelevant. The
categories here are (in no particular order):
* Unneeded in a language without introspection (the argument becomes
circular)
* Easily replaced by good higher-order function support
* Programmer tools (dynamic languages tend to be better here, but that's
more of a historical accident: languages with a MOP are usually highly
dynamic, so a good compiler interface is a must - but nothing prevents
the designers of static languages from building their compilers with a
good interface, and in fact some static languages have rich tool
cultures just like the dynamic ones)
A few points have remained open, either because I misunderstood what the
respective author meant, or because I don't see any problem in handling
the issues statically, or because I don't see any useful application of
the mechanism. The uses include:
* Dynamic fields
* Protocol matching
* Coercion
And, finally, there's the list of things that can be done using MOP, but
where I think that they are better handled as part of the run-time system:
* (Un-)Marshalling
* Synchronization
* Multimethods
For (un-)marshalling, I think that this should be closed off and hidden
from the programmer's powers because it opens up all the implementation
details of all the objects. Anybody inspecting source code will have to
check the entire sources to be sure that a private field in a record is
truly private, and not accessed via the mechanisms that make user-level
implementation of (un-)marshalling possible.
Actually, all you need is a builtin pair of functions that convert some
data object from and to a byte stream; user-level code can then still
implement all the networking protocol layers, connection semantics etc.
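To sketch the kind of interface I have in mind (Haskell-flavoured, with
invented names; the real design would need more thought):

import Data.ByteString (ByteString)

-- The only privileged, built-in part: one pair of functions per type.
-- Everything else - framing, protocol layers, connection handling -
-- can be ordinary user-level code over ByteString.
class Marshal a where
  toBytes   :: a -> ByteString
  fromBytes :: ByteString -> Maybe a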
For synchronization, guarantees are more important than flexibility. To
be sure that a system has no race conditions, I must be sure that the
locking mechanism in place (whatever it is) will work across all
modules, regardless of author. Making libraries interoperate that use
different locking strategies sounds like a nightmare to me - and if
everybody must use the same locking strategy, it should be part of the
language, not part of a user-written MOP library.
However, that's just a preliminary view; I'd be interested in hearing
reports from people who actually encountered such a situation (I
haven't, so I may be seeing problems where there aren't any).
For multimethods, I don't see that they should be part of a language
anyway - but that's a discussion for another thread that I don't wish to
repeat now (and this post is too long already).
Rambling mode OFF.
Regards,
Jo
Pascal Costanza wrote: Joachim Durchholz wrote:
Pascal Costanza wrote:
See the example of downcasts in Java.
Please do /not/ draw your examples from Java, C++, or Eiffel. Modern static type systems are far more flexible and powerful, and far less obtrusive than the type systems used in these languages.
This was just one obvious example in which you need a workaround to make the type system happy. There exist others.
Then give these examples, instead of presenting us with strawman examples. A modern type system has the following characteristics:
I know what modern type systems do.
Then I don't understand your point of view.
Regards,
Jo
Pascal Costanza wrote: Matthias Blume wrote:
Pascal Costanza <co******@web.de> writes:
There are also programs which I cannot express at all in a purely dynamically typed language. (By "program" I mean not only the executable code itself but also the things that I know about this code.) Those are the programs which are protected against certain bad things from happening without having to do dynamic tests to that effect themselves.
This is a circular argument. You are already suggesting the solution in your problem description.
Is it? Am I? Is it too much to ask to know that the invariants that my code relies on will, in fact, hold when it gets to execute?
Yes, because the need might arise to change the invariants at runtime, and you might not want to stop the program and restart it in order just to change it.
Then it's not an invariant.
Or the invariant is something like "foo implies invariant_1 and not foo
implies invariant_2", where "foo" is the condition that changes over the
lifetime of the object.
Invariants are, by definition, the properties of an object that will
always hold.
Or are you talking about system evolution and maintenance?
That would be an entirely new aspect in the discussion, and you should
properly forewarn us so that we know for sure what you're talking about.
Regards,
Jo
Joachim Durchholz <jo***************@web.de> writes: And, finally, there's the list of things that can be done using MOP, but where I think that they are better handled as part of the run-time system: * (Un-)Marshalling * Synchronization * Multimethods
The MOP is an interface to the run-time system for common object
services. I do not understand your position that these would be
better handled by the run-time.
For (un-)marshalling, I think that this should be closed off and hidden from the programmer's powers because it opens up all the implementation details of all the objects.
What if I want to (un-)marshall from/to something besides a byte
stream, such as an SQL database? I don't want one of the object
services my system depends on to be so opaque because a peer thought I
would be better off that way. Then again, I have never understood the
desire to hide things in programming languages.
Anybody inspecting source code will have to check the entire sources to be sure that a private field in a record is truly private, and not accessed via the mechanisms that make user-level implementation of (un-)marshalling possible.
If you look at the MOP in CLOS, you can use the slot-value-using-class
method to ensure that getting/setting the slot thru any interface will
trigger the appropriate code. It does not matter, private, public,
wether they use SLOT-VALUE or an accessor. This is also useful for
transaction mgmt.
The MOP is an interface to the run-time's object services.
--
Sincerely, Craig Brozefsky <cr***@red-bean.com>
No war! No racist scapegoating! No attacks on civil liberties!
Chicago Coalition Against War & Racism: www.chicagoantiwar.org
Pascal Bourguignon fed this fish to the penguins on Thursday 23 October
2003 11:33 am: The only untyped languages I know are assemblers. (ISTR that even intercal can't be labelled "untyped" per se).
Are we speaking about assembler here?
REXX might qualify (hmmm, I think DCL is also untyped).
notstring = 5
string = "5"
what = string + notstring
when = notstring + string
say "5 + '5' (notstring + string)" when
say "'5' + 5 (string + notstring)" what
s. = "empty"
s.string = 2.78
n. = 3.141592654
n.notstring = "Who?"
say "s.5" s.5
say "s.'5'" s."5"
say "s.1" s.1
say "s.'2'" s."2"
say "s.string" s.string
say "s.notstring" s.notstring
say "n.string" n.string
say "n.notstring" n.notstring
[wulfraed@beastie wulfraed]$ rexx t.rx
5 + '5' (notstring + string) 10
'5' + 5 (string + notstring) 10
s.5 2.78
s.'5' empty5
s.1 empty
s.'2' empty2
s.string 2.78
s.notstring 2.78
n.string Who?
n.notstring Who?
Apparently literal strings are not allowed in the stem look-up,
resulting in the stem default of empty followed by the concatenated
literal.
--
============================================================
wl*****@ix.netcom.com  | Wulfraed Dennis Lee Bieber  KD6MOG
wu******@dm.net        | Bestiaria Support Staff
============================================================
Bestiaria Home Page: http://www.beastie.dm.net/
Home Page: http://www.dm.net/~wulfraed/
Pascal Costanza <co******@web.de> wrote in message news:<bn**********@newsreader2.netcologne.de>... Ralph Becket wrote: This is utterly bogus. If you write unit tests beforehand, you are already pre-specifying the interface that the code to be tested will present.
I fail to see how dynamic typing can confer any kind of advantage here. Read the literature on XP.
What, all of it?
Why not just enlighten me as to the error you see in my contention
about writing unit tests beforehand? Are you seriously claiming that concise, *automatically checked* documentation (which is one function served by explicit type declarations) is inferior to unchecked, ad hoc commenting?
I am sorry, but in my book, assertions are automatically checked.
*But* they are not required.
*And* if they are present, they can only flag a problem at runtime.
*And* then at only a single site. For one thing, type declarations *cannot* become out-of-date (as comments can and often do) because a discrepancy between type declaration and definition will be immediately flagged by the compiler.
The same holds for assertions as soon as they are run by the test suite.
That is not true unless your test suite is bit-wise exhaustive. I don't think you understand much about language implementation.
...and I don't think you understand much about dynamic compilation. Have you ever checked some not-so-recent-anymore work about, say, the HotSpot virtual machine?
Feedback directed optimisation and dynamic FDO (if that is what you
are suggesting is an advantage of HotSpot) are an implementation
technology and hence orthogonal to the language being compiled.
On the other hand, if you are not referring to FDO, it's not clear
to me what relevance HotSpot has to the point under discussion. A strong, expressive, static type system provides for optimisations that cannot be done any other way. These optimizations alone can be expected to make a program several times faster. For example:
You are only talking about micro-efficiency here. I don't care about that, my machine is fast enough for a decent dynamically typed language.
Speedups (and resource consumption reduction in general) by (in many
cases) a factor or two or more constitute "micro-efficiency"? On top of all that, you can still run your code through the profiler, although the need for hand-tuned optimization (and consequent code obfuscation) may be completely obviated by the speed advantage conferred by the compiler exploiting a statically checked type system.
Have you checked this?
Do you mean have I used a profiler to search for bottlenecks in programs
in a statically type checked language? Then the answer is yes.
Or do you mean have I observed a significant speedup when porting from
C# or Python to Mercury? Again the answer is yes.
Weak and dynamic typing are not the same thing.
Let us try to draw some lines and see if we can agree on *something*.
UNTYPED: values in the language are just bit patterns and all
operations, primitive or otherwise, simply twiddle the bits
that come their way.
DYNAMICALLY TYPED: values in the language carry type identifiers, but
any value can be passed to any function. Some built-in functions will
raise an exception if the type identifiers attached to their arguments
are of the wrong sort. Such errors can only be identified at runtime.
STATICALLY TYPED: the compiler carries out a proof that no value of the
wrong type will ever be passed to a function expecting a different type,
anywhere in the program. (Note that with the addition of a universal
type and a checked runtime dynamic cast operator, one can add dynamically
typed facilities to a statically typed language.)
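Haskell's Data.Dynamic is one concrete example of such an addition:
toDyn injects any (Typeable) value into the universal type Dynamic, and
fromDynamic is the checked runtime cast. A small illustration:

import Data.Dynamic

-- A heterogeneous list hidden behind the universal type Dynamic.
mixed :: [Dynamic]
mixed = [toDyn (42 :: Int), toDyn "hello", toDyn True]

-- The checked cast: Just n if the value really is an Int, else Nothing.
asInt :: Dynamic -> Maybe Int
asInt = fromDynamic

main :: IO ()
main = print (map asInt mixed)  -- [Just 42,Nothing,Nothing]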
The difference between an untyped program that doesn't work (it produces
the wrong answer) and a dynamically typed program with a type bug (it
may throw an exception) is so marginal that I'm tempted to lump them both
in the same boat.
No. The original question asked in this thread was along the lines of why abandon static type systems and why not use them always. I don't need to convince you that a proposed general solution doesn't always work, you have to convince me that it always works.
Done: just add a universal type. See Mercury for example.
[...] The burden of proof is on the one who proposes a solution.
What? You're the one claiming that productivity (presumably in the
sense of leading to a working, efficient, reliable, maintainable
piece of code) is enhanced by using languages that *do not tell you
at compile time when you've made a mistake*!
-- Ralph
On Thursday 23 October 2003 19:30, Joachim Durchholz wrote: - Dynamic fields Frankly, I don't understand why on earth one would want to have objects with a variant set of fields. I could do the same easily by adding a dictionary to the objects, and be done with it (and get the additional benefit that the dictionary entries will never collide with a field name). Conflating the name spaces of field names and dictionary keys might offer some syntactic advantages (callers don't need to differentiate between static and dynamic fields), but I fail to imagine any good use for this all... (which may, of course, be lack of imagination on my side, so I'd be happy to see anybody explain a scenario that needs exactly this - and then I'll try to see how this can be done without MOP *g*).
From what I understand zope uses this extensively in how you do stuff with the
ZODB. For example when rendering an object it looks for the closest callable
item called index_html. This means you can add an object to a folder that is
called index_html and is callable and it just works. I have a lot of objects
where it is not defined in the code what variables they will have and at
runtime these objects can be added. At least in python you can replace a
method with a callable object and this is very useful to do.
Overall when working with zope I can't imagine not doing it that way. It saves
a lot of time and it makes for very maintainable apps. You can view your
program as being transparently persistent so you override methods with
objects just like you normally would be inheriting from a class and then
overriding methods in it. I really like using an OODB for apps and one of the
interesting things is that you end up refactoring objects in your database
just like you would normally refactor code and it is pretty much the same
process.
Ralph Becket wrote: Here's the way I see it: (1) type errors are extremely common; (2) an expressive, statically checked type system (ESCTS) will identify almost all of these errors at compile time; (3) type errors flagged by a compiler for an ESCTS can pinpoint the source of the problem whereas ad hoc assertions in code will only identify a symptom of a type error; (4) the programmer does not have to litter type assertions in a program written in a language with an ESCTS; (5) an ESCTS provides optimization opportunities that would otherwise be unavailable to the compiler; (6) there will be cases where the ESCTS requires one to code around a constraint that is hard/impossible to express in the ESCTS (the more expressive the type system, the smaller the set of such cases will be.)
However,
(7) Developing reliable software also requires extensive testing to
detect bugs other than type errors, and
(8) These tests will usually detect most of the bugs that static
type checking would have detected.
So the *marginal* benefit of static type checking is reduced, unless you
weren't otherwise planning to test your code very well.
BTW, is (3) really justified? My (admittedly old) experience with ML
was that type errors can be rather hard to track back to their sources.
Paul
Joachim Durchholz wrote: Or are you talking about system evolution and maintenance? That would be an entirely new aspect in the discussion, and you should properly forewarn us so that we know for sure what you're talking about.
Did I forget to mention this in the specifications? Sorry. ;)
Yes, I want my software to be adaptable to unexpected circumstances.
(I can't give you a better specification, by definition.)
Pascal
--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
Ralph Becket wrote: Pascal Costanza <co******@web.de> wrote in message news:<bn**********@newsreader2.netcologne.de>...
Ralph Becket wrote:
This is utterly bogus. If you write unit tests beforehand, you are already pre-specifying the interface that the code to be tested will present.
I fail to see how dynamic typing can confer any kind of advantage here. Read the literature on XP.
What, all of it?
Why not just enlighten me as to the error you see in my contention about writing unit tests beforehand?
Maybe we are talking at cross-purposes here. I didn't know about ocaml
not requiring target code to be present in order to have a test suite
acceptable by the compiler. I will need to take a closer look at this. For one thing, type declarations *cannot* become out-of-date (as comments can and often do) because a discrepancy between type declaration and definition will be immediately flagged by the compiler.
The same holds for assertions as soon as they are run by the test suite.
That is not true unless your test suite is bit-wise exhaustive.
Assertions cannot become out-of-date. If an assertion doesn't hold
anymore, it will be flagged by the test suite. I don't think you understand much about language implementation.
...and I don't think you understand much about dynamic compilation. Have you ever checked some not-so-recent-anymore work about, say, the HotSpot virtual machine?
Feedback directed optimisation and dynamic FDO (if that is what you are suggesting is an advantage of HotSpot) are an implementation technology and hence orthogonal to the language being compiled.
On the other hand, if you are not referring to FDO, it's not clear to me what relevance HotSpot has to the point under discussion.
Maybe we both understand language implementation, and it is irrelevant? A strong, expressive, static type system provides for optimisations that cannot be done any other way. These optimizations alone can be expected to make a program several times faster. For example:
You are only talking about micro-efficiency here. I don't care about that, my machine is fast enough for a decent dynamically typed language.
Speedups (and resource consumption reduction in general) by (in many cases) a factor or two or more constitute "micro-efficiency"?
Yes. Since this kind of efficiency is just one of many factors when
developing software, it might not be the most important one and might be
outweighed by advantages a certain loss of efficiency buys you elsewhere.
The difference between an untyped program that doesn't work (it produces the wrong answer) and a dynamically typed program with a type bug (it may throw an exception) is so marginal that I'm tempted to lump them both in the same boat.
Well, but that's a wrong perspective. The one that throws an exception
can be corrected and then continued exactly at the point in the
execution path where the exception was thrown. [...] The burden of proof is on the one who proposes a solution.
What? You're the one claiming that productivity (presumably in the sense of leading to a working, efficient, reliable, maintainable piece of code) is enhanced by using languages that *do not tell you at compile time when you've made a mistake*!
No, other people are claiming that one should _always_ use static type
systems, and my claim is that there are situations in which a dynamic
type system is better.
If you claim that something (anything) is _always_ better, you better
have a convincing argument that _always_ holds.
I have never claimed that dynamic type systems are _always_ better.
Pascal
--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
Ralph Becket wrote: STATICALLY TYPED: the compiler carries out a proof that no value of the wrong type will ever be passed to a function expecting a different type, anywhere in the program.
Big deal. From Robert C. Martin: http://www.artima.com/weblogs/viewpost.jsp?thread=4639
"I've been a statically typed bigot for quite a few years....I scoffed
at the smalltalkers who whined about the loss of flexibility. Safety,
after all, was far more important than flexibility -- and besides, we
can keep our software flexible AND statically typed, if we just follow
good dependency management principles.
"Four years ago I got involved with Extreme Programming. ...
"About two years ago I noticed something. I was depending less and less
on the type system for safety. My unit tests were preventing me from
making type errors. The more I depended upon the unit tests, the less I
depended upon the type safety of Java or C++ (my languages of choice).
"I thought an experiment was in order. So I tried writing some
applications in Python, and then Ruby (well known dynamically typed
languages). I was not entirely surprised when I found that type issues
simply never arose. My unit tests kept my code on the straight and
narrow. I simply didn't need the static type checking that I had
depended upon for so many years.
"I also realized that the flexibility of dynamically typed langauges
makes writing code significantly easier. Modules are easier to write,
and easier to change. There are no build time issues at all. Life in a
dynamically typed world is fundamentally simpler.
"Now I am back programming in Java because the projects I'm working on
call for it. But I can't deny that I feel the tug of the dynamically
typed languages. I wish I was programming in Ruby or Python, or even
Smalltalk.
"Does anybody else feel like this? As more and more people adopt test
driven development (something I consider to be inevitable) will they
feel the same way I do? Will we all be programming in a dynamically
typed language in 2010?"
Lights out for static typing.
kenny
-- http://tilton-technology.com
What?! You are a newbie and you haven't answered my: http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey
Pascal Costanza <co******@web.de> wrote: Dirk Thierbach wrote: You cannot take an arbitrary language and attach a good static type system to it. Type inference will be much too difficult, for example. There's a fine balance between language design and a good type system that works well with it.
Right. As I said before, you need to reduce the expressive power of the language.
Maybe that's where the problem is. One doesn't need to reduce the
"expressive power". I don't know your particular application, but what
you seem to need is the ability to dynamically change the program
execution. There's more than one way to do that. And MOPs (like
macros) are powerful tools and sometimes quite handy, but it's also
easy to shoot yourself severely in the foot with MOPs if you're
not careful, and often there are better solutions than using MOPs (for
example, appropriate flexible datatypes).
I may be wrong, but I somehow have the impression that it is difficult
to see other ways to solve a problem if you haven't done it in that
way at least once. So you see that with different tools, you cannot do
it in exactly the same way as with the old tools, and immediately you
start complaining that the new tools have "less expressive power",
just because you don't see that you have to use them in a different
way. The "I can do lot of things with macros in Lisp that are
impossible to do in other languages" claim seems to have a similar
background.
I could complain that Lisp or Smalltalk have "less expressive power"
because I cannot declare algebraic datatypes properly, I don't have
pattern matching to use them efficiently, and there is no automatic
test generation (i.e., type checking) for my datatypes. But there
are ways to work around this, so when programming in Lisp or Smalltalk,
I do it in the natural way that is appropriate for these languages,
instead of wasting my time with silly complaints.
The only way out is IMHO to learn as many languages as possible, and
to learn as many alternative styles of solving problems as possible.
Then pick the one that is appropriate, and don't say "this way has
most expressive power, all others have less". In general, this will
be just wrong.
- Dirk
Pascal Costanza <co******@web.de> wrote: No, other people are claiming that one should _always_ use static type systems, and my claim is that there are situations in which a dynamic type system is better.
If you claim that something (anything) is _always_ better, you better have a convincing argument that _always_ holds.
I have never claimed that dynamic type systems are _always_ better.
To me, it certainly looked like you did in the beginning. Maybe your
impression that other people say that one should always use static
type systems is a similar misinterpretation?
Anyway, formulations like "A has less expressive power than B" are very
close to "B is always better than A". It's probably a good idea to
avoid such formulations if this is not what you mean.
- Dirk
Pascal Costanza <co******@web.de> wrote: Remi Vanicat wrote:
> In a statically typed language, when I write a test case that > calls a specific method, I need to write at least one class that > implements at least that method, otherwise the code won't > compile.
Not in OCaml. OCaml is statically typed.
It does the verification when you call the test. Let me explain: let f x = x#foo
which is a function taking an object x and calling its method foo, even if there is no class having such a method.
When sometime later you do a:
f bar
then, and only then, does the compiler verify that the bar object has a foo method.
BTW, the same thing is true for any language with type inference. In
Haskell, there are no methods or objects. But to test a function, you
can write
test_func f = if (f 1 == 1) && (f 2 == 42) then "ok" else "fail"
The compiler will infer that test_func has type
test_func :: (Integer -> Integer) -> String
(I am cheating a bit, because actually it will infer a more general type),
so you can use it to test any function of type Integer->Integer, regardless
of whether you have written it already or not.
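To make this concrete, here is a self-contained version that loads into
GHC as-is (the "candidate" implementation is a made-up name, added only
so the sketch has something to exercise):

-- The test, written first; its type is inferred as stated above.
test_func :: (Integer -> Integer) -> String
test_func f = if (f 1 == 1) && (f 2 == 42) then "ok" else "fail"

-- A candidate implementation, written afterwards to make the test pass.
candidate :: Integer -> Integer
candidate 1 = 1
candidate _ = 42

main :: IO ()
main = putStrLn (test_func candidate)   -- prints "ok"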
Doesn't this mean that the occurrence of such compile-time errors is only delayed, in the sense that when the test suite grows the compiler starts to issue type errors?
As long as you parameterize over the functions (or objects) you want to
test, there'll be no compile-time errors. That's what functional
programming and type inference are good for: You can abstract everything
away just by making it an argument. And you should know that, since
you say that you know what modern type-systems can do.
But the whole case is moot anyway, IMHO: You write the tests because
you want them to fail until you have written the correct code that
makes them pass, and it is not acceptable (especially if you're doing
XP) to continue as long as you have failing tests. You have to do the
minimal edit to make all the tests pass *right now*, not later on.
It's the same with compile-time type errors. The only difference is
that they happen at compile-time, not at test-suite run-time, but the
necessary reaction is the same: Fix your code so that all tests (or
the compiler-generated type "tests") pass. Then continue with the next
step.
I really don't see why the one should be annoying to you while you
strongly prefer the other. They're really just the same thing. Just imagine
that you run your test suite automatically when you compile your
program.
- Dirk
Kenny Tilton <kt*****@nyc.rr.com> wrote: Big deal. From Robert C. Martin:
http://www.artima.com/weblogs/viewpost.jsp?thread=4639
"I've been a statically typed bigot for quite a few years....I scoffed at the smalltalkers who whined about the loss of flexibility. Safety, after all, was far more important than flexibility -- and besides, we can keep our software flexible AND statically typed, if we just follow good dependency management principles.
"Four years ago I got involved with Extreme Programming. ...
"About two years ago I noticed something. I was depending less and less on the type system for safety. My unit tests were preventing me from making type errors. The more I depended upon the unit tests, the less I depended upon the type safety of Java or C++ (my languages of choice).
Note that he is speaking about languages with very bad type systems.
As has been said in this thread a few times, there are statically
typed languages and there are statically typed languages. Those two
can differ substantially from each other.
Here's a posting from Richard MacDonald in comp.software.extreme-programming,
MID <Xn**********************************@204.127.36.1>:
: Eliot, I work with a bunch of excellent programmers who came from AI to
: Smalltalk to Java. We despise Java. We love Smalltalk. Some months ago we
: took a vote and decided that we were now more productive in Java than we
: had ever been in Smalltalk. The reason is the Eclipse IDE. It more than
: makes up for the lousy, verbose syntax of Java. We find that we can get
: Eclipse to write much of our code for us anyway.
:
: Smalltalk is superior in getting something to work fast. But refactoring
: takes a toll on a dynamically typed language because it doesn't provide
: as much information to the IDE as does a statically-typed language (even
: a bad one). Let's face it. If you *always* check callers and implementors
: in Smalltalk, you can catch most of the changes. But sometimes you
: forget. With Eclipse, you can skip this step and it still lights up every
: problem with a big X and helps you refactor to fix it.
:
: In Smalltalk, I *needed* unit tests because Smalltalk allowed me to be
: sloppy. In Eclipse, I can get away without writing unit tests and my code
: miraculously often works the first time I get all those Xs eliminated.
:
: Ok, I realize I have not addressed your question yet...
:
: No question but that a "crappy statically typed" (*) language can get you
: into a corner where you're faced with lousy alternatives. But say I
: figure out a massive refactoring step that gets me out of it. In
: Smalltalk, I would probably fail without a bank of unit tests behind me.
: In Eclipse, I could probably make that refactoring step in less time and
: with far greater certainty that it is correct. I've done it before without
: the safety net of tests and been successful. No way I would ever have
: been able to do that as efficiently in Smalltalk. (I once refactored my
: entire Smalltalk app in 3 days and needed every test I had ever written.
: I have not done the equivalent in Java, but I have complete confidence I
: could do it just as well if not much better.)
:
: As far as productivity, we still write unit tests. But unit test
: maintenance takes a lot of time. In Smalltalk, I would spend 30% of my
: time coding within the tests. I tested at all levels, i.e., low-level,
: medium, and integration, since it paid off when searching for bugs. But
: 30% is too much. With Eclipse, we're able to write good code with just a
: handful of high-level tests. Often we simply write the answer as a test
: and do the entire app with this one test. The reason is once again that
: the IDE is visually showing us right where we broke our code and we don't
: have to run tests to see it.
:
: (*) I suggest we use 3 categories: (1) dynamically typed, (2) statically
: typed, (3) lousy statically typed. Into the latter category, toss Java
: and C++. Into (2), toss some of the functional languages; they're pretty
: slick. Much of the classic typing wars are between dynamic-typists
: criticizing (3) vs. static-typists working with (2).
:
: P.S. I used to be one of those rabid dynamic defenders. I'm a little
: chastened and wiser now that I have a fantastic IDE in my toolkit.
- Dirk
Dirk Thierbach wrote: Pascal Costanza <co******@web.de> wrote:
Remi Vanicat wrote:
>>In a statically typed language, when I write a test case that >>calls a specific method, I need to write at least one class that >>implements at least that method, otherwise the code won't >>compile. Not in OCaml. OCaml is statically typed. It does the verification when you call the test. Let me explain: let f x = x#foo
which is a function taking an object x and calling its method foo, even if there is no class having such a method.
When sometime later you do a:
f bar
then, and only then, does the compiler verify that the bar object has a foo method.
BTW, the same thing is true for any language with type inference. In Haskell, there are no methods or objects. But to test a function, you can write
test_func f = if (f 1 == 1) && (f 2 == 42) then "ok" else "fail"
The compiler will infer that test_func has type
test_func :: (Integer -> Integer) -> String
(I am cheating a bit, because actually it will infer a more general type), so you can use it to test any function of type Integer->Integer, regardless of whether you have written it already or not.
OK, I have got it. No, that's not what I want. What I want is:
testxyz obj = (concretemethod obj == 42)
Does the code compile as long as concretemethod doesn't exist?
Doesn't this mean that the occurrence of such compile-time errors is only delayed, in the sense that when the test suite grows the compiler starts to issue type errors?
As long as you parameterize over the functions (or objects) you want to test, there'll be no compile-time errors. That's what functional programming and type inference are good for: You can abstract everything away just by making it an argument. And you should know that, since you say that you know what modern type-systems can do.
Yes, I know that. I have misunderstood the claim. Does the code I
propose above work?
But the whole case is moot anyway, IMHO: You write the tests because you want them to fail until you have written the correct code that makes them pass, and it is not acceptable (especially if you're doing XP) to continue as long as you have failing tests. You have to do the minimal edit to make all the tests pass *right now*, not later on.
It's the same with compile-time type errors. The only difference is that they happen at compile-time, not at test-suite run-time, but the necessary reaction is the same: Fix your code so that all tests (or the compiler-generated type "tests") pass. Then continue with the next step.
The type system might test too many cases.
I really don't see why the one should be annoying to you while you strongly prefer the other. They're really just the same thing. Just imagine that you run your test suite automatically when you compile your program.
I don't compile my programs. Not as a distinct conscious step during
development. I write pieces of code and execute them immediately. It's
much faster to run the code than to explicitly compile and/or run a type
checker.
This is a completely different style of developing code.
Pascal
--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
Dirk Thierbach wrote: Pascal Costanza <co******@web.de> wrote:
No, other people are claiming that one should _always_ use static type systems, and my claim is that there are situations in which a dynamic type system is better.
If you claim that something (anything) is _always_ better, you better have a convincing argument that _always_ holds.
I have never claimed that dynamic type systems are _always_ better. To me, it certainly looked like you did in the beginning. Maybe your impression that other people say that one should always use static type systems is a similar misinterpretation?
Please recheck my original response to the OP of this subthread. (How
much more "in the beginning" can one go?)
Anyway, formulations like "A has less expressive power than B" are very close to "B is always better than A". It's probably a good idea to avoid such formulations if this is not what you mean.
"less expressive power" means that there exist programs that work but
that cannot be statically typechecked. These programs objectively exist.
By definition, I cannot express them in a statically typed language.
On the other hand, you can clearly write programs in a dynamically typed
language that can still be statically checked if one wants to do that.
So the set of programs that can be expressed with a dynamically typed
language is objectively larger than the set of programs that can be
expressed with a statically typed language.
It's definitely a trade off - you take away some expressive power and
you get some level of safety in return. Sometimes expressive power is
more important than safety, and vice versa.
It's not my problem that you interpret some arbitrary other claim into
this statement.
Pascal
--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
Dirk Thierbach wrote: Pascal Costanza <co******@web.de> wrote:
Dirk Thierbach wrote:
You cannot take an arbitrary language and attach a good static type system to it. Type inference will be much too difficult, for example. There's a fine balance between language design and a good type system that works well with it.
Right. As I said before, you need to reduce the expressive power of the language.
Maybe that's where the problem is. One doesn't need to reduce the "expressive power". I don't know your particular application, but what you seem to need is the ability to dynamically change the program execution. There's more than one way to do that.
Of course there is more than one way to do anything. You can do
everything in assembler. The important point is: what are the convenient
ways to do these things? (And convenience is a subjective matter.)
Expressive power is not Turing equivalence.
I may be wrong, but I somehow have the impression that it is difficult to see other ways to solve a problem if you haven't done it in that way at least once.
No, you need several attempts to get used to a certain programming
style. These things don't fall from the sky. When you write your first
program in a new language, it is very likely that you a) try to imitate
what you have done in other languages you knew before and b) that you
don't know the standard idioms of the new language.
Mastering a programming language is a very long process.
So you see that with different tools, you cannot do it in exactly the same way as with the old tools, and immediately you start complaining that the new tools have "less expressive power", just because you don't see that you have to use them in a different way. The "I can do a lot of things with macros in Lisp that are impossible to do in other languages" claim seems to have a similar background.
No, you definitely can do a lot of things with macros in Lisp that are
impossible to do in other languages. There are papers that show this
convincingly. Try ftp://publications.ai.mit.edu/ai-pub...df/AIM-453.pdf for a
start. Then continue, for example, with some articles on Paul Graham's
website, or download and read his book "On Lisp".
I could complain that Lisp or Smalltalk have "less expressive power" because I cannot declare algebraic datatypes properly,
I don't see why this shouldn't be possible, but I don't know.
I don't have pattern matching to use them efficiently,
http://www.cliki.net/fare-matcher
and there is no automatic test generation (i.e., type checking) for my datatypes.
http://www.plt-scheme.org/software/mrflow/
The only way out is IMHO to learn as many languages as possible, and to learn as many alternative styles of solving problems as possible.
Right.
Pascal
--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
Kenny Tilton wrote: Ralph Becket wrote: STATICALLY TYPED: the compiler carries out a proof that no value of the wrong type will ever be passed to a function expecting a different type, anywhere in the program. Big deal.
Yes it is a very big deal. I suspect from your choice of words
you have a closed mind on this issue, so there's no point in me
wasting my time trying to explain why.
<snip quote from someone who doesn't understand static typing at
all if the references to Java and C++ are anything to go by>
Lights out for static typing.
That's complete bollocks. There are more than enough sufficiently
enlightened people to keep static typing alive and well, thank you
very much. If you chose to take advantage of it that's your loss.
Regards
--
Adrian Hey
In article <36**************************@posting.google.com>, ra**@cs.mu.oz.au (Ralph Becket) wrote: my********************@jpl.nasa.gov (Erann Gat) wrote in message
news:<my*************************************@192. 168.1.51>... No. The fallacy in this reasoning is that you assume that "type error" and "bug" are the same thing. They are not. Some bugs are not type errors, and some type errors are not bugs. In the latter circumstance simply ignoring them can be exactly the right thing to do.
Just to be clear, I do not believe "bug" => "type error". However, I do claim that "type error" (in reachable code) => "bug".
But that just begs the question of what you consider a type error. Does
the following code contain a type error?
(defun rsq (a b)
  "Return the square root of the sum of the squares of a and b"
  (sqrt (+ (* a a) (* b b))))
How about this one?
(defun rsq1 (a b)
  (or (ignore-errors (rsq a b)) 'FOO))
or:
(defun rsq2 (a b)
  (or (ignore-errors (rsq a b)) (error "Foo")))
Here's the way I see it: (1) type errors are extremely common;
In my experience they are quite rare.
(2) an expressive, statically checked type system (ESCTS) will identify almost all of these errors at compile time;
And then some. That's the problem.
(3) type errors flagged by a compiler for an ESCTS can pinpoint the source of the problem whereas ad hoc assertions in code will only identify a symptom of a type error;
Really? If there's a type mismatch how does the type system know if the
problem is in the caller or the callee?
(4) the programmer does not have to litter type assertions in a program written in a language with an ESCTS;
But he doesn't have to litter type assertions in a program written in a
language without an ESCTS either.
(5) an ESCTS provides optimization opportunities that would otherwise be unavailable to the compiler;
That is true. Whether this benefit outweighs the drawbacks is arguable.
(6) there will be cases where the ESCTS requires one to code around a constraint that is hard/impossible to express in the ESCTS (the more expressive the type system, the smaller the set of such cases will be.)
The question is whether the benefits of (2), (3), (4) and (5) outweigh the occasional costs of (6).
Yes, that's what it comes down to. There are both costs and benefits.
The balance probably tips one way in some circumstances, the other way in
others.
E.
Pascal Costanza wrote: "less expressive power" means that there exist programs that work but that cannot be statically typechecked. These programs objectively exist. By definition, I cannot express them in a statically typed language.
On the other hand, you can clearly write programs in a dynamically typed language that can still be statically checked if one wants to do that. So the set of programs that can be expressed with a dynamically typed language is objectively larger than the set of programs that can be expressed with a statically typed language.
Well, "can be expressed" is a very vague concept, as you noted yourself.
To rationalize the discussion on expressiveness, there is a nice
paper by Felleisen, "On the Expressive Power of Programming Languages"
which makes this terminology precise.
Anyway, you are right of course that any type system will take away some
expressive power (particularly the power to express bogus programs :-),
but also some sane ones, which is a debatable trade-off).
But you completely ignore the fact that it also adds expressive power at
another end! For one thing, by allowing you to encode certain invariants
in the types that you cannot express in another way. Furthermore, by
giving more knowledge to the compiler and hence allowing the language to
automate certain tedious things. Overloading is one obvious example
that increases expressive power in certain ways and crucially relies on
static typing.
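A standard illustration, using nothing beyond the Haskell Prelude: the
value produced by "read" depends on the type the context demands, so
this kind of overloading can only be resolved statically.

x :: Int
x = read "42"        -- parses an Int

y :: Double
y = read "42"        -- the very same call, now parsing a Double

main :: IO ()
main = print (x, y)  -- prints (42,42.0)

A dynamically typed language has no type context to consult here, so it
must either pick one behaviour or demand an explicit extra argument.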
So there is no inclusion, the "expressiveness" relation is unordered wrt
static vs dynamic typing.
- Andreas
--
Andreas Rossberg, ro******@ps.uni-sb.de
"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.
Pascal Costanza <co******@web.de> wrote: Dirk Thierbach wrote:
Of course there is more than one way to do anything. You can do everything in assembler. The important point is: what are the convenient ways to do these things? (And convenience is a subjective matter.)
Yes. The point is: It may be as convenient to do in one language as in
the other language. You just need a different approach.
No, you definitely can do a lot of things with macros in Lisp that are impossible to do in other languages.
We just had this discussion here, and I am not going to repeat it.
I know Paul Graham's website, and I know many examples of what you
can do with macros. Macros are a wonderful tool, but you really can
get most of what you can do with macros by using HOFs. There are
some things that won't work, the most important of which is that
you cannot force calculation at compile time, and you have to hope
that the compiler does it for you (ghc actually does it sometimes.)
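For instance, a user-defined conditional, which needs a macro in Lisp so
that only one branch is evaluated, is just an ordinary function in
Haskell thanks to lazy evaluation (a minimal sketch, standard Haskell
only):

-- Only the chosen branch is ever evaluated, because arguments are lazy.
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

main :: IO ()
main = putStrLn (myIf (1 < 2) "then-branch" "else-branch")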
- Dirk
Pascal Costanza <co******@web.de> wrote: Dirk Thierbach wrote:
OK, I have got it. No, that's not what I want. What I want is:
testxyz obj = (concretemethod obj == 42)
Does the code compile as long as concretemethod doesn't exist?
No. Does your test pass as long as concretemethod doesn't exist? It doesn't,
for the same reason. It's the same with compile-time type errors. The only difference is that they happen at compile-time, not at test-suite run-time, but the necessary reaction is the same: Fix your code so that all tests (or the compiler-generated type "tests") pass. Then continue with the next step.
The type system might test too many cases.
I have never experienced that, because every expression that is valid
code will have a proper type.
Can you think of an example (not in C++ or Java etc.) where the type
system may check too many cases?
I don't compile my programs. Not as a distinct conscious step during development. I write pieces of code and execute them immediately.
I know. I sometimes do the same with Haskell: I use ghc in interactive
mode, write a piece of code and execute it immediately (which means it
gets compiled and type checked). When it works, I paste it into
the file. If there was a better IDE, I wouldn't have to do that,
but even in this primitive way it works quite well.
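A typical round trip looks roughly like this (an illustrative session;
exact prompts and output depend on the ghc version):

Prelude> let square x = x * x
Prelude> square 5
25
Prelude> :t square
square :: Num a => a -> a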
It's much faster to run the code than to explicitly compile and/or run a type checker.
Unless your modules get very large, or you're in the middle of some
big refactoring, compiling or running the type checker is quite fast.
This is a completely different style of developing code.
I have known this style of developing code for quite some time :-)
- Dirk
Andreas Rossberg wrote: Pascal Costanza wrote:
"less expressive power" means that there exist programs that work but that cannot be statically typechecked. These programs objectively exist. By definition, I cannot express them in a statically typed language.
On the other hand, you can clearly write programs in a dynamically typed language that can still be statically checked if one wants to do that. So the set of programs that can be expressed with a dynamically typed language is objectively larger than the set of programs that can be expressed with a statically typed language. Well, "can be expressed" is a very vague concept, as you noted yourself. To rationalize the discussion on expressiveness, there is a nice paper by Felleisen, "On the Expressive Power of Programming Languages" which makes this terminology precise.
I have skimmed through that paper. It states the following in the
conclusion section:
"The most important criterion for comparing programming languages showed
that an increase in expressive power may destroy semantic properties of
the core language that programmers may have become accustomed to
(Theorem 3.14). Among other things, this invalidation of operational
laws through language extensions implies that there are now more
distinctions to be considered for semantic analyses of expressions in
the core language. On the other hand, the use of more expressive
languages seems to facilitate the programming process by making programs
more concise and abstract (Conciseness Conjecture). Put together, this
result says that
* an increase in expressive power is related to a decrease of the set of
``natural'' (mathematically appealing) operational equivalences."
This seems to be compatible with my point of view. (However, I am not
really sure.)
Anyway, you are right of course that any type system will take away some expressive power (particularly the power to express bogus programs :-), but also some sane ones, which is a debatable trade-off).
Thanks. ;)
But you completely ignore the fact that it also adds expressive power at another end! For one thing, by allowing you to encode certain invariants in the types that you cannot express in another way. Furthermore, by giving more knowledge to the compiler and hence allowing the language to automate certain tedious things.
I think you are confusing things here. It gets much clearer when you
separate compilation/interpretation from type checking, and see a static
type checker as a distinct tool.
The invariants that you write, or that are inferred by the type checker,
are expressions in a domain-specific language for static program
analysis. You can only increase the expressive power of that
domain-specific language by adding a more elaborate static type system.
You cannot increase the expressive power of the language that it reasons
about.
An increase of expressive power of the static type checker decreases the
expressive power of the target language, and vice versa.
As a sidenote, here is where Lisp comes into the game: Since Lisp
programs can easily reason about other Lisp programs, because there is
no distinction between programs and data in Lisp, it should be pretty
straightforward to write a static type checker for Lisp programs, and
include it in your toolset.
It should also be relatively straightforward to make this a
flexible type checker for which you can increase/decrease the level of
required conformance to the (a?) type system.
This would mean that you could have the benefits of both worlds: when
you need static type checking, you can add it. You can even enforce it
in a project, if the requirements are strict in this regard in a certain
setting. If the requirements are not so strict, you can relax the static
type soundness requirements, or maybe even go back to dynamic type checking.
In fact, such systems already seem to exist. I guess that's what soft
typing is good for, for example (see MrFlow). Other examples that come
to mind are Qi and ACL2.
Why would one want to switch languages for a single feature?
Note that this is just brainstorming. I don't know whether such an
approach can really work in practice. There are probably some nasty
details that are hard to solve.
Overloading is one obvious example that increases expressive power in certain ways and crucially relies on static typing.
Overloading relies on static typing? This is news to me. What do you mean?
So there is no inclusion, the "expressiveness" relation is unordered wrt static vs dynamic typing.
No, I don't think so.
Pascal
--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
"Pascal Costanza" <co******@web.de> wrote in message news:bn**********@f1node01.rhrz.uni-bonn.de... Expressive power is not Turing equivalence.
Agreed.
So, does anyone have a formal definition of "expressive power?"
Metrics? Examples? Theoretical foundations?
It seems like a hard concept to pin down. "Make it possible
to write programs that contain as few characters as possible"
strikes me as a really bad definition; it suggests that
bzip2-encoded C++ would be really expressive.
Marshall
Dirk Thierbach wrote: Pascal Costanza <co******@web.de> wrote:
Dirk Thierbach wrote:
OK, I have got it. No, that's not what I want. What I want is:
testxyz obj = (concretemethod obj == 42)
Does the code compile as long as concretemethod doesn't exist?
No. Does your test pass as long as concretemethod doesn't exist? It doesn't, for the same reason.
As long as I am writing only tests, I don't care. When I am in the mood
of writing tests, I want to write as many tests as possible, without
having to think about whether my code is acceptable for the static type
checker or not. It's the same with compile-time type errors. The only difference is that they happen at compile-time, not at test-suite run-time, but the necessary reaction is the same: Fix your code so that all tests (or the compiler-generated type "tests") pass. Then continue with the next step.
The type system might test too many cases.
I have never experienced that, because every expression that is valid code will have a proper type.
Can you think of an example (not in C++ or Java etc.) where the type system may check too many cases?
Here is one:
(defun f (x)
  (unless (< x 200)
    (cerror "Type another number"
            "You have typed a wrong number")
    ;; if the user continues past the error, use the freshly read
    ;; number's result instead of falling through with the old x
    (return-from f (f (read))))
  (* x 2))
Look up http://www.lispworks.com/reference/H...ror.htm#cerror
before complaining.
Pascal
--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
"Pascal Costanza" <co******@web.de> wrote in message news:bn**********@newsreader2.netcologne.de... See the example of downcasts in Java.
Downcasts in Java are not a source of problems.
They may well be indicative of a theoretical
hole (in fact I'm pretty sure they are), but they
are not something that actually causes
problems in the real world.
Marshall
Pascal Costanza <co******@web.de> wrote: I don't have pattern matching to use them efficiently,
http://www.cliki.net/fare-matcher
Certainly an improvement, but no way to declare datatypes (i.e.,
pattern constructors) yet:
There also need to be improvements to the infrastructure to build
pattern constructors, so that you may build pattern constructors and
destructors at the same time (much like you do when you define ML
types).
The following might also be a show-stopper (I didn't test it,
but it doesn't look good):
; FIXME: several branches of an "or" pattern can't share variables;
; variables from all branches are visible in guards and in the body,
; and previous branches may have bound variables before failing.
; This is rather bad.
The following comment is also interesting:
Nobody reported using the matcher -- ML/Erlang style pattern
matching seemingly isn't popular with LISP hackers.
Again, the way to get the benefits of "more expressive languages"
like ML in Lisp seems to be to implement part of them on top of Lisp :-)
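For comparison, the ML-style facility being emulated bundles constructor
declaration and pattern matching into one package; a minimal Haskell
sketch:

-- Declaring the datatype declares the pattern constructors too.
data Shape = Circle Double | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

main :: IO ()
main = mapM_ (print . area) [Circle 1, Rect 2 3]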
and there is no automatic test generation (i.e., type checking) for my datatypes. http://www.plt-scheme.org/software/mrflow/
I couldn't find any details on this page (it says "coming soon"), but
the name suggests a dataflow analyzer. As I have already said, the
problem with attaching static typing and inference to an arbitrary
language is that it is difficult to get it working without changing
the language design. Pure functional features make type inference
easy; imperative features make it hard. Full dataflow analysis might
help, but I'd have to look more closely to see if it works out.
- Dirk
Andreas Rossberg <ro******@ps.uni-sb.de> wrote: Pascal Costanza wrote:
Anyway, you are right of course that any type system will take away some expressive power (particularly the power to express bogus programs :-) but also some sane ones, which is a debatable trade-off).
Yep. It turns out that you take away lots of bogus programs, and the
sane programs that are taken away are in most cases at least questionable
(they will be mostly of the sort: There is a type error in some execution
branch, but this branch will never be reached), and can usually be
expressed as equivalent programs that will pass.
"Taking away possible programs" is not the same as "decreasing expressive
power".
So there is no inclusion, the "expressiveness" relation is unordered wrt static vs dynamic typing.
That's the important point.
- Dirk
Dirk Thierbach wrote: Andreas Rossberg <ro******@ps.uni-sb.de> wrote:
Pascal Costanza wrote:
Anyway, you are right of course that any type system will take away some expressive power (particularly the power to express bogus programs :-) but also some sane ones, which is a debatable trade-off).
Yep. It turns out that you take away lots of bogus programs, and the sane programs that are taken away are in most cases at least questionable (they will be mostly of the sort: There is a type error in some execution branch, but this branch will never be reached)
No. Maybe you believe me when I quote Ralf Hinze, one of the designers
of Haskell:
"However, type systems are always conservative: they must necessarily
reject programs that behave well at run time."
found at http://web.comlab.ox.ac.uk/oucl/rese...ides/hinze.pdf
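A minimal illustration of that statement, deliberately chosen so that
GHC rejects it even though the offending branch could never run:

main :: IO ()
main = print (if True then 1 else "one")
-- rejected at compile time: both branches of an "if" must have the
-- same type, although only the first branch would ever be evaluated

The dynamically typed equivalent runs without complaint.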
Could you _please_ just accept that statement? That's all I am asking for!
Pascal
--
Pascal Costanza University of Bonn
mailto:co******@web.de Institute of Computer Science III http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)
"Marshall Spight" <ms*****@dnai.com> writes: "Pascal Costanza" <co******@web.de> wrote in message news:bn**********@f1node01.rhrz.uni-bonn.de... Expressive power is not Turing equivalence.
Agreed.
So, does anyone have a formal definition of "expressive power?" Metrics? Examples? Theoretical foundations? http://citeseer.nj.nec.com/felleisen90expressive.html
It's a start.
"Pascal Bourguignon" <sp**@thalassa.informatimago.com> wrote in message news:87************@thalassa.informatimago.com... The only untyped languages I know are assemblers. (ISTR that even intercal can't be labelled "untyped" per se).
Are we speaking about assembler here?
BCPL!
Marshall
"Pascal Costanza" <co******@web.de> wrote in message news:bn**********@newsreader2.netcologne.de... Marshall Spight wrote:
"Pascal Costanza" <co******@web.de> wrote in message news:bn**********@newsreader2.netcologne.de...
I wouldn't count the use of java.lang.Object as a case of dynamic typing. You need to explicitly cast objects of this type to some class in order to make useful method calls. You only do this to satisfy the static type system. (BTW, this is one of the sources for potential bugs that you don't have in a decent dynamically typed language.) Huh? The explicit-downcast construct present in Java is the programmer saying to the compiler: "trust me; you can accept this type of parameter." In a dynamically-typed language, *every* call is like this! So if this is a source of errors (which I believe it is) then dynamically-typed languages have this potential source of errors with every function call, vs. statically-typed languages which have them only in those few cases where the programmer explicitly puts them in.
What can happen in Java is the following:
- You might accidentally use the wrong class in a class cast.
- For the method you try to call, there happens to be a method with the same name and signature in that class.
In this situation, the static type system would be happy, but the code is buggy.
How is this any different a bug than if the programmer types the
wrong name of the method he wants to call? This doesn't demonstrate
anything that I can figure.
Here's a logically identical argument:
In a typed language, a programmer might type "a-b" when he meant
to type "a+b". The type system would be happy, but the code will
be buggy.
Well, yes, that's true.
My claim is: explicit downcasting is a technique, manually specified
by the programmer, that weakens the guarantees the compiler makes
to be exactly as weak as those guarantees made by a dynamically
typed language.
So I can see a valid complaint about the extra typing needed, but
I see no validity to the claim that this makes a statically typed
language any more bug-prone than a dynamically typed language.
Indeed, it gives the statically-typed languages *exactly the same*
degree of bug-proneness as a dynamically typed language for the
scope of a single function call, after which the language returns
to being strikingly less prone to that specific class of bug. (In fact,
completely immune.)
In a decent dynamically typed language, you have proper name space management, so that a method cannot ever be defined for a class only by accident.
How can a method be defined "by accident?" I can't figure out what
you're trying to say.
Marshall