Bytes IT Community

Python from Wise Guy's Viewpoint

THE GOOD:

1. pickle

2. simplicity and uniformity

3. big library (bigger would be even better)

THE BAD:

1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
90% of the code is function applications. Why not make it convenient?

2. Statements vs Expressions business is very dumb. Try writing
a = if x :
y
else: z

3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.

4. splintering of the language: you have the inefficient main language,
and you have a different dialect being developed that needs type
declarations. Why not allow type declarations in the main language
instead as an option (Lisp does it)

5. Why do you need "def" ? In Haskell, you'd write
square x = x * x

6. Requiring "return" is also dumb (see #5)

7. Syntax and semantics of "lambda" should be identical to
function definitions (for simplicity and uniformity)

8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)

9. Syntax for arrays is also bad [a (b c d) e f] would be better
than [a, b(c,d), e, f]

420

P.S. If someone can forward this to python-dev, you can probably save some
people a lot of soul-searching
Jul 18 '05 #1
467 Replies


mi*****@ziplip.com <mi*****@ziplip.com> writes:
8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)


Yes. By deleting a name from the namespace. You'd better read a
tutorial; it will save you some time.
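In concrete terms, "deleting a name" looks like this (a minimal sketch in modern Python 3 syntax; the thread itself predates it, but the behaviour is the same):

```python
def square(x):
    return x * x

print(square(3))        # the name "square" resolves to the function object

del square              # unbind the name from the module namespace
try:
    square(3)           # the lookup now fails
except NameError as e:
    print("gone:", e)
```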

--
Jarek Zgoda
Registered Linux User #-1
http://www.zgoda.biz/ JID:ja***@jabberpl.org http://zgoda.jogger.pl/
Jul 18 '05 #2

> mi*****@ziplip.com <mi*****@ziplip.com> writes:
8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)

Jarek Zgoda <jz****@gazeta.usun.pl> writes:
Yes. By deleting a name from the namespace. You'd better read a
tutorial; it will save you some time.


Excuse my ignorance w.r.t. Python, but to me this seems to imply that
one of these statements about functions in Python is true:

1. Function names (strings) are resolved (looked up in the
namespace) each time a function is called.

2. You can't really undefine a function such that existing calls to
the function will be affected.

Is this (i.e. one of these) correct?

--
Frode Vatvedt Fjeld
Jul 18 '05 #3

Frode Vatvedt Fjeld wrote:
mi*****@ziplip.com <mi*****@ziplip.com> writes:
8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)


Jarek Zgoda <jz****@gazeta.usun.pl> writes:
Yes. By deleting a name from the namespace. You'd better read a
tutorial; it will save you some time.


Excuse my ignorance w.r.t. Python, but to me this seems to imply that
one of these statements about functions in Python is true:

1. Function names (strings) are resolved (looked up in the
namespace) each time a function is called.

2. You can't really undefine a function such that existing calls to
the function will be affected.

Is this (i.e. one of these) correct?


Both are correct, in essence. (And depending on how one interprets
your second point, which is quite ambiguous.)

-Peter
Jul 18 '05 #4

Warning! Troll alert! I missed the three newsgroup cross-post
the first time, so I thought this might be a semi-serious question.

-Peter

mi*****@ziplip.com wrote:

THE GOOD:

1. pickle

2. simplicity and uniformity

3. big library (bigger would be even better)

THE BAD:

1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
90% of the code is function applications. Why not make it convenient?

2. Statements vs Expressions business is very dumb. Try writing
a = if x :
y
else: z

3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.

4. splintering of the language: you have the inefficient main language,
and you have a different dialect being developed that needs type
declarations. Why not allow type declarations in the main language
instead as an option (Lisp does it)

5. Why do you need "def" ? In Haskell, you'd write
square x = x * x

6. Requiring "return" is also dumb (see #5)

7. Syntax and semantics of "lambda" should be identical to
function definitions (for simplicity and uniformity)

8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)

9. Syntax for arrays is also bad [a (b c d) e f] would be better
than [a, b(c,d), e, f]

420

P.S. If someone can forward this to python-dev, you can probably save some
people a lot of soul-searching

Jul 18 '05 #5

On Sun, 19 Oct 2003 15:24:18 +0200, Frode Vatvedt Fjeld <fr****@cs.uit.no>
wrote:
mi*****@ziplip.com <mi*****@ziplip.com> writes:
8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)


Jarek Zgoda <jz****@gazeta.usun.pl> writes:
Yes. By deleting a name from the namespace. You'd better read a
tutorial; it will save you some time.


Excuse my ignorance w.r.t. Python, but to me this seems to imply that
one of these statements about functions in Python is true:

1. Function names (strings) are resolved (looked up in the
namespace) each time a function is called.

2. You can't really undefine a function such that existing calls to
the function will be affected.

Is this (i.e. one of these) correct?

Neither is completely correct. Functions are internally dealt with
using dictionaries (pythonese for hash-tables). The bytecode compiler
gives the name an ID, and the lookup is done through a dictionary.
Removing the function from the dictionary removes the function.
--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
Jul 18 '05 #6

Peter Hansen <pe***@engcorp.com> writes:
Both are correct, in essence. (And depending on how one interprets
your second point, which is quite ambiguous.)


Frode Vatvedt Fjeld wrote:
1. Function names (strings) are resolved (looked up in the
namespace) each time a function is called.
But this implies a rather enormous overhead in calling a function,
doesn't it?
2. You can't really undefine a function such that existing calls to
the function will be affected.


What I meant was that if you do the following, in sequence:

a. Define function foo.
b. Define function bar, that calls function foo.
c. Undefine function foo

Now, if you call function bar, will you get an "undefined function"
exception? But if point 1 really is true, I'd expect to get an
"undefined name" exception or some such.

--
Frode Vatvedt Fjeld
Jul 18 '05 #7

(I'm replying only because I made the mistake of replying to a
triply-crossposted thread which was, in light of that, obviously
troll-bait. I don't plan to continue the thread except to respond
to Frode's questions. Apologies to c.l.p readers.)

Frode Vatvedt Fjeld wrote:

Peter Hansen <pe***@engcorp.com> writes:
Both are correct, in essence. (And depending on how one interprets
your second point, which is quite ambiguous.)


Frode Vatvedt Fjeld wrote:
1. Function names (strings) are resolved (looked up in the
namespace) each time a function is called.
But this implies a rather enormous overhead in calling a function,
doesn't it?


"Enormous" is of course relative. Yes, the overhead is more than in,
say C, but I think it's obvious (since people program useful software
using Python) that the overhead is not unacceptably high?

As John Thingstad wrote in his reply, there is a dictionary lookup
involved and dictionaries are extremely fast (yes, yet another relative
term... imagine that!) in Python so that part of the overhead is
relatively unimportant. There is actually other overhead which is
involved (e.g. setting up the stack frame which is, I believe, much larger
than the trivial dictionary lookup).

Note also that if you have a reference to the original function in,
say, a local variable, removing the original doesn't really remove it,
but merely makes it unavailable by the original name. The local variable
can still be used to call it.
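A short sketch of that point:

```python
def foo():
    return "original"

alias = foo        # a second reference to the same function object
del foo            # unbind the original name...
print(alias())     # ...but the object survives and is still callable
```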
2. You can't really undefine a function such that existing calls to
the function will be affected.
What I meant was that if you do the following, in sequence:

a. Define function foo.
b. Define function bar, that calls function foo.
c. Undefine function foo

Now, if you call function bar, will you get an "undefined function"
exception? But if point 1 really is true, I'd expect to get an
"undefined name" exception or some such.


See below.

Python 2.3.1 (#47, Sep 23 2003, 23:47:32) [MSC v.1200 32 bit (Intel)] on win32
>>> def foo():
...     print 'in foo'
...
>>> def bar():
...     foo()
...
>>> bar()
in foo
>>> del foo
>>> bar()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "<stdin>", line 2, in bar
NameError: global name 'foo' is not defined

On the other hand, as I said above, one can keep a reference to the original.
If I'd done "baz = foo" just before the "del foo", then I could easily have
done "baz()" and the original method would still have been called.

Python is dynamic. Almost everything is looked up in dictionaries at
runtime like this. That's its nature, and much of its power (as with
the many other such languages).

-Peter
Jul 18 '05 #8

Peter Hansen <pe***@engcorp.com> writes:
Warning! Troll alert! I missed the three newsgroup cross-post
the first time, so I thought this might be a semi-serious question.


That's why I set FUT to this group.

--
Jarek Zgoda
Registered Linux User #-1
http://www.zgoda.biz/ JID:ja***@jabberpl.org http://zgoda.jogger.pl/
Jul 18 '05 #9

John Thingstad <jo************@chello.no> writes:
[..] Functions are internally dealt with using dictionaries
(pythonese for hash-tables). The bytecode compiler gives the name an
ID, and the lookup is done using a dictionary. Removing the function
from the dictionary removes the function.


So to get from the ID to the bytecode, you go through a dictionary?
And the mapping from name to ID happens perhaps when the caller is
bytecode-compiled?

--
Frode Vatvedt Fjeld
Jul 18 '05 #10

Frode Vatvedt Fjeld <fr****@cs.uit.no> writes:
[..] Functions are internally dealt with using dictionaries
(pythonese for hash-tables). The bytecode compiler gives the name an
ID, and the lookup is done using a dictionary. Removing the function
from the dictionary removes the function.


So to get from the ID to the bytecode, you go through a dictionary?
And the mapping from name to ID happens perhaps when the caller is
bytecode-compiled?


Hah, you wish. If the function name is global, there is a dictionary
lookup, at runtime, on every call.

def square(x):
    return x*x

def sum_of_squares(n):
    sum = 0
    for i in range(n):
        sum += square(i)
    return sum

print sum_of_squares(100)

looks up "square" in the dictionary 100 times. An optimization:

def sum_of_squares(n):
    sum = 0
    sq = square
    for i in range(n):
        sum += sq(i)
    return sum

Here, "sq" is a local copy of "square". It lives in a stack slot in
the function frame, so the dictionary lookup is avoided.
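The difference can be measured with the standard timeit module (a rough sketch in modern Python; absolute numbers vary by machine, and the gap is small):

```python
import timeit

def square(x):
    return x * x

def global_lookup(n):
    total = 0
    for i in range(n):
        total += square(i)   # "square" looked up in the globals dict each pass
    return total

def local_alias(n):
    total = 0
    sq = square              # bound once into a local slot
    for i in range(n):
        total += sq(i)       # local lookup: an index into the frame
    return total

assert global_lookup(100) == local_alias(100)

t_glob = timeit.timeit(lambda: global_lookup(1000), number=100)
t_loc = timeit.timeit(lambda: local_alias(1000), number=100)
print(t_glob, t_loc)         # t_loc is usually a little smaller
```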
Jul 18 '05 #11

Oh, you're trolling for an inter-language flame fest...
well, anyway:
3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.


Multimethods suck.

The longer answer: Multimethods have modularity issues (if whatever
domain they're dispatching on can be extended by independent developers:
different developers may extend the dispatch domain of a function in
different directions, and leave undefined combinations; standard
dispatch strategies as I've seen in some Lisps just cover up the
undefined behaviour, with a slightly less than 50% chance of being correct).

Regards,
Jo

Jul 18 '05 #12

On Sun, 19 Oct 2003 20:01:03 +0200, Joachim Durchholz wrote:
The longer answer: Multimethods have modularity issues (if whatever domain
they're dispatching on can be extended by independent developers:
different developers may extend the dispatch domain of a function in
different directions, and leave undefined combinations;


This doesn't matter until you provide an equally powerful mechanism which
fixes that. Which is it?

--
__("< Marcin Kowalczyk
\__/ qr****@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Jul 18 '05 #13

Frode Vatvedt Fjeld wrote:
...
Excuse my ignorance w.r.t. Python, but to me this seems to imply that
one of these statements about functions in Python is true:

1. Function names (strings) are resolved (looked up in the
namespace) each time a function is called.

2. You can't really undefine a function such that existing calls to
the function will be affected.

Is this (i.e. one of these) correct?


Both, depending on how you define "existing call". A "call" that IS
in fact existing, that is, pending on the stack, will NOT in any way
be "affected"; e.g.:

def foo():
    print 'foo, before'
    remove_foo()
    print 'foo, after'

def remove_foo():
    global foo    # needed, else "del foo" would refer to a (never-bound) local
    print 'rmf, before'
    del foo
    print 'rmf, after'

the EXISTING call to foo() will NOT be "affected" by the "del foo" that
happens right in the middle of it, since there is no further attempt to
look up the name "foo" in the rest of that call's progress.

But any _further_ lookup is indeed affected, since the name just isn't
bound to the function object any more. Note that other references to
the function object may have been stashed away in many other places (by
other names, in a list, in a dict, ...), so it may still be quite
possible to call that function object -- just not to look up its name
in the scope where it was earlier defined, once it has been undefined.

As for your worries elsewhere expressed that name lookup may impose
excessive overhead, in Python we like to MEASURE performance issues
rather than just reason about them "abstractly"; which is why Python
comes with a handy timeit.py script to time a code snippet accurately.
So, on my 30-months-old creaky main box (I keep mentioning its venerable
age in the hope Santa will notice...:-)...:

[alex@lancelot ext]$ timeit.py -c -s'def foo():pass' 'foo'
10000000 loops, best of 3: 0.143 usec per loop
[alex@lancelot ext]$ timeit.py -c -s'def foo():return' 'foo()'
1000000 loops, best of 3: 0.54 usec per loop

So: a name lookup takes about 140 nanoseconds; a name lookup plus a
call of the simplest possible function -- one that just returns at
once -- about 540 nanoseconds. I.e., the call itself plus the
return take about 400 nanoseconds _in the simplest possible case_;
the lookup adds a further 140 nanoseconds, accounting for about 25%
of the overall lookup-call-return pure overhead.

Yes, managing less than 2 million function calls a second, albeit on
an old machine, is NOT good enough for some applications (although,
for many of practical importance, it already is). But the need for speed
is exactly the reason optimizing compilers exist -- for those times
in which you need MANY more millions of function calls per second.
Currently, the best optimizing compiler for Python is Psyco, the
"specializing compiler" by Armin Rigo. Unfortunately, it currently
only supports Intel-386-and-compatible CPUs -- so I can use it on my
old AMD Athlon, but not, e.g., on my tiny Palmtop, whose little CPU is
an "ARM" (Intel-made these days I believe, but not 386-compatible)
[ for plans by Armin, and many others of us, on how to fix that in the
reasonably near future, see http://codespeak.net/pypy/ ]

Anyway, here's psyco in action on the issue in question:

import time
import psyco

def non_compiled(name):
    def foo(): return
    start = time.clock()
    for x in xrange(10*1000*1000): foo()
    stend = time.clock()
    print '%s %.2f' % (name, stend-start)

compiled = psyco.proxy(non_compiled)

non_compiled('noncomp')
compiled('psycomp')

Running this on the same good old machine produces:

[alex@lancelot ext]$ python2.3 calfoo.py
noncomp 5.93
psycomp 0.13

The NON-compiled 10 million calls took an average of 593 nanoseconds
per call -- roughly the already-measured 540 nanoseconds for the
call itself, plus about 50 nanoseconds for each leg of the loop's
overhead. But, as you can see, Psyco has no trouble optimizing that
by over 45 times -- to about 80 million function calls per second,
which _is_ good enough for many more applications than the original
less-than-2 million function calls per second was.

Psyco entirely respects Python's semantics, but its speed-ups take
particular good advantage of the "specialized" cases in which the
possibilities for extremely dynamic behavior are not, in fact, being
used in a given function that's on the bottleneck of your application
(Psyco can also automatically use a profiler to find out about that
bottleneck, if you want -- here, I used the finer-grained approach
of having it compile ["build a compiled proxy for"] just one function
in order to be able to show the speed-ups it was giving).

Oh, BTW, you'll notice I explicitly ran that little test with
python2.3 -- that was to ensure I was using the OLD release of
psyco, 1.0; as my default Python I use the current CVS snapshot,
and on that one I have installed psyco 1.1, which does more
optimizations and in particular _inlines function calls_ under
propitious conditions -- therefore, the fact that running
just "python calfoo.py" would have shown a speed-up of _120_
(rather than just 45) would have been "cheating", a bit, as it's
not measuring any more anything related to name lookup and function
call overhead. That's a common problem with optimizing compilers:
once they get smart enough they may "optimize away" the very
construct whose optimization you were trying to check with a
sufficiently small benchmark. I remember when the whole "SPEC"
suite of benchmarks was made obsolete at a stroke by one advance
in compiler optimization techniques, for example:-).

Anyway, if your main interest is in having your applications run
fast, rather than in studying optimization yields on specific
constructs in various circumstances, be sure to get the current
Psyco, 1.1.1, to go with the current Python, 2.3.2 (the pre-alpha
Python 2.4a0 is recommended only to those who want to help with
Python's development, including testing -- throughout at least 2004
you can count on 2.3.something, NOT 2.4, being the production,
_stable_ version of Python, recommended to all).
Alex

Jul 18 '05 #14

|1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
| 90% of the code is function applictions. Why not make it convenient?

Haskell is cool. But to do what you want, you need uniform currying of
all function calls (i.e. every call is a call with *exactly one*
argument, often returning a new function). That's not a reasonable
model for Python, for lots of reasons (but you are welcome to use
Haskell, I understand you can download versions of it for free).
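The currying point can be sketched in Python itself (illustrative only):

```python
# Haskell-style currying emulated with closures: every call takes exactly
# one argument and returns either a result or another function.
def curry3(f):
    return lambda x: lambda y: lambda z: f(x, y, z)

def add3(x, y, z):
    return x + y + z

f = curry3(add3)
print(f(1)(2)(3))    # the "f x y z" style, minus the parens Haskell drops; prints 6
```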

|2. Statements vs Expressions business is very dumb. Try writing
| a = if x :
| y
| else: z

Try writing ANYTHING that isn't Python... wow, it doesn't run in the
Python interpreter.

|3. no multimethods (why? Guido did not know Lisp, so he did not know
| about them)

Been there, done that... we got them:

http://gnosis.cx/download/gnosis/magic/multimethods.py

|4. splintering of the language: you have the inefficient main language,
| and you have a different dialect being developed that needs type

I think this might be a reference to Pyrex. It's cool, but it's not a
fork of Python.

|5. Why do you need "def" ? In Haskell, you'd write
| square x = x * x

Again, you are welcome to use Haskell. If you'd like, you can also
write the following in Python:

square = lambda x: x*x

|6. Requiring "return" is also dumb (see #5)

'return' is NOT required in a function. Functions will happily return
None if you don't specify some other value you want returned.
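That behaviour is easy to check:

```python
def no_return():
    x = 1 + 1                # no "return" statement anywhere

assert no_return() is None   # Python supplies the None implicitly
```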

|7. Syntax and semantics of "lambda" should be identical to
| function definitions (for simplicity and uniformity)

Obviously, they can't be *identical* in syntax... the word 'lambda' is
SPELLED differently than the word 'def'. The argument has been made for
code blocks in Python at times, but never (yet) convincingly enough to
persuade the BDFL.
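The equivalence (and the one real limitation) can be shown in a few lines:

```python
# def and lambda both just bind a function object to a name; lambda's body
# is restricted to a single expression.
def square_def(x): return x * x
square_lam = lambda x: x * x

assert square_def(4) == square_lam(4) == 16
assert type(square_def) is type(square_lam)   # same kind of object
```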

|8. Can you undefine a function, value, class or unimport a module?

Yes.

|9. Syntax for arrays is also bad [a (b c d) e f] would be better
| than [a, b(c,d), e, f]

Hmmm... was the OP attacked by a pride of commas as a child?

It's true that the space bar is bigger on my keyboard than is the comma
key... but I don't find it all THAT hard to press ','.

Actually, the OP's example would require some new syntax for tuples as
well, since there's no way of knowing whether '(b c d)' would be a
function invocation or a tuple. Of course other syntaxes are
*possible*. In fact, here's a quick solution to everything s/he wants:

% cp hugs python

Yours, Lulu...

--
mertz@ | The specter of free information is haunting the `Net! All the
gnosis | powers of IP- and crypto-tyranny have entered into an unholy
..cx | alliance...ideas have nothing to lose but their chains. Unite
| against "intellectual property" and anti-privacy regimes!
-------------------------------------------------------------------------
Jul 18 '05 #15

Joachim Durchholz wrote:
Oh, you're trolling for an inter-language flame fest...
well, anyway:
3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.


Multimethods suck.


Multimethods are wonderful, and we're using them as part of the
implementation of pypy, the Python runtime coded in Python. Sure,
we had to implement them, but that was a drop in the ocean in
comparison to the amount of other code in pypy as it stands, much
less the amount of code we want to add to it in the future. See
http://codespeak.net/ for more about pypy (including all of its
code -- subversion makes it available for download as well as for
online browsing).

So, you're both wrong:-).
Alex

Jul 18 '05 #16

Joachim Durchholz wrote:
Oh, you're trolling for an inter-language flame fest...
well, anyway:
3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.

Multimethods suck.


Do they suck more or less than the Visitor pattern?
The longer answer: Multimethods have modularity issues (if whatever
domain they're dispatching on can be extended by independent developers:
different developers may extend the dispatch domain of a function in
different directions, and leave undefined combinations; standard
dispatch strategies as I've seen in some Lisps just cover up the
undefined behaviour, with a slightly less than 50% chance of being
correct).


So how do you implement an equality operator correctly with only single
dynamic dispatch?
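For concreteness, dispatch on the classes of both arguments can be sketched in a few lines of plain Python (an illustrative toy with invented names, not the gnosis library mentioned elsewhere in the thread):

```python
# A registry keyed on (operation, type-of-left, type-of-right).
_impls = {}

def defmethod(name, ta, tb, fn):
    _impls[(name, ta, tb)] = fn

def call(name, a, b):
    fn = _impls.get((name, type(a), type(b)))
    if fn is None:
        raise TypeError("no applicable method for %s" % name)
    return fn(a, b)

class Circle: pass
class Square: pass

defmethod("collide", Circle, Circle, lambda a, b: "circle/circle")
defmethod("collide", Circle, Square, lambda a, b: "circle/square")

print(call("collide", Circle(), Square()))   # prints circle/square
```

A real implementation would also walk the classes' MROs to find the most specific applicable method; Joachim's modularity objection is about what happens when independent libraries register conflicting entries in such a table.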
Pascal

Jul 18 '05 #17


"Frode Vatvedt Fjeld" <fr****@cs.uit.no> wrote in message
news:2h************@vserver.cs.uit.no...
cc'ed in case you are not reading c.l.python, which I am limiting this
to.
So to get from the ID to the bytecode, you go through a dictionary?
And the mapping from name to ID happens perhaps when the caller is
bytecode-compiled?


No. In Python, all names are associated with objects in namespaces.
Lookup is done as needed at the appropriate runtime. Function objects
are 1st class and are no different from any others in this respect.
The same goes for slots in collection objects being associated with
member objects.

The free online tutorial at www.python.org explains Python basics like
this.

Terry J. Reedy


Jul 18 '05 #18



Joachim Durchholz wrote:
Oh, you're trolling for an inter-language flame fest...
well, anyway:
3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.

Multimethods suck.

The longer answer: Multimethods have modularity issues


Lisp consistently errs on the side of more expressive power. The idea of
putting on a strait jacket while coding to protect us from ourselves
just seems batty. Similarly, an ex-C++ journal editor recently
wrote that test-driven development now gives him the code QA peace of
mind he once sought from strong static typing. An admitted former static
typing bigot, he finished by wondering aloud, "Will we all be coding in
Python ten years from now?"

kenny

--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

Jul 18 '05 #19



Kenny Tilton wrote:


Joachim Durchholz wrote:
Oh, you're trolling for an inter-language flame fest...
well, anyway:
3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.


Multimethods suck.

The longer answer: Multimethods have modularity issues

Lisp consistently errs on the side of more expressive power. The idea of
putting on a strait jacket while coding to protect us from ourselves
just seems batty. Similarly, an ex-C++ journal editor recently
wrote that test-driven development now gives him the code QA peace of
mind he once sought from strong static typing. An admitted former static
typing bigot, he finished by wondering aloud, "Will we all be coding in
Python ten years from now?"


http://www.artima.com/weblogs/viewpost.jsp?thread=4639

--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

Jul 18 '05 #20

Kenny Tilton wrote:

Lisp consistently errs on the side of more expressive power. The idea of
putting on a strait jacket while coding to protect us from ourselves
just seems batty. Similarly, an ex-C++ journal editor recently
wrote that test-driven development now gives him the code QA peace of
mind he once sought from strong static typing.
C++ is not the best example of strong static typing. It is a language
full of traps, which can't be detected by its type system.
An admitted former static typing bigot, he finished by wondering
aloud, "Will we all be coding in Python ten years from now?"

kenny


Best regards,
Tom

--
..signature: Too many levels of symbolic links
Jul 18 '05 #21


"Kenny Tilton" <kt*****@nyc.rr.com> wrote in message
news:_8****************@twister.nyc.rr.com...


Joachim Durchholz wrote:
Oh, you're trolling for an inter-language flame fest...
well, anyway:
3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.

Multimethods suck.

The longer answer: Multimethods have modularity issues


Lisp consistently errs on the side of more expressive power. The idea of
putting on a strait jacket while coding to protect us from ourselves
just seems batty. Similarly, an ex-C++ journal editor recently
wrote that test-driven development now gives him the code QA peace of
mind he once sought from strong static typing. An admitted former static
typing bigot, he finished by wondering aloud, "Will we all be coding in
Python ten years from now?"

kenny


There was a nice example from one of the ILC 2003 talks about a European
Space Agency rocket exploding with a valuable payload. My understanding was
that there was testing, but maybe too much emphasis was placed on the static
type checking of the language used to control the rocket. The end result was
a run-time arithmetic overflow which the code interpreted as "rocket off
course". The rocket code instructions in this event were to self-destruct.
It seems to me that the Agency would have fared better if they just used
Lisp - which has bignums - and relied more on regression suites and less on
the belief that static type checking systems would save the day.

I'd be interested in hearing more about this from someone who knows the
details.

-R. Scott McIntire
Jul 18 '05 #22

Frode Vatvedt Fjeld wrote:
John Thingstad <jo************@chello.no> writes:
[..] Functions are internally dealt with using dictionaries. The
Rather, _names_ are dealt that way (for globals; it's faster for
locals -- then, the compiler can turn the name into an index
into the table of locals' values), whether they're names of functions
or names of other values (Python doesn't separate those namespaces).
bytecode compiler gives the name an ID, and the lookup is done using a
dictionary. Removing the function from the dictionary removes the
function. (pythonese for hash-tables)
So to get from the ID to the bytecode, you go through a dictionary?


No; it's up to the implementation, but in CPython the id is the
memory address of the function object, so the bytecode's directly
accessed from there (well, there's a couple of indirectness --
function object to code object to code string -- nothing important).
And the mapping from name to ID happens perhaps when the caller is
bytecode-compiled?


No, it's a lookup. Dict lookup for globals, fast (index in table)
lookup for locals (making locals much faster to access), but a
lookup anyway. I've already posted about how psyco can optimize
this, being a specializing compiler, when it notices the dynamic
possibilities are not being used in a given case.
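The two lookup paths Alex describes are visible in the bytecode (a sketch using the standard dis module; opcode names as in current CPython):

```python
import dis

def uses_global():
    return len([1, 2])        # "len" resolved via the globals/builtins dicts

def uses_local():
    ln = len                  # bound once into a local slot...
    return ln([1, 2])         # ...then fetched by index, not by name lookup

ops_g = [i.opname for i in dis.get_instructions(uses_global)]
ops_l = [i.opname for i in dis.get_instructions(uses_local)]
print("LOAD_GLOBAL" in ops_g, "LOAD_FAST" in ops_l)   # True True
```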
Alex
Jul 18 '05 #23


"Scott McIntire" <mc******************@comcast.net> wrote in message
news:MoEkb.821534$YN5.832338@sccrnsc01...
There was a nice example from one of the ILC 2003 talks about a European
Space Agency rocket exploding with a valuable payload. My understanding was
that there was testing, but maybe too much emphasis was placed on the static
type checking of the language used to control the rocket. The end result was
a run-time arithmetic overflow which the code interpreted as "rocket off
course". The rocket code instructions in this event were to self-destruct.
It seems to me that the Agency would have fared better if they just used
Lisp - which has bignums - and relied more on regression suites and less on
the belief that static type checking systems would save the day.

I'd be interested in hearing more about this from someone who knows the details.


I believe you are referring to the first flight of the Ariane 5
(sp?). The report of the investigating commission is on the web
somewhere and is an interesting read. They identified about five
distinct errors. Try google.

Terry
Jul 18 '05 #24

P: n/a
Scott McIntire fed this fish to the penguins on Sunday 19 October 2003
15:39 pm:

There was a nice example from one of the ILC 2003 talks about a
European Space Agency rocket exploding with a valuable payload. My
understanding was that there was testing, but maybe too much emphasis
was placed on the static type checking of the language used to control
the rocket. The end result was a run-time arithmetic overflow which
the code interpreted as "rocket off course". The rocket code
instructions in this event were to self-destruct. It seems to me that
the Agency would have fared better if they just used Lisp - which has
bignums - and relied more on regression suites and less on the belief
that static type checking systems would save the day.

I'd be interested in hearing more about this from someone who knows
the details.
Just check the archives for comp.lang.ada and Ariane-5.

Short version: The software performed correctly, to specification
(including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
DESIGNED.

The software was then dropped into the ARIANE 5 with NO REVIEW of
requirements. Two things were different -- the A-5 was capable of more
severe maneuvering, AND apparently the A-5 launch sequence did not need
this code to run for some 40 seconds after ignition (something about
the A-4 launch sequence allowed it to be aborted and restarted in the
40 second span, so the code had to keep up-to-date navigational fixes;
the A-5 OTOH is in space by that point, no post ignition holds).

On the A-4, any values that were that extreme were a sign of critical
malfunction and the software was to shutdown. Which is what it did on
the A-5. Of course, the backup computer then saw the same "malfunction"
and shut down too... For the A-4, you wouldn't WANT the computer to try
processing with those values that were so far out of performance specs
that the rocket had to be tumbling out of control anyways.

The bean-counters apparently did not allow the folks with the A-5
requirements to examine the A-4 code for compliance, and the A-4 Coders
obviously never knew about the A-5 performance specs.

LISP wouldn't have helped -- since the A-4 code was supposed to
fail with values that large... And would have done the same thing if
plugged into the A-5. (Or are you proposing that the A-4 code is supposed
to ignore a performance requirement?)

-- ============================================================== <
wl*****@ix.netcom.com | Wulfraed Dennis Lee Bieber KD6MOG <
wu******@dm.net | Bestiaria Support Staff <
============================================================== <
Bestiaria Home Page: http://www.beastie.dm.net/ <
Home Page: http://www.dm.net/~wulfraed/ <


Jul 18 '05 #25

P: n/a


Dennis Lee Bieber wrote:
Scott McIntire fed this fish to the penguins on Sunday 19 October 2003
15:39 pm:

There was a nice example from one of the ILC 2003 talks about a
European Space Agency rocket exploding with a valuable payload. My
understanding was that there was testing, but maybe too much emphasis
was placed on the static type checking of the language used to control
the rocket. The end result was a run-time arithmetic overflow which
the code interpreted as "rocket off course". The rocket code
instructions in this event were to self destruct. It seems to me that
the Agency would have fared better if they just used Lisp - which has
bignums - and relied more on regression suites and less on the belief
that static type checking systems would save the day.

I'd be interested in hearing more about this from someone who knows
the details.

Just check the archives for comp.lang.ada and Ariane-5.

Short version: The software performed correctly, to specification
(including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
DESIGNED.


Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html

"The internal SRI software exception was caused during execution of a
data conversion from 64-bit floating point to 16-bit signed integer
value. The floating point number which was converted had a value greater
than what could be represented by a 16-bit signed integer. This resulted
in an Operand Error. The data conversion instructions (in Ada code) were
not protected from causing an Operand Error, although other conversions
of comparable variables in the same place in the code were protected.
The error occurred in a part of the software that only performs
alignment of the strap-down inertial platform. This software module
computes meaningful results only before lift-off. As soon as the
launcher lifts off, this function serves no purpose."

LISP wouldn't have helped -- since the A-4 code was supposed to
fail with values that large... And would have done the same thing if
plugged into the A-5. (Or are you proposing that the A-4 code is supposed
to ignore a performance requirement?)
"supposed to" fail? chya. This was nothing more than an unhandled
exception crashing the system and its identical backup. Other conversions
were protected so they could handle things intelligently, this bad boy
went unguarded. Note also that the code functionality was pre-ignition
only, so there is no way they were thinking that a cool way to abort the
flight would be to leave a program exception unhandled.

What happened (aside from an unnecessary chunk of code running
increasing risk to no good end) is that the extra power of the A5 caused
oscillations greater than those seen in the A4. Those greater
oscillations took the 64-bit float beyond what would fit in the 16-bit
int. kablam. Operand Error. This is not a system saying "whoa, out of
range, abort".
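The failing conversion is easy to reproduce in Python. Here is a minimal sketch; the `to_int16` helper is hypothetical and merely stands in for the unguarded Ada float-to-int16 conversion described in the report:

```python
import struct

def to_int16(x):
    # Hypothetical stand-in for the unguarded Ada conversion:
    # convert a float to a 16-bit signed integer, raising on overflow.
    n = int(x)
    struct.pack('<h', n)   # struct.error if n is outside -32768..32767
    return n

print(to_int16(1000.0))    # within int16 range: fine, as on the Ariane 4
try:
    to_int16(40000.0)      # the Ariane 5 case: larger oscillations overflow
except struct.error as e:
    print("Operand Error:", e)  # this exception went unhandled on the A-5
```

The point of the sketch is only that the overflow surfaces as an exception, not as a graceful out-of-range result.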

As for Lisp not helping:
> most-positive-fixnum ;; constant provided by implementation
536870911
> (1+ most-positive-fixnum) ;; overflow fixnum type and...
536870912
> (type-of (1+ most-positive-fixnum)) ;; ...auto bignum type
BIGNUM
> (round most-positive-single-float) ;; or floor or ceiling
340282346638528859811704183484516925440
0.0
> (type-of *)
BIGNUM

kenny

--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

Jul 18 '05 #26

P: n/a
mi*****@ziplip.com wrote in message news:<LV**************************************@ziplip.com>...
THE BAD:

1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
90% of the code is function applications. Why not make it convenient?
Python has been designed to attract non-programmers as well. Don't
you think f(x,y,z) resembles the mathematical notation of passing
a function some parameters, instead of "f x y z"?
5. Why do you need "def" ? In Haskell, you'd write
square x = x * x
The reason is just to make it clearer that we're defining
a function. I wonder why you didn't complain about
the colons at the beginning of each block.. Some syntax is
there just to add readability. I suppose it means nothing
to you that Python is compared to executable pseudocode.
It means to Pythonistas.
6. Requiring "return" is also dumb (see #5)


You really don't get any of this "explicit is better than implicit"
thing, do you? Requiring people to write "return" instead of
leaving it as optional like in Ruby, is again one reason why
Pythonistas *like* Python instead of Ruby. You come to
a Python group (and cross-post this meaninglessly everywhere
even though it only concerns Pythonistas) claiming that the
features we like are dumb, and you wonder why people think
of you as a troll..

Anyway, as a conclusion, I believe you'd be much happier with
Ruby than with Python. It doesn't do this weird "statement vs
expression" business, it has optional return, it has optional
parens with function calls, and probably more of these things
"fixed" that you consider Python's downsides. You're trying to
make Python into a language that already exists, it seems, but
for some reason Pythonistas are happy with Python and not rapidly
converting to Ruby or Haskell. Instead of trying to tell us
what we like (and failing at that, as you can see), maybe you
should try to think for a while of why we like Python.

By the way, have you already posted a similar message
to comp.std.c++, saying what they should change about C++
to make it more like Haskell or Ruby? I'd love to read it
(it could be hilarious) ;)
Jul 18 '05 #27

P: n/a
Kenny Tilton <kt*****@nyc.rr.com> writes:
Dennis Lee Bieber wrote:
Just check the archives for comp.lang.ada and Ariane-5.

Short version: The software performed correctly, to specification
(including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
DESIGNED.
Nonsense.


No, that is exactly right. Like the man said, read the archives for
comp.lang.ada.
From: http://www.sp.ph.ic.ac.uk/Cluster/report.html

"The internal SRI software exception was caused during execution of a
data conversion from 64-bit floating point to 16-bit signed integer
value. The floating point number which was converted had a value greater
than what could be represented by a 16-bit signed integer. This resulted
in an Operand Error. The data conversion instructions (in Ada code) were
not protected from causing an Operand Error, although other conversions
of comparable variables in the same place in the code were protected.
The error occurred in a part of the software that only performs
alignment of the strap-down inertial platform. This software module
computes meaningful results only before lift-off. As soon as the
launcher lifts off, this function serves no purpose."


That's all true, but it is only part of the story, and selectively quoting
just that part is misleading in this context.

For a more detailed answer, see
<http://www.google.com.au/groups?as_umsgid=359BFC60.446B%40lanl.gov>.
LISP wouldn't have helped -- since the A-4 code was supposed to
fail with values that large... And would have done the same thing if
plugged into the A-5. (Or are you proposing that the A-4 code is supposed
to ignore a performance requirement?)


"supposed to" fail? chya. This was nothing more than an unhandled
exception crashing the system and its identical backup. Other conversions
were protected so they could handle things intelligently, this bad boy
went unguarded.


The reason that it went unguarded is that the programmers DELIBERATELY
omitted an exception handler for it. The post at the URL quoted above
explains why.

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Jul 18 '05 #28

P: n/a
mi*****@ziplip.com wrote in
news:LV**************************************@ziplip.com:
1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
90% of the code is function applications. Why not make it convenient?


What syntax do you propose to use for f(x(y,z)), or f(x(y(z))), or
f(x,y(z)) or f(x(y),z) or f(x)(y)(z) or numerous other variants which are
not currently ambiguous?

--
Duncan Booth du****@rcp.co.uk
int month(char *p){return(124864/((p[0]+p[1]-p[2]&0x1f)+1)%12)["\5\x8\3"
"\6\7\xb\1\x9\xa\2\0\4"];} // Who said my code was obscure?
Jul 18 '05 #29

P: n/a
Duncan Booth wrote:
mi*****@ziplip.com wrote in
news:LV**************************************@ziplip.com:
1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
90% of the code is function applications. Why not make it convenient?


What syntax do you propose to use for f(x(y,z)), or f(x(y(z))), or
f(x,y(z)) or f(x(y),z) or f(x)(y)(z) or numerous other variants which are
not currently ambiguous?


Haskell has it easy -- f x y z is the same as ((f x) y) z -- as an
N-ary function is "conceptualized" as a unary function that returns
an (N-1)-ary function [as Haskell Curry conceptualized it -- which
is why the language is named Haskell, and the concept currying:-)].
So, your 5th case, f(x)(y)(z), would be exactly the same thing.

When you want to apply operators in other than their normal order
of priority, then and only then you must use parentheses, e.g. for
your various cases they would be f (x y z) [1st case], f (x (y z))
[2nd case], f x (y z) [3rd case], f (x y) z [4th case]. You CAN,
if you wish, add redundant parentheses, of course, just like in
Python [where parentheses are overloaded to mean: function call,
class inheritance, function definition, empty tuples, tuples in
list comprehensions, apply operators with specified priority --
I hope I recalled them all;-)].
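The curried reading of `f x y z` as `((f x) y) z` can be mimicked in Python with nested closures. A sketch of the idea only (not of how Python parses calls); `curry3` is a made-up helper:

```python
# Currying by hand: in Haskell, f x y z means ((f x) y) z.
def f(x):
    return lambda y: lambda z: x + y + z

print(f(1)(2)(3))   # 6: each application consumes one argument

# A generic 3-argument currier (hypothetical helper, not stdlib):
def curry3(g):
    return lambda x: lambda y: lambda z: g(x, y, z)

add3 = curry3(lambda x, y, z: x + y + z)
print(add3(1)(2)(3))   # 6: same result through the curried form
```

Partial application falls out for free: `add3(1)(2)` is itself a function awaiting the last argument.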

Of course this will never happen in Python, as it would break all
backwards compatibility. And I doubt it could sensibly happen in
any "simil-Python" without adopting many other Haskell ideas, such
as implicit currying and nonstrictness. What "x = f" should mean
in a language with assignment, everything first-class, and implicit
rather than explicit calling, is quite troublesome too.

Ruby allows some calls without parentheses, but the way it disambiguates
"f x y" between f(x(y)) and f(x, y) is, IMHO, pricey -- it has to KNOW
whether x is a method, and if it is it won't just let you pass it as such
as an argument to f; that's the slippery slope whereby you end up having to
write x.call(y) because not just any object is callable.
"x = f" CALLS f if f is a method, so you can't just treat methods
as first-class citizens like any other... etc, etc...
AND good Ruby texts recommend AVOIDING "f x y" without parentheses,
anyway, because it's ambiguous to a human reader, even when it's
clear to the compiler -- so the benefit you get for that price is
dubious indeed.
Alex

Jul 18 '05 #30

P: n/a
Hannu Kankaanpää wrote:
mi*****@ziplip.com wrote in message
news:<LV**************************************@ziplip.com>...
THE BAD:

1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
90% of the code is function applications. Why not make it convenient?
Python has been designed to attract non-programmers as well. Don't
you think f(x,y,z) resembles the mathematical notation of passing
a function some parameters, instead of "f x y z"?


Yes -- which is exactly why many non-programmers would prefer the
parentheses-less notation -- with more obvious names of course;-).
E.g.:
emitwarning URGENT "meltdown imminent!!!"
DOES look nicer to non-programmers than
emitwarning(URGENT, "meltdown imminent!!!")

Indeed, such languages as Visual Basic and Ruby do allow calling
without parentheses, no doubt because of this "nice look" thing.

However, as I explained elsewhere, there are probably-insuperable
language-design problems in merging "implicit call" and first-classness
of all names unless you basically go all the way following Haskell
with implicit currying and non-strictness (and assignments should
probably go away too, else how to distinguish between assigning to
x a nullary function f itself, and assigning to x the result of
_calling_ f without arguments...?). Not to mention:

emitwarning URGENT highlight "meltdown imminent!!!"

where the need to disambiguate between highlight being the second
of three parameters to emitwarning, or a function called with
the string as its sole parameter and its RESULT being the second
of two parameters to emitwarning, is important for human readers
(indeed some languages that DO allow parentheses-less calls, such
as Ruby, warn against actually USING this possibility in all cases
where ambiguity-to-human-readers may result, such as the above --
the need to be very careful and selective in actually using the
capability makes me less and less willing to pay any large price
for it).

In other words, it's a language design tradeoff, like so many
others -- one which I believe both Python and Haskell got just
right for their different audiences and semantics (I know VB
_didn't_, and I suspend judgment on Ruby -- maybe firstclassness
of all names isn't as important as it FEELS to me, but...).

6. Requiring "return" is also dumb (see #5)


You really don't get any of this "explicit is better than implicit"
thing, do you? Requiring people to write "return" instead of
leaving it as optional like in Ruby, is again one reason why
Pythonistas *like* Python instead of Ruby. You come to


I think that making return optional is slightly error-prone,
but it DOES make the language easier to learn for newbies --
newbies often err, in Python, by writing such code as
def double(x): x+x
which indicates the lack of 'return' IS more natural than its
mandatory presence. So, it's a tradeoff one could sensibly
chose either way. Of course, such cases as:
def buhandclose(boh):
    try: boh.buh()
    finally: boh.close()
would give you a bad headache in trying to explain them to
newbies ("hmyes the result of buhandclose IS that of the
last expression it evaluates, BUT the one in the finally
clause, although evaluated AFTER boh.buh(), doesn't really
count because..." [keeps handwaving copiously & strenuously]).
So, mandatory 'return' does make the language more uniform,
consistent, and easy to master, though not quite as easy to
"pick up casually in a semi-cooked manner". Still, I for
one don't condemn Ruby for making the opposite choice -- it
IS a nicely balanced issue, IMHO.
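The newbie slip mentioned above is easy to demonstrate; a minimal sketch:

```python
def double(x): x + x            # common newbie version: no 'return'
def double_ok(x): return x + x  # what was actually meant

print(double(3))      # None: the expression's value is silently discarded
print(double_ok(3))   # 6
```

With mandatory `return`, the first definition quietly yields `None` for every call; a Ruby-style implicit return would make both spellings equivalent.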

Anyway, as a conclusion, I believe you'd be much happier with
Ruby than with Python. It doesn't do this weird "statement vs
expression" business, it has optional return, it has optional
parens with function calls, and probably more of these things
"fixed" that you consider Python's downsides. You're trying to
But doesn't make higher-order-functions as much of a no-brainer
as they're in Python, sigh.
make Python into a language that already exists, it seems, but
for some reason Pythonistas are happy with Python and not rapidly
converting to Ruby or Haskell. Instead of trying to tell us


My own reasons for the choice of Python over Ruby are quite
nuanced and complicated, actually (those for either of them
over Haskell have much to do with pragmatism over purity:-).

It boils down to my desire to write application programs,
often requiring cooperation of middling-sized groups of people,
rather than experimental frameworks, or programs written by a
lone coder or a small group of highly-attuned experts. I have
the highest respect for Ruby -- it just doesn't match my needs
QUITE as well as Python does. But, yes, if somebody doesn't
really think about what kind of programs they want to write,
but rather focuses on syntax sugar issues such as return being
optional or mandatory "per se", then it's definitely worthwhile
for that somebody to try Ruby and leave c.l.py in peace:-).
Alex

Jul 18 '05 #31

P: n/a
Hannu Kankaanpää wrote:
Anyway, as a conclusion, I believe you'd be much happier with
Ruby than with Python. It doesn't do this weird "statement vs
expression" business, it has optional return, it has optional
parens with function calls, and probably more of these things
"fixed" that you consider Python's downsides. You're trying to
make Python into a language that already exists, it seems, but
for some reason Pythonistas are happy with Python and not rapidly
converting to Ruby or Haskell.


I wonder to what extent this statement is true. I know at least
1 Ruby programmer who came from Python, but this spot check should
not be trusted, since I know only 1 Ruby programmer and only 1
former Python programmer <g>. But I have heard that there are a
lot of former Python programmers in the Ruby community. I think
it is safe to say that of all languages Python programmers migrate
to, Ruby is the strongest magnet. OTOH, the migration of this part
of the Python community to Ruby may have been completed already,
of course.

Gerrit.

--
53. If any one be too lazy to keep his dam in proper condition, and
does not so keep it; if then the dam break and all the fields be flooded,
then shall he in whose dam the break occurred be sold for money, and the
money shall replace the corn which he has caused to be ruined.
-- 1780 BC, Hammurabi, Code of Law
--
Asperger Syndrome - a personal approach:
http://people.nl.linux.org/~gerrit/
Rise up against this cabinet:
http://www.sp.nl/

Jul 18 '05 #32

P: n/a
Marcin 'Qrczak' Kowalczyk wrote:
On Sun, 19 Oct 2003 20:01:03 +0200, Joachim Durchholz wrote:
The longer answer: Multimethods have modularity issues (if whatever domain
they're dispatching on can be extended by independent developers:
different developers may extend the dispatch domain of a function in
different directions, and leave undefined combinations;


This doesn't matter until you provide an equally powerful mechanism which
fixes that. Which is it?


I don't think there is a satisfactory one. It's a fundamental problem:
if two people who don't know of each other can extend the same thing
(framework, base class, whatever) in different directions, who's
responsible for writing the code needed to combine these extensions?

Solutions that I have seen or thought about are:

1. Let the system decide. Technically feasible for base classes (in the
form of prioritisation rules for multimethods), technically infeasible for
frameworks. The problem here is that the system doesn't (usually) have
enough information to reliably make the correct decision.

2. Let the system declare an error if the glue code isn't there.
Effectively prohibits all forms of dynamic code loading. Can create
risks in project management (unexpected error messages during code
integration near a project deadline - yuck). Creates a temptation to
hack the glue code up, by people who don't know the details of the two
modules involved.

3. Disallow extending in multiple directions. In other words, no
multimethods, and live with the asymmetry.
Too restricted to be comfortable with.

4. As (3), but allow multiple extensions if they are contained within
the same module. I.e. allow multiple dispatch within an "arithmetics"
module that defines the classes Integer, Real, Complex, etc. etc., but
don't allow additional multiple dispatch outside the module. (Single
dispatch would, of course, be OK.)

5. As (3), but require manual intervention. IOW let the two authors who
did the orthogonal extensions know about each other, and have each
module refer to the other, and each module carry the glue code required
to combine with the other.
Actually, this is the practice for various open source projects. For
example, authors of MTAs, mail servers etc. cooperate to set standards.
Of course, if the authors aren't interested in cooperating, this doesn't
work well either.

6. Don't use dynamic dispatch, use parametric polymorphism (or whatever
your language offers for that purpose, be it "generics" or "templates").

Regards,
Jo

Jul 18 '05 #33

P: n/a
Pascal Costanza wrote:
Joachim Durchholz wrote:
Oh, you're trolling for an inter-language flame fest...
well, anyway:
3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.
Multimethods suck.


Do they suck more or less than the Visitor pattern?


Well, the visitor pattern is worse.
Generics would be better though.
So how do you implement an equality operator correctly with only single
dynamic dispatch?


Good question.

In practice, you don't use dispatch, you use some built-in mechanism.

Even more in practice, all equality operators that I have seen tended to
compare more or less than one wanted to have compared, at least for
complicated types with large hidden internal structures, or different
equivalent internal structures. I have seen many cases where people
implemented several equality operators - of course, with different
names, and for most cases, I'm under the impression they weren't even
aware that it was equality that they were implementing :-)

Examples are:
Lisp with its multitude of equality predicates nicely exposes the
problems, and provides a solution.
Various string representations (7-bit Ascii, 8-bit Ascii, various
Unicode flavors). Do you want to compare representations or contents? Do
you need a code table to compare?
Various number representation: do you want to make 1 different from 1.0,
or do you want to have them equal?

I think that dynamic dispatch is an interesting answer, but not to
equality :-)
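A couple of the equality flavors above can be shown in Python itself (an illustration, not an exhaustive taxonomy):

```python
x, y = 1, 1.0
print(x == y)              # True:  numeric equality crosses representations
print(x is y)              # False: distinct objects of distinct types

print("A" == "a")          # False: plain codepoint comparison
print("A".lower() == "a")  # True:  a different, case-folding "equality"
```

So even within one language, "are these equal?" already has several defensible answers, which is Lisp's point in exposing multiple equality predicates.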

Regards,
Jo

Jul 18 '05 #34

P: n/a
Kenny Tilton wrote:

Dennis Lee Bieber wrote:
Short version: The software performed correctly, to
specification (including the failure mode) -- ON THE ARIANE 4 FOR
WHICH IT WAS DESIGNED.
Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html

"The internal SRI software exception was caused during execution of a
data conversion from 64-bit floating point to 16-bit signed integer
value. The floating point number which was converted had a value greater
than what could be represented by a 16-bit signed integer. This resulted
in an Operand Error. The data conversion instructions (in Ada code) were
not protected from causing an Operand Error, although other conversions
of comparable variables in the same place in the code were protected.
The error occurred in a part of the software that only performs
alignment of the strap-down inertial platform. This software module
computes meaningful results only before lift-off. As soon as the
launcher lifts off, this function serves no purpose."


That's the sequence of events that led to the crash.
Why this sequence could happen though it shouldn't have happened is
exactly how Dennis wrote it: the conversion caused an exception because
the Ariane-5 had a tilt angle beyond what the SRI was designed for.
What happened (aside from an unnecessary chunk of code running
increasing risk to no good end) is that the extra power of the A5 caused
oscillations greater than those seen in the A4. Those greater
oscillations took the 64-bit float beyond what would fit in the 16-bit
int. kablam. Operand Error. This is not a system saying "whoa, out of
range, abort".

As for Lisp not helping:
> most-positive-fixnum ;; constant provided by implementation

536870911
> (1+ most-positive-fixnum) ;; overflow fixnum type and...

536870912
> (type-of (1+ most-positive-fixnum)) ;; ...auto bignum type

BIGNUM
> (round most-positive-single-float) ;; or floor or ceiling

340282346638528859811704183484516925440
0.0
> (type-of *)

BIGNUM


Lisp might not have helped even in that case.
1. The SRI was designed for an angle that would have fit into a 16-bit
operand. If the exception hadn't been thrown, some hardware might still
have malfunctioned.
2. I'm pretty sure there's a reason (other than saving space) for that
conversion to 16 bits. I suspect it was to be fed into some hardware
register... in which case all bignums of the world aren't going to help.

Ariane 5 is mostly a lesson in management errors. Software methodology
might have helped, but just replacing the programming language would
have been insufficient (as usual - languages can make proper testing
easier or harder, but the trade-off will always be present).

Regards,
Jo

Jul 18 '05 #35

P: n/a
Followup-To: comp.lang.misc

On Mon, 20 Oct 2003 13:06:08 +0200, Joachim Durchholz wrote:
The longer answer: Multimethods have modularity issues (if whatever
domain they're dispatching on can be extended by independent developers:
different developers may extend the dispatch domain of a function in
different directions, and leave undefined combinations;
This doesn't matter until you provide an equally powerful mechanism
which fixes that. Which is it?


I don't think there is a satisfactory one. It's a fundamental problem:
if two people who don't know of each other can extend the same thing
(framework, base class, whatever) in different directions, who's
responsible for writing the code needed to combine these extensions?


Indeed. I wouldn't thus blame the language mechanism.
1. Let the system decide. Technically feasible for base classes (in the
form of prioritisation rules for multimethods), technically infeasible for
frameworks. The problem here is that the system doesn't (usually) have
enough information to reliably make the correct decision.
Sometimes the programmer can write enough default specializations that it
can be freely extended. Example: drawing shapes on devices. If every shape
is convertible to Bezier curves, and every device is capable of drawing
Bezier curves, then the most generic specialization, for arbitrary shape
and arbitrary device, will call 'draw' again with the shape converted to
Bezier curves.

The potential of multimethods is used: particular shapes have specialized
implementations for particular devices (drawing text is usually better
done more directly than through curves), separate modules can provide
additional shapes and additional devices. Yet it is safe and modular, as
long as people agree who provides a particular specialization.

It's easy to agree with a certain restriction: the specialization is
provided either by the module providing the shape or by module providing
the device. In practice the restriction doesn't have to be always followed
- it's enough that the module providing the specialization is known to all
people who might want to write their own, so I wouldn't advocate enforcing
the restriction on the language level.

I would favor multimethods even if they provided only solutions extensible
in one dimension, since they are nicer than having to enumerate all cases
in one place. Better to have a partially extensible mechanism than nothing.
Here it is extensible.
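The shape/device scheme can be sketched in Python with a dispatch table keyed on a pair of types and a generic curve-converting fallback. All names below (Shape, Screen, draw, ...) are hypothetical:

```python
# A minimal double-dispatch table: (shape type, device type) -> implementation.
_impls = {}

def specialize(shape_t, device_t):
    def deco(fn):
        _impls[(shape_t, device_t)] = fn
        return fn
    return deco

class Shape: pass
class Text(Shape): pass
class Screen: pass
class Plotter: pass

def draw(shape, device):
    fn = _impls.get((type(shape), type(device)))
    if fn is not None:
        return fn(shape, device)
    # Generic fallback: pretend to convert the shape to curves and draw those.
    return "curves of %s on %s" % (type(shape).__name__, type(device).__name__)

@specialize(Text, Screen)
def _draw_text_screen(shape, device):
    return "text drawn directly on Screen"

print(draw(Text(), Screen()))    # specialized pair
print(draw(Text(), Plotter()))   # uncovered pair uses the generic curve path
```

Separate modules can register new (shape, device) pairs without touching this one, while uncovered combinations still work through the fallback.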
2. Let the system declare an error if the glue code isn't there.
Effectively prohibits all forms of dynamic code loading. Can create risks
in project management (unexpected error messages during code integration
near a project deadline - yuck). Creates a temptation to hack the glue
code up, by people who don't know the details of the two modules involved.
It would be interesting to let the system find the coverage of multimethods,
but without making it an error if not all combinations are covered. It's
useful to be able to test an incomplete program.

There is no definite answer for what kind of errors should prevent running
the program. It's similar to static/dynamic typing, or being able to
compile calls to unimplemented functions or not.

Even if the system shows that all combinations are covered, it doesn't
imply that they do the right thing. It's analogous to failing to override
a method in class-based OOP - the system doesn't know if the superclass
implementation is appropriate for the subclass. So you can't completely
rely on detection of such errors anyway.
3. Disallow extending in multiple directions. In other words, no
multimethods, and live with the asymmetry. Too restricted to be
comfortable with.
I agree.
4. As (3), but allow multiple extensions if they are contained within the
same module. I.e. allow multiple dispatch within an "arithmetics" module
that defines the classes Integer, Real, Complex, etc. etc., but don't
allow additional multiple dispatch outside the module. (Single dispatch
would, of course, be OK.)
For me it's still too restricted. It's a useful guideline to follow but
it should not be a hard requirement.
5. As (3), but require manual intervention. IOW let the two authors who
did the orthogonal extensions know about each other, and have each module
refer to the other, and each module carry the glue code required to
combine with the other.
The glue code might reside in yet another module, especially if each of
the modules makes sense without the other (so it might better not depend
on it). Again, for me it's just a guideline - if one of the modules can
ensure that it's composable with the other, it's a good idea to change it -
but I would like to be able to provide the glue code elsewhere to make
them working in my program which uses both, and remove it once the modules
include the glue code themselves.
Actually, this is the practice for various open source projects. For
example, authors of MTAs, mail servers etc. cooperate to set standards. Of
course, if the authors aren't interested in cooperating, this doesn't work
well either.
The modules might also be a part of one program, where it's relatively
easy to make them cooperate. Inability to cope with some uses is generally
not a sufficient reason to reject a language mechanism which also has well
working uses.
6. Don't use dynamic dispatch, use parametric polymorphism (or whatever
your language offers for that purpose, be it "generics" or "templates").


I think it can rarely solve the same problem. C++ templates (which can
use overloaded operations, i.e. with implementation dependent on type
parameters) help only in statically resolvable cases. Fully parametric
polymorphism doesn't seem to help at all even in these cases (equality,
arithmetic).

--
__("< Marcin Kowalczyk
\__/ qr****@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Jul 18 '05 #36

P: n/a
Gerrit Holl wrote:

But I have heard that there are a
lot of former Python programmers in the Ruby community. I think
it is safe to say that of all languages Python programmers migrate
to, Ruby is the strongest magnet. OTOH, the migration of this part
of the Python community to Ruby may have been completed already,
of course.


And also on the other hand, perhaps not enough time has yet passed for
us to see the migration of these fickle people *back* to Python. :-)

-Peter
Jul 18 '05 #37

On Sun, 19 Oct 2003 04:18:31 -0700 (PDT), mi*****@ziplip.com wrote:
THE GOOD: [...] THE BAD:

[...]

Well, in the variety of languages and plenty of concepts you can search
for your language of choice. Just because all the things you mentioned in
"THE BAD" are available in other languages doesn't mean they should also
exist in Python. Languages are different, just as people are. If you find
Python has more cons than pros, it means that this is not a language from
which you can get 100% of the fun. Anyway, changing it into the next Haskell,
Smalltalk or Ruby makes no sense. Python fills a certain niche and it does
its job as it should. Differences are a necessity, so don't waste your time
talking about making Python similar to something else.

--
[ Wojtek Walczak - gminick (at) underground.org.pl ]
[ <http://gminick.linuxsecurity.pl/> ]
[ "...various turns of phrase, dulled by the patina of age." ]

Jul 18 '05 #38

Pascal Costanza wrote:
...
So how do you implement an equality operator correctly with only single
dynamic dispatch?


Equality is easy, as it's commutative -- pseudocode for it might be:

def operator==(a, b):
    try: return a.__eq__(b)
    except I_Have_No_Idea:
        try: return b.__eq__(a)
        except I_Have_No_Idea:
            return False

Non-commutative operators require a tad more, e.g. Python lets each
type define both an __add__ and a __radd__ (rightwise-add):

def operator+(a, b):
    try: return a.__add__(b)
    except (I_Have_No_Idea, AttributeError):
        try: return b.__radd__(a)
        except (I_Have_No_Idea, AttributeError):
            raise TypeError, "can't add %r and %r" % (type(a), type(b))

Multimethods really shine in HARDER problems, e.g., when you have
MORE than just two operands (or, perhaps, some _very_ complicated
inheritance structure -- but in such cases, even multimethods are
admittedly no panacea). Python's pow(a, b, c) is an example --
and, indeed, Python does NOT let you overload THAT (3-operand)
version, only the two-operand one that you can spell pow(a, b)
or a**b.
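Python's real two-operand protocol works much like the pseudocode above, with the built-in NotImplemented sentinel playing the role of I_Have_No_Idea. A small self-contained illustration (the Meters and Feet classes are invented for the example):

```python
# Meters and Feet are invented classes illustrating __add__/__radd__.
class Meters:
    def __init__(self, n): self.n = n
    def __add__(self, other):
        if isinstance(other, Meters):
            return Meters(self.n + other.n)
        return NotImplemented   # "I have no idea" -- let the other side try

class Feet:
    def __init__(self, n): self.n = n
    def __radd__(self, other):
        # Called only after the left operand's __add__ gave up.
        if isinstance(other, Meters):
            return Meters(other.n + self.n * 0.3048)
        return NotImplemented

total = Meters(10) + Feet(10)  # Meters.__add__ fails, Feet.__radd__ handles it
print(round(total.n, 4))       # 13.048
```

If both sides return NotImplemented, Python raises the TypeError itself, exactly as in the pseudocode's final branch.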
Alex

Jul 18 '05 #39

In comp.lang.functional Kenny Tilton <kt*****@nyc.rr.com> wrote:
Dennis Lee Bieber wrote:
Short version: The software performed correctly, to specification
(including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
DESIGNED.
Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html
Dennis is right: it was indeed a specification problem. AFAIK, the coder
had actually even proved formally that the exception could not arise
with the spec of Ariane 4. Lisp code, too, can suddenly raise unexpected
exceptions. The default behaviour of the system was to abort the mission
for safety reasons by blasting the rocket. This wasn't justified in this
case, but one is always more clever after the event...
"supposed to" fail? chya.
Indeed. Values this extreme were considered impossible on Ariane 4 and
taken as indication of such a serious failure that it would justify
aborting the mission.
This was nothing more than an unhandled exception crashing the system
and its identical backup.
Depends on what you mean by "crash": it certainly didn't segfault. It
just realized that something happened that wasn't supposed to happen
and reacted AS REQUIRED.
Other conversions were protected so they could handle things
intelligently, this bad boy went unguarded.
Bad, indeed, but absolutely safe with regard to the spec of Ariane 4.
Note also that the code functionality was pre-ignition
only, so there is no way they were thinking that a cool way to abort the
flight would be to leave a program exception unhandled.
This is a serious design error, not a problem of the programming language.
What happened (aside from an unnecessary chunk of code running
increasing risk to no good end)
Again, it's a design error.
is that the extra power of the A5 caused
oscillations greater than those seen in the A4. Those greater
oscillations took the 64-bit float beyond what would fit in the 16-bit
int. kablam. Operand Error. This is not a system saying "whoa, out of
range, abort".
Well, the system was indeed programmed to say "whoa, out of range, abort".
A design error.
As for Lisp not helping:


There is basically no difference between checking the type of a value
dynamically for validity and catching exceptions that get raised on
violations of certain constraints. One can forget to do both or react
to those events in a stupid way (or prove in both cases that the check /
exception handling is unnecessary given the spec).
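The point that a dynamic validity check and a caught exception are interchangeable can be sketched in Python (this is only an illustration of the two styles, not the actual Ariane code, which was Ada):

```python
# Illustration only -- two equivalent ways to guard a narrowing
# conversion like Ariane's 64-bit float to 16-bit signed integer.
INT16_MIN, INT16_MAX = -2**15, 2**15 - 1

def to_int16_checked(x):
    """Look before you leap: validate the range explicitly."""
    if not INT16_MIN <= x <= INT16_MAX:
        raise OverflowError("value %r does not fit in 16 bits" % x)
    return int(x)

def to_int16_saturating(x):
    """Same constraint, but the caller reacts to the violation instead
    of letting it propagate and bring the system down."""
    try:
        return to_int16_checked(x)
    except OverflowError:
        return INT16_MAX if x > 0 else INT16_MIN   # saturate, don't abort

print(to_int16_saturating(1e9))  # 32767
```

Either way, the programmer can forget the guard, or handle the event badly, or prove it unnecessary for a spec that later changes; the choice of mechanism doesn't decide that.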

Note that I am not defending Ada in any way or arguing against FPLs: in
fact, being an FPL advocate myself, I do think that FPLs (including Lisp)
have an edge when it comes to writing safe code. But the Ariane example just
doesn't support this claim. It was an absolutely horrible management
mistake to not check old code for compliance with the new spec. End
of story...

Regards,
Markus Mottl

--
Markus Mottl http://www.oefai.at/~markus ma****@oefai.at
Jul 18 '05 #40



Fergus Henderson wrote:
Kenny Tilton <kt*****@nyc.rr.com> writes:

Dennis Lee Bieber wrote:

Just check the archives for comp.lang.ada and Ariane-5.

Short version: The software performed correctly, to specification
(including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
DESIGNED.
Nonsense.

No, that is exactly right. Like the man said, read the archives for
comp.lang.ada.


Yep, I was wrong. They /did/ handle the overflow by leaving the
operation unguarded, trusting it to eventually bring down the system,
their design goal. Apologies to Dennis.

From: http://www.sp.ph.ic.ac.uk/Cluster/report.html

"The internal SRI software exception was caused during execution of a
data conversion from 64-bit floating point to 16-bit signed integer
value. The floating point number which was converted had a value greater
than what could be represented by a 16-bit signed integer. This resulted
in an Operand Error. The data conversion instructions (in Ada code) were
not protected from causing an Operand Error, although other conversions
of comparable variables in the same place in the code were protected.
The error occurred in a part of the software that only performs
alignment of the strap-down inertial platform. This software module
computes meaningful results only before lift-off. As soon as the
launcher lifts off, this function serves no purpose."

That's all true, but it is only part of the story, and selectively quoting
just that part is misleading in this context.


I quoted the entire paragraph and it seemed conclusive, so I did not
read the rest of the report. ie, I was not being selective, I just
assumed no one would consider crashing to be a form of error-handling.
My mistake, they did.

Well, the original question was, "Would Lisp have helped?". Let's see.
They dutifully went looking for overflowable conversions and decided
what to do with each, deciding in this case to do something appropriate
for the A4 which was inappropriately allowed by management to go into
the A5 unexamined.

In Lisp, well, there are two cases. Did they have to dump a number into
a 16-bit hardware channel? There was some reason for the conversion. If
not, no Operand Error arises. It is an open question whether they decide
to check anyway for large values and abort if found, but this one arose
only during a sweep of all such conversions, so probably not.

But suppose they did have to dance to the 16-bit tune of some hardware
blackbox. they would go thru the same reasoning and decide to shut down
the system. No advantage to Lisp. But they'd have to do some work to
bring the system down, because there would be no overflow. So:

(define-condition e-hardware-broken (e-pre-ignition e-fatal)
((component-id :initarg :component-id :reader component-id)
(bad-value :initarg :bad-value :initform nil :reader bad-value)
...etc etc...

And then they would have to kick it off, and the exception handler of
the controlling logic would get a look at the condition on the way out.
Of course, it also sees operand errors, so one can only hope that at
some point during testing they for some reason had /some/ condition of
type e-pre-ignition get trapped by the in-flight supervisor, at which
point someone would have said either throw it away or why is that module
still running?

Or, if they were as meticulous with their handlers as they were with
numeric conversions, they would have during the inventory of explicit
conditions to handle gotten to the pre-ignition module conditions and
decided, "what does that software (which should not even be running)
know about the hardware that the rest of the system does not know?".

The case is not so strong now, but the odds are still better with Lisp.
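For readers following along in Python: the define-condition sketch above corresponds roughly to an exception hierarchy plus a supervising handler. The names below are hypothetical, mirroring the Lisp fragment:

```python
# Hypothetical Python analogue of the Lisp condition hierarchy above.
class EFatal(Exception):
    pass

class EPreIgnition(Exception):
    pass

class EHardwareBroken(EPreIgnition, EFatal):
    def __init__(self, component_id, bad_value=None):
        super().__init__(component_id)
        self.component_id = component_id
        self.bad_value = bad_value

def supervisor(task):
    """Controlling logic: sees every condition on the way out."""
    try:
        return task()
    except EPreIgnition as e:
        # A pre-ignition condition after lift-off: why is that module
        # still running?  Log and carry on instead of aborting.
        return "ignored: %s" % e.component_id
    except EFatal:
        return "abort"

def alignment_module():
    raise EHardwareBroken("SRI-2", bad_value=32768)

print(supervisor(alignment_module))  # ignored: SRI-2
```

The supervising handler can discriminate pre-ignition conditions from genuinely fatal ones by type, which is the kind of look at the outgoing condition Kenny describes.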

kenny
--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

Jul 18 '05 #41

Alex Martelli <al***@aleax.it> wrote in message news:<Ow*******************@news1.tin.it>...
Yes -- which is exactly why many non-programmers would prefer the
parentheses-less notation -- with more obvious names of course;-).
E.g.:
emitwarning URGENT "meltdown imminent!!!"
DOES look nicer to non-programmers than
emitwarning(URGENT, "meltdown imminent!!!")


It depends on the background of the non-programmer. I'd
say most non-programmers who turn into programmers have at
least some math experience, so they won't be scared to type
1 + 2 instead of "give me the answer to one plus two, thank you".
The latter group we can always guide to COBOL ;) (if my
understanding of that language is correct). And the former
group should be familiar with the function notation.

Perhaps, despite Guido's urge for "programming for everyone",
Python has been designed with a group in mind that has at
least some hope of becoming programmers ;)
You really don't get any of this "explicit is better than implicit"
thing, do you? Requiring people to write "return" instead of
leaving it as optional like in Ruby, is again one reason why
Pythonistas *like* Python instead of Ruby. You come to


I think that making return optional is slightly error-prone,
but it DOES make the language easier to learn for newbies --
newbies often err, in Python, by writing such code as
def double(x): x+x
which indicates the lack of 'return' IS more natural than its
mandatory presence.


You're right. That definition of double is closer to what
programming newbies probably have learned in math, than one
with "return". But that's not really the point I was arguing.
It was that Pythonistas prefer the explicit "return" and don't
want it to be changed -- so it's silly to present it as one of
Python's flaws.

Well ok, that was a pretty bold claim with no extensive
studies to back it up, and even contradicts my previously
expressed need to be compatible with math. So sure, it's
a tradeoff, but unlike the 'no-parens'-syntax, explicit
return adds to code readability without affecting the basic
notation as comprehensively as the lack of parens in
function calls (such as making higher-order functions less
intuitive to use).

Actually my preference is to either always require return when
there's something to return, or never allow return. Making it
optional just leads to less uniformity. And disallowing it
entirely in an imperative language wouldn't be such a wise
move either.
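The newbie error quoted above (def double(x): x+x) is easy to demonstrate: without an explicit return, Python evaluates the expression and discards it, so the function returns None:

```python
def double_wrong(x):
    x + x            # evaluated, then discarded -- the function returns None

def double_right(x):
    return x + x

print(double_wrong(3))   # None
print(double_right(3))   # 6
```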
Jul 18 '05 #42


Fergus Henderson <fj*@cs.mu.oz.au> writes:
<http://www.google.com.au/groups?as_umsgid=359BFC60.446B%40lanl.gov>.


The post at that URL talks about the culture of the Ariane team, but
I would say that it's even a more fundamental problem of our culture
in general: we build brittle stuff with very little margin for error.
Granted, it would be costly to increase physical margin, but in this
case, adopting a point of view more like _robotics_ could help. Even
in case of hardware failure, there's no reason to shut down the mind;
just go on with what you have.
--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Lying for having sex or lying for making war? Trust US presidents :-(
Jul 18 '05 #43



Markus Mottl wrote:
In comp.lang.functional Kenny Tilton <kt*****@nyc.rr.com> wrote:
Dennis Lee Bieber wrote:
Short version: The software performed correctly, to specification
(including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
DESIGNED.


Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html

Dennis is right: it was indeed a specification problem. AFAIK, the coder
had actually even proved formally that the exception could not arise
with the spec of Ariana 4. Lisp code, too, can suddenly raise unexpected
exceptions. The default behaviour of the system was to abort the mission
for safety reasons by blasting the rocket. This wasn't justified in this
case, but one is always more clever after the event...

"supposed to" fail? chya.

Indeed. Values this extreme were considered impossible on Ariane 4 and
taken as indication of such a serious failure that it would justify
aborting the mission.


Yes, I have acknowledged in another post that I was completely wrong in
my guesswork: everything was intentional and signed-off on by many.

A small side-note: as I now understand things, the idea was not to abort
the mission, but to bring down the system. The thinking was that the
error would signify a hardware failure, and with any luck shutting down
would mean either loss of the backup system (if that was where the HW
fault occurred) or correctly falling back on the still-functioning
backup system if the supposed HW fault had been in the primary unit. ie,
an HW fault would likely be isolated to one unit.

kenny
--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

Jul 18 '05 #44

On 20 Oct 2003 19:03:10 +0200, Pascal Bourguignon
<sp**@thalassa.informatimago.com> wrote:
Even in case of hardware failure, there's no reason to shut down the
mind; just go on with what you have.


When the thing that failed is a very large rocket having a very large
momentum, and containing a very large amount of very volatile fuel, it
makes sense to give up and shut down in the safest possible way.

Also keep in mind that this was a "can't possibly happen" failure
scenario. If you've deemed that it is something that can't possibly
happen, you are necessarily admitting that you have no idea how to
respond in a meaningful way if it somehow does happen.

-Steve

Jul 18 '05 #45


"Markus Mottl" <ma****@oefai.at> wrote in message
news:bn**********@bird.wu-wien.ac.at...
Note that I am not defending Ada in any way or arguing against FPLs:
in fact, being an FPL advocate myself, I do think that FPLs (including
Lisp) have an edge when it comes to writing safe code. But the Ariane
example just doesn't support this claim. It was an absolutely horrible
management mistake to not check old code for compliance with the new
spec. End of story...


The investigating commission reported about 5 errors that, in series,
allowed the disaster. As I remember, another non-programming/language
one was in mockup testing. The particular black box, known to be
'good', was not included, but just simulated according to its expected
behavior. If it had been included, and a flight simulated in real
time with appropriate tilting and shaking, it would probably have
given the spurious abort message that it did in the real flight.

TJR
Jul 18 '05 #46

In article <OA******************@twister.nyc.rr.com>, Kenny Tilton
<kt*****@nyc.rr.com> wrote:

[Discussing the Ariane failure]
A small side-note: as I now understand things, the idea was not to abort
the mission, but to bring down the system. The thinking was that the
error would signify a hardware failure, and with any luck shutting down
would mean either loss of the backup system (if that was where the HW
fault occurred) or correctly falling back on the still-functioning
backup system if the supposed HW fault had been in the primary unit. ie,
an HW fault would likely be isolated to one unit.


That's right. This is why hardware folks spend a lot of time thinking
about common mode failures, and why software folks could learn a thing or
two from the hardware folks in this regard.

E.
Jul 18 '05 #47

Gerrit Holl wrote:
Hannu Kankaanpää wrote:
Anyway, as a conclusion, I believe you'd be much happier with
Ruby than with Python. It doesn't do this weird "statement vs
expression" business, it has optional return, it has optional
parens with function calls, and probably more of these things
"fixed" that you consider Python's downsides. You're trying to
make Python into a language that already exists, it seems, but
for some reason Pythonistas are happy with Python and not rapidly
converting to Ruby or Haskell.


I wonder to what extent this statement is true. I know at least
1 Ruby programmer who came from Python, but this spot check should
not be trusted, since I know only 1 Ruby programmer and only 1
former Python programmer <g>. But I have heard that there are a
lot of former Python programmers in the Ruby community. I think
it is safe to say that of all languages Python programmers migrate
to, Ruby is the strongest magnet. OTOH, the migration of this part
of the Python community to Ruby may have been completed already,
of course.


Python and Ruby are IMHO very close, thus "compete" for roughly
the same "ecological niche". I still don't have enough actual
experience in "production" Ruby code to be able to say for sure,
but my impression so far is that -- while no doubt there's a LOT
of things for which they're going to be equally good -- Python's
simplicity and uniformity help with application development for
larger groups of programmers, while Ruby's extreme dynamism and
more variegated style may be strengths for experimentation, or
projects with one, or few and very well-attuned and experienced,
developers. I keep coming back to Python (e.g. because I have
no gmpy in Ruby for my own pet personal projects...:-) but I do
mean to devote more of my proverbial "copious spare time" to
Ruby explorations (e.g., porting gmpy, otherwise it's unlikely
I'll ever get all that much combinatorial arithmetics done...;-).
Alex

Jul 18 '05 #48

Pascal Bourguignon wrote:
The post at that url writes about the culture of the Ariane team, but
I would say that it's even a more fundamental problem of our culture
in general: we build brittle stuff with very little margin for error.
Granted, it would be costly to increase physical margin,
Which is exactly why the margin is kept as small as possible.
Occasionally, it will be /too/ small.

Has anybody seen a car model series where every model worked perfectly
from the first one?
From what I read, every new model has its small quirks and
"near-perfect" gotchas. The difference is just that you're not allowed
to do that in expensive things like rockets (which is, among many other
things, one of the reasons why space vehicles and aircraft are so d*mn
expensive: if something goes wrong, you can't just drive them to the
nearest parking lot and wait for maintenance and repair...)
but in this
case, adopting a point of view more like _robotics_ could help. Even
in case of hardware failure, there's no reason to shut down the mind;
just go on with what you have.


As Steve wrote, letting a rocket carry on regardless isn't a good idea
in the general case: it would be a major disaster if it made it to the
next coast and crashed into the next town. Heck, it would be enough if
the fuel tanks leaked, and the whole fuel rained down on a ship
somewhere in the Atlantic - most rocket fuels are toxic.

Regards,
Jo

Jul 18 '05 #49

Steve Schafer <se*@reply.to.header> writes:
On 20 Oct 2003 19:03:10 +0200, Pascal Bourguignon
<sp**@thalassa.informatimago.com> wrote:
Even in case of hardware failure, there's no reason to shut down the
mind; just go on with what you have.
When the thing that failed is a very large rocket having a very large
momentum, and containing a very large amount of very volatile fuel, it
makes sense to give up and shut down in the safest possible way.


You have to define a "dangerous" situation. Remember that this
"safest possible way" is usually to blow the rocket up. AFAIK, while
this parameter was out of range, there was no instability and the
rocket was not uncontrollable.

Also keep in mind that this was a "can't possibly happen" failure
scenario. If you've deemed that it is something that can't possibly
happen, you are necessarily admitting that you have no idea how to
respond in a meaningful way if it somehow does happen.


My point exactly. This "can't possibly happen" failure did happen, so
clearly it was not physically impossible, which means that the
problem was with the software. We know it, but what I'm saying is that
smarter software could have deduced it on the fly.

We all agree that it would be better to have a perfect world and
perfect, bug-free, software. But since that's not the case, I'm
saying that instead of having software that behaves like simple unix C
tools, where as soon as there is an unexpected situation, it calls
perror() and exit(), it would be better to have smarter software that
can try and handle UNEXPECTED error situations, including its own
bugs. I would feel safer in an AI rocket.
--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Lying for having sex or lying for making war? Trust US presidents :-(
Jul 18 '05 #50
