Bytes IT Community

Python compilers?

P: n/a
Is anyone working on a python-to-native compiler?
I'd be interested in taking a look.

Come to think of it, is anyone working on a sexpr-enabled version of
Python, or anything similar? I really miss my macros whenever I try to
use it...

Jul 18 '05 #1
58 Replies


Svein Ove Aas wrote:
Is anyone working on a python-to-native compiler?
I'd be interested in taking a look.

Come to think of it, is anyone working on a sexpr-enabled version of
Python, or anything similar? I really miss my macros whenever I try to
use it...


Lots of past threads on this, including this one:
http://groups.google.com/groups?&th=8f7b4867334c3d07

(Short answers: no, maybe, look at Psyco, PyPy, and others...)

-Peter
Jul 18 '05 #2

Svein Ove Aas wrote:

Is anyone working on a python-to-native compiler?
I'd be interested in taking a look.

Come to think of it, is anyone working on a sexpr-enabled version of
Python, or anything similar? I really miss my macros whenever I try to
use it...


I really wish there was a python-to-native compiler, a good one that
would produce fairly fast execution.

Mitchell Timin

--
"Many are stubborn in pursuit of the path they have chosen, few in
pursuit of the goal." - Friedrich Nietzsche

http://annevolve.sourceforge.net is what I'm into nowadays.
Humans may write to me at this address: zenguy at shaw dot ca
Jul 18 '05 #3

Svein Ove Aas wrote:
Is anyone working on a python-to-native compiler?
I'd be interested in taking a look.


What are you trying to achieve?

If it's faster code execution, the primary slowdown with a very
high-level language like Python is caused by high-level data structures
(introspection, everything being an object, etc.), not the code itself.
A native compiler would still have to use high-level data structures to
work with all Python code, so the speed increase wouldn't be very much.

If it's ease of distribution you're looking for, I think distutils can
make standalone programs on Windows, and most Linux distros have Python
installed by default.

If you just think a compiler would be cool (or would like to see how it
would be done), check out Psyco, Pyrex, and probably some other
projects. Pyrex even combats the speed issue by allowing native C types
to be used in addition to Python high-level types.
Jul 18 '05 #4

Svein Ove Aas wrote:
Is anyone working on a python-to-native compiler?
I'd be interested in taking a look.

Come to think of it, is anyone working on a sexpr-enabled version of
Python, or anything similar? I really miss my macros whenever I try to
use it...


Look at Starkiller: http://www.python.org/pycon/dc2004/papers/1/

The first step is to create a type-inference system, and that is what Starkiller provides.
Michael Salib has said he is working on a C++ translator for Python...

--
Yermat

Jul 18 '05 #5

Leif K-Brooks wrote:
Svein Ove Aas wrote:
Is anyone working on a python-to-native compiler?
I'd be interested in taking a look.
What are you trying to achieve?

If it's faster code execution, the primary slowdown with a very
high-level language like Python is caused by high-level data structures
(introspection, everything being an object, etc.), not the code itself.
A native compiler would still have to use high-level data structures to
work with all Python code, so the speed increase wouldn't be very much.


I'd like to point out the usual suspects: Lisp compilers.
*They* somehow manage to get within 2x of C++ speed, so why can't Python?
If it's ease of distribution you're looking for, I think distutils can
make standalone programs on Windows, and most Linux distros have Python
installed by default.
Nope.
If you just think a compiler would be cool (or would like to see how it
would be done), check out Psyco, Pyrex, and probably some other
projects. Pyrex even combats the speed issue by allowing native C types
to be used in addition to Python high-level types.


Erk.
Seems to me that you want 'smarter', not 'worse'. I can't take a language
seriously if it says that 1/3 is 0.33333... .
Jul 18 '05 #6

Svein Ove Aas wrote:
Erk.
Seems to me that you want 'smarter', not 'worse'. I can't take a language
seriously if it says that 1/3 is 0.33333... .


What is that? Are you arguing for an integer result, a fixed result or
a rational result?

Cheers,
Brian

Jul 18 '05 #7

Svein Ove Aas <sv************@brage.info> writes:
Leif K-Brooks wrote:
Svein Ove Aas wrote:
Is anyone working on a python-to-native compiler?
I'd be interested in taking a look.
What are you trying to achieve?

If it's faster code execution, the primary slowdown with a very
high-level language like Python is caused by high-level data structures
(introspection, everything being an object, etc.), not the code itself.
A native compiler would still have to use high-level data structures to
work with all Python code, so the speed increase wouldn't be very much.


I'd like to point out the usual suspects: Lisp compilers.
*They* somehow manage to get within 2x of C++ speed, so why can't Python?


Type declarations. Lots of effort by seriously smart cookies.

We're working on the latter :-)

Cheers,
mwh

-- say-hi-to-the-flying-pink-elephants-for-me-ly y'rs,

No way, the flying pink elephants are carrying MACHINE GUNS!
Aiiee!! Time for a kinder, gentler hallucinogen...
-- Barry Warsaw & Greg Ward, python-dev
Jul 18 '05 #8

Leif K-Brooks <eu*****@ecritters.biz> writes:
Svein Ove Aas wrote:
Is anyone working on a python-to-native compiler?
I'd be interested in taking a look.


What are you trying to achieve?

If it's faster code execution, the primary slowdown with a very
high-level language like Python is caused by high-level data
structures (introspection, everything being an object, etc.), not the
code itself. A native compiler would still have to use high-level data
structures to work with all Python code, so the speed increase
wouldn't be very much.


Oh please!

Native compilers for other languages just as dynamic as Python
exist. These compilers manage to achieve very significant speed
increases[*].

Psyco is a native compiler, of sorts, for Python, and it manages to
produce dramatic improvements (in the areas where it works).

While it's true that often speed doesn't matter, and many of the
criticisms levelled at Python for being too slow are completely
unfounded in real world situations, this is no reason for Pythonistas
to

a) be happy about the fact that Python is slow,

b) be convinced that Python _must_ be slow.

The sooner "we" stop believing that Python's flexibility comes at the
unavoidable cost of piss-poor runtime performance, and the sooner we
accept that it would be useful to have a Python which maintains its
flexibility but runs like a bat out of hell (ie, the sooner we stop
making excuses for Python's lack of speed), the sooner we will get one.

Fortunately there is already a bunch of people who understand this,
and are trying to do something about it.

[*] A prime example is the Common Lisp implementation CMUCL. Ironically
enough, CMUCL's compiler is called ... Python.
Jul 18 '05 #9

In article <40***************@shaw.ca>, <Se******@SeeBelow.Nut> wrote:
Jul 18 '05 #10

On 2004-05-18, Svein Ove Aas <sv************@brage.info> wrote:
Seems to me that you want 'smarter', not 'worse'. I can't take a language
seriously if it says that 1/3 is 0.33333... .


Dude, didn't you take high-school math? 1/3 _is_ 0.33333...

--
Grant Edwards grante Yow! My forehead feels
at like a PACKAGE of moist
visi.com CRANBERRIES in a remote
FRENCH OUTPOST!!
Jul 18 '05 #11

Brian Quinlan wrote:
Svein Ove Aas wrote:
Erk.
Seems to me that you want 'smarter', not 'worse'. I can't take a
language seriously if it says that 1/3 is 0.33333... .


What is that? Are you arguing for an integer result, a fixed result or
a rational result?


A rational result, of course. If I'm okay with losing precision, I'll
*tell* the language I'm okay with losing precision.
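For reference, Python later grew exactly this option in the standard library: the fractions module (added in 2.6). A minimal sketch of exact rational division:

```python
from fractions import Fraction

# Division of Fractions stays exact: 1/3 is kept as a ratio of integers.
third = Fraction(1, 3)
print(third)        # 1/3
print(third * 3)    # 1 -- no 0.999... in sight

# Losing precision only happens when you explicitly ask for a float:
print(float(third))
```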

Jul 18 '05 #12

On Tuesday, 18 May 2004 at 13:41, Jacek Generowicz wrote:
Native compilers for other languages just as dynamic as Python
exist. These compilers manage to achieve very significant speed
increases[*].


You are again referring to LISP as an example of a dynamic language which, when
compiled, gives huge speed increases. This is true in some respects, in others
it isn't. LISP has the advantage that type-inference may be used throughout
the program to create one version of each function, which can then be
compiled. Of course it still has to call into runtime functions to do the
high-level work, but there is actually only one representation of each
finished LISP program, and only one set and one proper order of
runtime-functions to call.

In Python this isn't true. Python, unlike LISP, is "completely" dynamic,
meaning that it's pretty much impossible to do type-inference for each function
that is called (even checking types isn't possible). E.g. how do you expect
type-inference to work with the pickle module? string -> something/Error
would be the best description of what pickle does. For the function which calls
pickle, do you want to create versions for each possible output of Pickle?
Which outputs of Pickle are possible? (depends on the loaded modules, which
can be loaded at runtime) There is no (sane) way to create machine-code which
calls into the appropriate (low-level) Python-runtime functions (such as
Py_List*, Py_Dict*, etc.) for such a method, at least not at compile-time.
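The pickle point is easy to see in code; a minimal sketch (standard-library names only): the concrete class of what pickle.loads returns depends entirely on the bytes it is fed, which is exactly what defeats static type-inference.

```python
import pickle

# pickle.loads maps bytes -> object: the result's concrete class is
# determined by the data alone, not by anything visible at the call site.
payloads = [pickle.dumps(v) for v in (42, [1, 2], {'a': 1})]
results = [pickle.loads(p) for p in payloads]
print([type(r).__name__ for r in results])  # ['int', 'list', 'dict']
```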

At runtime, this is possible. See what psyco does. There's a nice presentation
on the psyco website which explains what it does, and I guess you'll
understand, once you've seen it, why writing a "compiler" for Python is
pretty much impossible.

Heiko.

Jul 18 '05 #13

Grant Edwards <gr****@visi.com> writes:
Dude, didn't you take high-school math? 1/3 _is_ 0.33333...


No, because at some point you will stop writing 3's, either out of
boredom, exhaustion or because you need to pee. At that instant, you
introduce a rounding error, making 3 * 1/3 = 0.99999999999... instead
of 1.0
Jul 18 '05 #14

Leif K-Brooks wrote:

Svein Ove Aas wrote:
Is anyone working on a python-to-native compiler?
I'd be interested in taking a look.


What are you trying to achieve?

If it's faster code execution, the primary slowdown with a very
high-level language like Python is caused by high-level data structures
(introspection, everything being an object, etc.), not the code itself.
A native compiler would still have to use high-level data structures to
work with all Python code, so the speed increase wouldn't be very much.


Yes, fast execution. I have been using C. In my applications there is
a population of "chromosomes" which are arrays of floats, about 2 to 5 k
in length. Then there are subroutines which operate on a chromosome
using pointers. For example, the "crossover" routine uses two pointers
to swap portions of two chromosomes. My software sometimes runs for
hours, performing many millions of operations like these. Clearly, speed
of execution is of dramatic importance.

A related problem is that it seems to be a big deal to call C routines
from Python. I have not actually tried it, because when I read about
how it's done I was not able to understand it.

Mitchell Timin

--
"Many are stubborn in pursuit of the path they have chosen, few in
pursuit of the goal." - Friedrich Nietzsche

http://annevolve.sourceforge.net is what I'm into nowadays.
Humans may write to me at this address: zenguy at shaw dot ca
Jul 18 '05 #15

Heiko Wundram wrote:
On Tuesday, 18 May 2004 at 13:41, Jacek Generowicz wrote:
Native compilers for other languages just as dynamic as Python
exist. These compilers manage to achieve very significant speed
increases[*].


You are again referring to LISP as an example of a dynamic language which
when compiled gives huge speed increases. This is true in some respect,
in others it isn't. LISP has the advantage that type-inference may be
used throughout the program to create one version of each function,
which can then be compiled. Of course it still has to call into runtime
functions to do the high-level work, but there is actually only one
representation of each finished LISP program, and only one set and one
proper order of runtime-functions to call.

In Python this isn't true. Python, unlike LISP, is "completely"
dynamic, meaning that it's pretty much impossible to do type-inference for
each function that is called (even checking types isn't possible). E.g.
how do you expect type-inference to work with the pickle module? string
-> something/Error would be the best description of what pickle does. For
the function which calls pickle, do you want to create versions for each
possible output of Pickle? Which outputs of Pickle are possible?
(depends on the loaded modules, which can be loaded at runtime) There is
no (sane) way to create machine-code which calls into the appropriate
(low-level) Python-runtime functions (such as Py_List*, Py_Dict*, etc.)
for such a method, at least not at compile-time.


I suppose that's true; pickle is an exception, and the compiler would pick
that up. (The Lisp (print) function is approximately equivalent, at least
when *print-readably* is true.)

What you're claiming, though, is that it's possible to write Python code
that can't easily be translated to equivalent Lisp code. Can you give an
example?
Jul 18 '05 #16

* Tor Iver Wilhelmsen (2004-05-18 17:26 +0100)
Grant Edwards <gr****@visi.com> writes:
Dude, didn't you take high-school math? 1/3 _is_ 0.33333...


No, because at some point you will stop writing 3's, either out of
boredom, exhaustion or because you need to pee. At that instant, you
introduce a rounding error, making 3 * 1/3 = 0.99999999999... instead
of 1.0


Must have been a long time since you went to school... 1/3 is
/exactly/ 0.3...: http://mathworld.wolfram.com/RepeatingDecimal.html

Thorsten
Jul 18 '05 #17

Heiko Wundram wrote:
[...]

In Python this isn't true. Python, unlike LISP, is "completely" dynamic,
meaning that it's pretty much impossible to do type-inference for each function
that is called (even checking types isn't possible). E.g. how do you expect
type-inference to work with the pickle module? string -> something/Error
would be the best description of what pickle does. For the function which calls
pickle, do you want to create versions for each possible output of Pickle?
Which outputs of Pickle are possible? (depends on the loaded modules, which
can be loaded at runtime) There is no (sane) way to create machine-code which
calls into the appropriate (low-level) Python-runtime functions (such as
Py_List*, Py_Dict*, etc.) for such a method, at least not at compile-time.

At runtime, this is possible. See what psyco does. There's a nice presentation
on the psyco-website which explains what it does, and I guess you'll
understand when you see that why writing a "compiler" for Python is pretty
impossible.


This is where you are wrong!
It can be known at compile time if you know every module that will be
imported. That is called a "closed environment".

If that is the case, you would be able to compile the program even if you
were not allowed to do incremental compilation... It just means
that you would need to recompile everything each time you modified something.

The demonstration is quite easy:
In a "closed environment" there is a finite number of classes, so you
just have to create as many specialized functions as there are classes. Then,
where you can infer the type, call the specialized function directly. Elsewhere,
just call a runtime dispatcher. In fact, this is already used in some languages
like Eiffel, with certain optimizations.
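A toy sketch of the scheme just described, with all class and function names invented for illustration: in a closed environment with two known classes, a compiler could emit one specialized routine per class and fall back to a runtime dispatcher wherever inference fails.

```python
# Two classes known at "compile" time -- a closed environment.
class Meters:
    def __init__(self, v):
        self.v = v

class Feet:
    def __init__(self, v):
        self.v = v

# One specialized function per known class, as the post suggests...
def double_meters(m):
    return Meters(m.v * 2)

def double_feet(f):
    return Feet(f.v * 2)

# ...and a runtime dispatcher for call sites where the type can't be inferred.
_DISPATCH = {Meters: double_meters, Feet: double_feet}

def double(x):
    return _DISPATCH[type(x)](x)

print(double(Meters(3)).v)  # 6
```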

The problem is that many python programs are not "closed" at
compile-time, ie they import or eval stuff only known at run-time.

impossible n'est pas français ("impossible is not French") ;-)

--
Yermat

Jul 18 '05 #18

Leif K-Brooks <eu*****@ecritters.biz> wrote:
Is anyone working on a python-to-native compiler?
I'd be interested in taking a look.
If it's faster code execution, the primary slowdown with a very
high-level language like Python is caused by high-level data structures
(introspection, everything being an object, etc.), not the code itself.
A native compiler would still have to use high-level data structures to
work with all Python code, so the speed increase wouldn't be very much.
I think the main slowdown most people would see is the startup time -
the interpreter takes way too long to start (which is why the company where I
work won't replace Perl/PHP with Python).
If it's ease of distribution you're looking for, I think distutils can
make standalone programs on Windows, and most Linux distros have Python
installed by default.
Oh, that one is funny - do most distros have PyQt and wxPython (and
their supporting Qt/GTK libs) installed by default, and which ones
come with 2.3 by default?
If you just think a compiler would be cool (or would like to see how it
would be done), check out Psyco, Pyrex, and probably some other
projects. Pyrex even combats the speed issue by allowing native C types
to be used in addition to Python high-level types.


I thought Pyrex was a hybrid of C and Python (like Jython/Java), not
actually a Python-to-C converter? And Psyco is just a different VM,
isn't it?

I could really push Python where I work if there was a native
compiler. My company uses C/C++/Java/Qt and we're looking at QSA as a
way to allow the user to script things, but as all of our products
integrate with our software protection system, we can't be
distributing source or easily decompiled bytecode!

We could replace the whole lot with PyQt and use an embedded Python
interpreter for the scripting! Ah the frustration :-(
Jul 18 '05 #19

simo wrote:

[...]
I thought Pyrex was a hybrid of C and Python (like Jython/Java), not
actually a Python-to-C converter? And Psyco is just a different VM,
isn't it?
Pyrex is Python plus access to C structures and type declarations. But
the current implementation creates intermediate C files.

Psyco is not a different VM; it's like the JIT of Java. It's a just-in-time
compiler, i.e. it runs on top of the CPython VM.
[...]


--
Yermat

Jul 18 '05 #20

Se******@SeeBelow.Nut wrote:
A related problem is that it seems to be a big deal to call C routines
from Python. I have not actually tried it, because when I read about
how it's done I was not able to understand it.


Use SWIG. If your C header file is well written, you may not need to
give any additional information to SWIG.
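For reference, the standard library's ctypes module (added in Python 2.5, after this thread) offers another route that needs no wrapper generation at all; a sketch calling the C math library directly. Library lookup is platform-dependent, and the 'libm.so.6' fallback name is a glibc assumption:

```python
import ctypes
import ctypes.util

# Locate and load the C math library; the hard-coded fallback name is a
# glibc assumption for systems where find_library() comes up empty.
libm = ctypes.CDLL(ctypes.util.find_library('m') or 'libm.so.6')

# Declare the C prototype so ctypes converts values correctly:
#     double sqrt(double);
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # calls the C routine directly, no wrapper code
```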

Roger
Jul 18 '05 #21

Thorsten Kampe <th******@thorstenkampe.de>
(news:1m***************@thorstenkampe.de) wrote:
* Tor Iver Wilhelmsen (2004-05-18 17:26 +0100)
Grant Edwards <gr****@visi.com> writes:
Dude, didn't you take high-school math? 1/3 _is_ 0.33333...
No, because at some point you will stop writing 3's, either out of
boredom, exhaustion or because you need to pee. At that instant, you
introduce a rounding error, making 3 * 1/3 = 0.99999999999... instead
of 1.0


Must have been a long time since you went to school... 1/3 is
/exactly/ 0.3...: http://mathworld.wolfram.com/RepeatingDecimal.html


Even worse....
0.999999999999999999999... is exactly 1 :)

Thorsten

Jul 18 '05 #22

Mitja wrote:
Thorsten Kampe <th******@thorstenkampe.de>
(news:1m***************@thorstenkampe.de) wrote:
* Tor Iver Wilhelmsen (2004-05-18 17:26 +0100)
Grant Edwards <gr****@visi.com> writes:

Dude, didn't you take high-school math? 1/3 _is_ 0.33333...

No, because at some point you will stop writing 3's, either out of
boredom, exhaustion or because you need to pee. At that instant, you
introduce a rounding error, making 3 * 1/3 = 0.99999999999... instead
of 1.0


Must have been a long time since you went to school... 1/3 is
/exactly/ 0.3...: http://mathworld.wolfram.com/RepeatingDecimal.html


Even worse....
0.999999999999999999999... is exactly 1 :)


Only for infinite counts of '9', and computers don't do infinity.
Jul 18 '05 #23

Tor Iver Wilhelmsen wrote:
Grant Edwards <gr****@visi.com> writes:
Dude, didn't you take high-school math? 1/3 _is_ 0.33333...


No, because at some point you will stop writing 3's, either out of
boredom, exhaustion or because you need to pee. At that instant, you
introduce a rounding error, making 3 * 1/3 = 0.99999999999... instead
of 1.0


Actually, Grant probably meant the "..." (which is an ellipsis,
meaning it substitutes for something else that is left out) to
represent "3 repeating to infinity", the same as putting a dot
over the last 3, or a bar over the last three 3s, or whatever
other convention you might have seen. Of course, it's just a
convention, so perhaps someone else would think it meant "3s
repeating to the limit of the computer's precision" or something
like that...

-Peter
Jul 18 '05 #24

Se******@SeeBelow.Nut wrote:
Leif K-Brooks wrote:

What are you trying to achieve?


Yes, fast execution. I have been using C. In my applications there is
a population of "chromosomes" which are arrays of floats, about 2 to 5 k
in length. Then there are subroutines which operate on a chromosome
using pointers. For example, the "crossover" routine uses two pointers
to swap portions of two chromosomes. My software sometimes runs for
hours, performing many millions of operations like these. Clearly, speed
of execution is of dramatic importance.


The bottleneck is almost certainly in evaluating the fitness
function, not in performing the mutations and cross-overs.
What does your fitness function do with all those floats?
Perhaps it can be handled much faster with one of the numeric
extensions for Python... or with Pyrex.

-Peter
Jul 18 '05 #25

[[ Note for Peter Hansen: all uses of the word "compiler" below are
understood to refer to an optimizing compiler to machine language, and
I mean a real, not virtual, machine. ]]
Heiko Wundram wrote:
On Tuesday, 18 May 2004 at 13:41, Jacek Generowicz wrote:
Native compilers for other languages just as dynamic as Python
exist. These compilers manage to achieve very significant speed
increases[*].

You are again referring to LISP as an example of a dynamic language
which when compiled gives huge speed increases. This is true in some
respect, in others it isn't. LISP has the advantage that
type-inference may be used throughout the program to create one
version of each function, which can then be compiled.
Can you give an example of a function Lisp is able to compile that
Python manifestly couldn't? I don't buy it.
[snip] In Python this isn't true. Python, unlike LISP, is "completely"
dynamic, meaning that it's pretty much impossible to do type-inference
for each function that is called (even checking types isn't
possible).
I don't follow you. In what way is Python dynamic that Lisp isn't?
And Python certainly can check types.

E.g. how do you expect type-inference to work with the pickle
module? string -> something/Error would be the best description what
pickle does. For the function which calls pickle, do you want to
create versions for each possible output of Pickle? Which outputs
of Pickle are possible?
You can write a pickling package in Lisp. I think the Lisp compiler
would handle such a package fine, and I see no reason why a
hypothetical Python compiler wouldn't.

(depends on the loaded modules, which can be loaded at runtime)
There is no (sane) way to create machine-code which calls into the
appropriate (low-level) Python-runtime functions (such as Py_List*,
Py_Dict*, etc.) for such a method, at least not at compile-time.
That might be true, but pickling is only one module. The fact that
I'm able to write a pickling package in Lisp doesn't make it
impossible to write a Lisp compiler, and the fact that a pickling module
exists in Python doesn't make it impossible to write a Python
compiler.

What would a Lisp compiler do faced with a Lisp pickling package?

At runtime, this is possible. See what psyco does. There's a nice
presentation on the psyco-website which explains what it does, and I
guess you'll understand when you see that why writing a "compiler"
for Python is pretty impossible.


Well, I don't buy it, and I don't see any fundamental difference between
Python and Lisp dynamism. The only thing you've demonstrated is that there
is some code in Python that could make optimizing difficult--a
statement which is true of Lisp also--but it's a specific case that is
not applicable generally.
--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #26

Peter Hansen wrote:
<snip>
The bottleneck is almost certainly in evaluating the fitness
function, not in performing the mutations and cross-overs.
What does your fitness function do with all those floats?
Perhaps it can be handled much faster with one of the numeric
extensions for Python... or with Pyrex.


You are right, the worst bottleneck is the fitness function. Every
population member is an ANN (artificial neural network) and the ANNout()
function must be called for each fitness evaluation. You can see there
is a lot of looping. This C code runs very quickly:
-----------------------------------------------------------------------
/* neursubs.c - for computing output of artificial neural network - for
EvSail-2.2
M. Timin, August, 2003

piecewise parabolic approximator replaced conventional sigmoid,
October, 2003
*/
#include <math.h>
#include <stdlib.h>

#define NEUR_MAX 80 /* maximum number of neurons in a
layer */
#define LOOP(i,N) for(i=0; i<N; i++) /* be careful using this! */

/* This 4 piece curve is a good sigmoid approximator. */
float sigAprox(register float x) {
register float z;

if(x <= -4.0)
return 0.0;
else if(x <= 0.0) {
z = x + 4.0;
return z*z/32;
}
else if(x < 4.0) {
z = x - 4.0;
return 1.0 - z*z/32;
}
else
return 1.0;
}

/* oneNeuron() uses a vector of inputs and a vector of weights, and the
sigmoid activity
function, to compute the output of one neuron. It is assumed that an
extra
input of 1.0 is at the beginning of the input vector, and that there
is a
corresponding value at the beginning of the weight vector. This is
actually
the bias. So upon entering this function, wptr points to the bias
and inptr
points to 1.0. The inputCount should include the bias, so it should
be one
more than the number of inputs. */
float oneNeuron(float *inptr, float *wptr, int inputCount) {
int i;
float sum = 0.0;

LOOP(i, inputCount) { /* summation loop */
sum += *inptr++ * *wptr++;
}
return sigAprox(sum); /* this is the sigmoid formula */
}

/* This is the routine which calculates the outputs of the ANN. Before
calling it the input
values must be in the array pointed to by inptr. Values of the
outputs will be placed
in the array pointed to by outValues */
void ANNout(int numIn, /* number of inputs to the ANN */
int numHid, /* number of neurons that receive the inputs */
int numOut, /* number of final output neurons */
float *inptr, /* pointer to the array of input values */
float *wptr, /* pointer to array of weights & biases in a
specific order */
float *outValues) /* pointer to where to write the output */
{
float t1[NEUR_MAX]; /* NEUR_MAX defined above */
float t2[NEUR_MAX];
int i;

/* prepare the input array: */
t1[0] = 1.0;
LOOP(i, numIn)
t1[i+1] = *inptr++;
/* compute and store intermediate outputs: */
t2[0] = 1.0;
LOOP(i, numHid)
{
t2[i+1] = oneNeuron(t1, wptr, numIn+1);
wptr += numIn+1;
}
/* do similar for final layer, writing to destination */
LOOP(i, numOut)
{
outValues[i] = oneNeuron(t2, wptr, numHid+1);
wptr += numHid+1;
}
}
-----------------------------------------------------------------------
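As suggested upthread, those inner loops map naturally onto an array library. A hedged sketch of the same piecewise-parabolic sigmoid and forward pass using NumPy (the successor of the Numeric-style extensions of that era); the function names are invented, and the weight layout is assumed to match the C code's bias-first convention:

```python
import numpy as np

def sig_aprox(x):
    # Same 4-piece parabolic sigmoid approximator as the C sigAprox():
    # 0 below -4, two parabola arcs through (0, 0.5), 1 above 4.
    x = np.asarray(x, dtype=float)
    return np.select(
        [x <= -4.0, x <= 0.0, x < 4.0],
        [0.0, (x + 4.0) ** 2 / 32.0, 1.0 - (x - 4.0) ** 2 / 32.0],
        default=1.0,
    )

def ann_out(inputs, w_hid, w_out):
    # w_hid has shape (num_hid, num_in + 1) and w_out (num_out, num_hid + 1);
    # column 0 of each row is the bias, matching the C weight ordering.
    t1 = np.concatenate(([1.0], inputs))                  # prepend bias input
    t2 = np.concatenate(([1.0], sig_aprox(w_hid @ t1)))   # hidden layer
    return sig_aprox(w_out @ t2)                          # output layer
```

The per-neuron summation loops become matrix-vector products, which run in compiled code inside the extension rather than in the interpreter.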
Humans may write to me at this address: zenguy at shaw dot ca
Jul 18 '05 #27

Se******@SeeBelow.Nut wrote:
Yes, fast execution. I have been using C. In my applications there is
a population of "chromosomes" which are arrays of floats, about 2 to 5 k
in length. Then there are subroutines which operate on a chromosome
using pointers. For example, the "crossover" routine uses two pointers
to swap portions of two chromosomes.


Take a look at Pyrex. It may be just what you need:

http://www.cosc.canterbury.ac.nz/~greg/python/Pyrex/

--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg

Jul 18 '05 #28

Carl Banks <im*****@aerojockey.invalid> writes:
I don't follow you. In what way is Python dynamic that Lisp isn't?


>>> class foo:
...     def bar(self, x):
...         return x*x
...
>>> a = foo()
>>> a.bar(3)
9
>>> a.bar = lambda x: x*x*x
>>> a.bar(3)
27

Jul 18 '05 #29

Svein Ove Aas <sv************@brage.info> writes:
What you're claiming, though, is that it's possible to write Python code
that can't easily be translated to equivalent Lisp code. Can you give an
example?


class C:
def __eq__(self, other):
return True

for example. For better or worse, Python *is* more dynamic than
Common Lisp, and this *does* contribute to making it harder to make
Python go fast.

I wrote a rant about this subject:

http://starship.python.net/crew/mwh/...ng-python.html

Cheers,
mwh

--
Ability to type on a computer terminal is no guarantee of sanity,
intelligence, or common sense.
-- Gene Spafford's Axiom #2 of Usenet
Jul 18 '05 #30

Svein Ove Aas <sv************@brage.info> writes:
Mitja wrote:
Thorsten Kampe <th******@thorstenkampe.de>
(news:1m***************@thorstenkampe.de) wrote:
* Tor Iver Wilhelmsen (2004-05-18 17:26 +0100)
Grant Edwards <gr****@visi.com> writes:

> Dude, didn't you take high-school math? 1/3 _is_ 0.33333...

No, because at some point you will stop writing 3's, either out of
boredom, exhaustion or because you need to pee. At that instant, you
introduce a rounding error, making 3 * 1/3 = 0.99999999999... instead
of 1.0

Must have been a long time since you went to school... 1/3 is
/exactly/ 0.3...: http://mathworld.wolfram.com/RepeatingDecimal.html


Even worse....
0.999999999999999999999... is exactly 1 :)


Only for infinite counts of '9', and computers don't do infinity.


We need to stop randomly mixing exact math with computer
implementations (and approximations) thereof.

"<digit><digit>..." is the accepted ASCII rendition of the repeating
overbar, and thus explicitly means "on to infinity". 0.99... exactly
equals 1.

If you want to shift the discussion to computer implementations then
that is a different story. We can talk binary representations of
decimal fractions, and the value of rationals as a useful
representation.

But why not also complain that Python does not have a complete
representation of pi. After all, the value of pi is known to well
beyond the IEEE 80-bit or 64-bit or whatever that an implementation
provides. Even if we did mp math and did really long pi
representations, they would of course not be exact. "e" isn't handled
completely either. Why not complain about those?

We don't complain because the available values are "good enough".
IEEE 754 64-bit reals (internally handled as IEEE 754 80-bit) are
good-enough for most numerical needs. I'll admit, the US debt needs
some extended precision :-( but most numerical analysis gets by with
epsilons under 1e-14.

Hey, while we are on the subject of exact representation, what's with
multithreading? My computer has only one CPU. What's going on
here???? It's a lie, a LIE I tell you...

--
ha************@boeing.com
6-6M21 BCA CompArch Design Engineering
Phone: (425) 342-0007
Jul 18 '05 #31

Paul Rubin <http://ph****@NOSPAM.invalid> wrote in message news:<7x************@ruckus.brouhaha.com>...
Carl Banks <im*****@aerojockey.invalid> writes:
I don't follow you. In what way is Python dynamic that Lisp isn't?


>>> class foo:
...     def bar(self, x):
...         return x*x
...
>>> a = foo()
>>> a.bar(3)
9
>>> a.bar = lambda x: x*x*x
>>> a.bar(3)
27

Well, come on, of course there's going to be some things here and
there you can do in one and not the other. In what way is Python dynamic
that Lisp isn't to such an extent that it would cripple any attempts
to compile it?
--
CARL BANKS
Jul 18 '05 #32

P: n/a
On 19 May 2004 14:54:48 -0700,
im*****@aerojockey.com (Carl Banks) wrote:
Paul Rubin <http://ph****@NOSPAM.invalid> wrote in message
news:<7x************@ruckus.brouhaha.com>...
Carl Banks <im*****@aerojockey.invalid> writes:
> I don't follow you. In what way is Python dynamic that Lisp isn't?


>>> class foo:
...     def bar(self, x):
...         return x*x
...
>>> a = foo()
>>> a.bar(3)
9
>>> a.bar = lambda x: x*x*x
>>> a.bar(3)
27
>>>

Well, come on, of course there's going to be some things here
and there you can do in one and not the other. In what way is Python
dynamic that Lisp isn't to such an extent that it would cripple
any attempts to compile it?


It's not a matter of not being able to compile python; it's a
matter of what sort of benefits you'd gain.

For example:

if x.y.z == a.b.c:
    print 'equal'

What is x.y.z? Who knows? The object to which x is bound might
create y on the fly, based on information not available to the
compiler (see __getattr__ and properties). Once the run-time
system asks object x for attribute y, it (the run-time) has to go
through the whole process again to determine z (and, therefore,
x.y.z). Similar for a.b.c, and any of that code might have
redefined what it means for such objects to be equal, which means
that the compiler can't even know what sorts of equality tests
might be available at the time that the code executes, let alone
generate a simple compare instruction.

This is important: There is little, if any, difference between
that code and running everything through the interpreter anyway.

For that matter, simply accessing x.y might change a.b. (FWIW,
though, the ensuing programmer-cide would be entirely justified.)
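
A minimal sketch of the kind of code that defeats static lookup (class and attribute names are made up for illustration):

```python
class Lazy(object):
    # __getattr__ runs only when normal lookup fails, so the attribute
    # below is manufactured at the moment of access -- invisible to any
    # ahead-of-time compiler.
    def __getattr__(self, name):
        return len(name)

x = Lazy()
print(x.y)      # 1 -- 'y' never existed until this very lookup
print(x.spam)   # 4
x.y = 'stored'  # now a real instance attribute...
print(x.y)      # ...so __getattr__ is bypassed: prints 'stored'
```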

"Plain" Common Lisp code has (mostly) the same issues, but
performance critical Common Lisp programs contain strategically
placed type declarations (thus reducing the dynamicity of the
language) to help the compiler to know what it's up against.

There are proposals to add type declarations to Python; google is
your friend (see also 'decorators'). Similar for JIT compilers
(psyco falls into this category).

Regards,
Heather

--
Heather Coppersmith
That's not right; that's not even wrong. -- Wolfgang Pauli
Jul 18 '05 #33

P: n/a
im*****@aerojockey.com (Carl Banks) writes:
>>> a.bar = lambda x: x*x*x
>>> a.bar(3)
27


Well, come on, of course there's going to be some things here and
there you can do in one and not the other. In what way is Python dynamic
that Lisp isn't to such an extent that it would cripple any attempts
to compile it?


The example above kills any attempt to turn a.bar() into a static
procedure call. There's more like it, e.g. the existence of the
locals() dictionary and the ability to modify it. However, it should
be possible to define a reasonable subset of Python that can compile
into good code. The stuff that makes compilation difficult makes the
code unmaintainable too.

I do think that Python's designers should wait til PyPy with
native-code backends has been deployed for a while before defining too
much of Python 3.0, so we can first gain some experience with compiled
Python. Python should evolve towards being compiled most of the time.
Jul 18 '05 #34

P: n/a
Paul Rubin wrote:


im*****@aerojockey.com (Carl Banks) writes:
> >>> a.bar = lambda x: x*x*x
> >>> a.bar(3)
> 27
Well, come on, of course there's going to be some things here and
there you can do in one and not the other. In what way is Python dynamic
that Lisp isn't to such an extent that it would cripple any attempts
to compile it?


The example above kills any attempt to turn a.bar() into a static
procedure call.


Of course it does--but it's one method. A compiler, if it's good,
would only make the optimization on methods named "bar", and it could
probably pare the number of possible classes it could apply to down
to only a few.

I mean you could have a Turing nightmare on your hands, with all kinds
of crazy setattrs and execs and stuff, in both Python and Lisp, and
then there's not much a compiler could do but emit totally general
code. I assume Lisp compilers do this sometimes.

There's more like it, e.g. the existence of the
locals() dictionary and the ability to modify it.
New feature? I didn't think modifying the dict returned by locals
affected the variables.

However, it should
be possible to define a reasonable subset of Python that can compile
into good code. The stuff that makes compilation difficult makes the
code unmaintainable too.
True, but even if you didn't do that, I think a compiler could do a
decent job with reasonable code.

I do think that Python's designers should wait til PyPy with
native-code backends has been deployed for a while before defining too
much of Python 3.0, so we can first gain some experience with compiled
Python. Python should evolve towards being compiled most of the time.


Agreed
--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #35

P: n/a
Carl Banks <im*****@aerojockey.invalid> writes:
The example above kills any attempt to turn a.bar() into a static
procedure call.
Of course it does--but it's one method. A compiler, if it's good,
would only make the optimization on methods named "bar", and it could
probably pare the number of possible classes it could apply to down
to only a few.


How could it possibly know? The reassignment of a.bar could happen
anytime, anywhere in the code. Maybe even in an eval.
I mean you could have a Turing nightmare on your hands, with all kinds
of crazy setattrs and execs and stuff, in both Python and Lisp, and
then there's not much a compiler could do but emit totally general
code. I assume Lisp compilers do this sometimes.


Lisp compilers might have to do that sometimes, but Python compilers
would have to do it ALL the time. Psyco took one way out, basically
generating code at runtime and caching it for specific operand types,
but the result is considerable code expansion compared to precompilation.
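
The "maybe even in an eval" case is easy to make concrete (hypothetical class, and the modern exec-as-function spelling rather than the 2.x statement):

```python
class C(object):
    def bar(self):
        return 'method'

a = C()
print(a.bar())            # 'method'

# The rebinding arrives in a string the compiler never saw, so no
# static analysis of the surrounding code could predict it.
exec("a.bar = lambda: 'replaced'")
print(a.bar())            # 'replaced'
```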
Jul 18 '05 #36

P: n/a
Carl Banks <im*****@aerojockey.invalid> wrote in message news:<38***************@fe2.columbus.rr.com>...
There's more like it, e.g. the existence of the
locals() dictionary and the ability to modify it.


New feature? I didn't think modifying the dict returned by locals
affected the variables.


Evidence of crime :)

Python 2.3.2
>>> x
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'x' is not defined
>>> locals()['x'] = 1
>>> x
1
Not to mention stack frame magic available via inspect.*...
- kv
Jul 18 '05 #37

P: n/a
kv***********@yahoo.com (Konstantin Veretennicov) wrote in
news:51**************************@posting.google.com:
Carl Banks <im*****@aerojockey.invalid> wrote in message
news:<38***************@fe2.columbus.rr.com>...
> There's more like it, e.g. the existence of the
> locals() dictionary and the ability to modify it.


New feature? I didn't think modifying the dict returned by locals
affected the variables.


Evidence of crime :)

Python 2.3.2
>>> x
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'x' is not defined
>>> locals()['x'] = 1
>>> x
1


That works because you are using locals() to access your global variables.
Put the same code in a function and it behaves differently:
>>> def test():
...     x = 0
...     locals()['x'] = 1
...     print x
...
>>> test()
0

You cannot depend on the behaviour of modifying locals() remaining
unchanged over different releases of Python. Bottom line: don't do this.
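
The reason the interactive version "worked" is that at top level the two namespaces are the same object (a small demonstration, not from the original thread):

```python
# At module (and interactive) scope, locals() is the very same dict as
# globals(), so writing into it really does create a variable.
print(locals() is globals())   # True when run at top level

def f():
    # Inside a function, locals() is only a snapshot of the frame's
    # fast locals; writes to it are not guaranteed to be reflected.
    return locals() is globals()

print(f())                     # False
```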
Jul 18 '05 #38

P: n/a
kv***********@yahoo.com (Konstantin Veretennicov) writes:
Carl Banks <im*****@aerojockey.invalid> wrote in message news:<38***************@fe2.columbus.rr.com>...
There's more like it, e.g. the existence of the
locals() dictionary and the ability to modify it.
New feature? I didn't think modifying the dict returned by locals
affected the variables.


Evidence of crime :)

Python 2.3.2
>>> x
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'x' is not defined
>>> locals()['x'] = 1
>>> x
1
Not to mention stack frame magic available via inspect.*...


Well, fortunately *this* level of gimmickery is already somewhat
forbidden...

Cheers,
mwh

-- so python will fork if activestate starts polluting it?

I find it more relevant to speculate on whether Python would fork
if the merpeople start invading our cities riding on the backs of
giant king crabs. -- Brian Quinlan, comp.lang.python
Jul 18 '05 #39

P: n/a
>>>>> "Paul" == Paul Rubin <http://ph****@NOSPAM.invalid> writes:

Paul> I do think that Python's designers should wait til PyPy with
Paul> native-code backends has been deployed for a while before
Paul> defining too much of Python 3.0, so we can first gain some
Paul> experience with compiled Python. Python should evolve
Paul> towards being compiled most of the time.

I think we might be able to get significant benefits from the .NET
implementation (provided that Mono would prove to be legally feasible
in the future) - a lot of the JIT work is done by "other people", and
being able to seamlessly combine Python with a statically typed
language (which probably produces faster code) would be able to give
Python a sweet role in many programming projects - the role of the
initial implementation/specification language, with grunts doing the
possible mechanical porting of performance critical areas.

We could have such a role already, but only in theory. C/C++
integration is not seamless enough, and the languages themselves are
considered to be too difficult, clumsy and unproductive to attract
companies.

One thing is pretty certain to benefit any future speedup efforts,
whatever road is chosen, namely optional type declarations. Their
implementation could be left unspecified, and the initial Python
implementation could use them only for runtime type checking in
debugmode (or ignore altogether). Type inferencing could also use them
to smooth up things.
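
Such optional declarations might look something like this (purely hypothetical syntax at the time of this thread; annotations of roughly this shape only appeared in much later Pythons, and nothing here is an official proposal):

```python
# Hypothetical optional declarations as annotations: a conforming
# implementation could ignore them, check them only in a debug mode,
# or feed them to a type-inferencing compiler, as suggested above.
def dot(xs: list, ys: list) -> float:
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

print(dot([1.0, 2.0], [3.0, 4.0]))   # 11.0
```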

--
Ville Vainio http://tinyurl.com/2prnb
Jul 18 '05 #40

P: n/a
Paul Rubin wrote:


Carl Banks <im*****@aerojockey.invalid> writes:
> The example above kills any attempt to turn a.bar() into a static
> procedure call.


Of course it does--but it's one method. A compiler, if it's good,
would only make the optization on methods named "bar", and it could
probably pare the number of possible classes it could happen to down
to only a few.


How could it possibly know? The reassignment of a.bar could happen
anytime, anywhere in the code. Maybe even in an eval.


And if it happens anytime, anywhere in the code, the compiler will see
it and create more general code. Or is that impossible?

As for eval, well it would be silly to even allow eval in a compiled
program; kind of defeats the purpose of compiling. You might let it
run in its own namespace so it can only affect certain objects.
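
The "own namespace" idea can be sketched with eval's optional namespace argument (this restricts name lookup only; it is not a real security sandbox):

```python
# Give eval an explicit namespace: it can see and touch only what we
# put in the dict. Emptying __builtins__ blocks most ambient names.
sandbox = {'__builtins__': {}, 'x': 2}
print(eval('x * 3', sandbox))      # 6

try:
    eval('open("secrets")', sandbox)
except NameError:
    print('open is not reachable')  # builtins were stripped
```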
--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #41

P: n/a
Svein Ove Aas wrote:
Is anyone working on a python-to-native compiler?
I'd be interested in taking a look.

Come to think of it, is anyone working on a sexpr-enabled version of
Python, or anything similar? I really miss my macros whenever I try to
use it...

Jul 18 '05 #42

P: n/a
si************@yahoo.co.uk (simo) wrote in message news:<30**************************@posting.google.com>...
Leif K-Brooks <eu*****@ecritters.biz> wrote:
Is anyone working on a python-to-native compiler?
I'd be interested in taking a look.
Various people on various projects... most still at the vapourware
stage.
In essence python is a dynamic interpreted language though - in order
to make it suitable for 'compilation' you have to remove some of the
advantages.

Funnily though, most people call Java a compiled language, but it only
compiles to Java bytecode which runs on the virtual machine. Python
precompiles to python bytecode which runs on the python virtual
machine. So arguably it is *as* compiled as Java.....
If it's faster code execution, the primary slowdown with a very
high-level language like Python is caused by high-level data structures
(introspection, everything being an object, etc.), not the code itself.
A native compiler would still have to use high-level data structures to
work with all Python code, so the speed increase wouldn't be very much.


I think the main slowdown most people would see is the startup time -
the interpreter takes way too long to start (why the company where I
work won't replace Perl/PHP with Python).


It's not a problem I've been particularly aware of.
Lots of projects to increase execution speed. Psyco is *very* good and
very useable on x86 machines. I use it and see a real speed increase
of approx 10 times on a couple of applications I 've done - anagram
generator and binary interleaving... both of which are processor
intensive. It doesn't address startup speed though.........
If it's ease of distribution you're looking for, I think distutils can
make standalone programs on Windows, and most Linux distros have Python
installed by default.


Oh that one is funny - do most distros have PyQT and wxPython (and
their supporting Qt/GKT libs) installed by default, oh and which ones
come with 2.3 by default?


No - and worse PyQT is commercial only on windoze.
You need py2exe (or similar) to make standalone programs that will run
without python. It's not difficult though.
If you just think a compiler would be cool (or would like to see how it
would be done), check out Psyco, Pyrex, and probably some other
projects. Pyrex even combats the speed issue by allowing native C types
to be used in addition to Python high-level types.


I thought Pyrex was a hybrid of C and Python (like Jython/Java) not
actually a Python-to-C convertor? And Psyco is just a different VM
isn't it?


That's right - Pyrex is a new language that mixes python and C syntax.
You can also use it to easily build python extensions.
I could really push Python where I work if there was a native
compiler, my company uses C/C++/Java/Qt and were looking at QSA as a
way to allow the user to script things, but as all of our products
integrate with our software protection system, we can't be
distributing source or easily decompiled bytecode!

We could replace the whole lot with PyQt and use an embedded Python
interpreter for the scripting! Ah the frustration :-(


if you're looking at scripting then you'll be embedding an
interpreter *anyway*.
There *isn't* a native compiler... because python isn't a compiled
language.
*But* bytecode isn't *easily* decompiled (and decompyler is no longer
publicly available) and speed critical stuff/stuff that needs to be
compiled for security reasons can be compiled using Pyrex - which
handles all the interfacing of python to C and you get the best of
both worlds in terms of syntax.

No reason you shouldn't distribute the PyQt libraries *with* your
program - assuming you stick to the relevant licenses.

Regards,
Fuzzy

http://www.voidspace.org.uk/atlantib...thonutils.html
Jul 18 '05 #43

P: n/a
On Fri, 21 May 2004 03:03:51 GMT,
Carl Banks <im*****@aerojockey.invalid> wrote:
Paul Rubin wrote:


Carl Banks <im*****@aerojockey.invalid> writes:
> The example above kills any attempt to turn a.bar() into a static
> procedure call.

Of course it does--but it's one method. A compiler, if it's good,
would only make the optization on methods named "bar", and it could
probably pare the number of possible classes it could happen to down
to only a few.
How could it possibly know? The reassignment of a.bar could happen
anytime, anywhere in the code. Maybe even in an eval.

And if it happens anytime, anywhere in the code, the compiler will see
it and create more general code. Or is that impossible?
The compiler might not see it. Any function/method/module called while
a is in scope might change a.bar. It doesn't even take a perverted
introspection tool:

module foo:

    import bar

    class C:
        pass

    a = C( )
    a.bar = 'bar'
    bar.bar( a )
    a.bar = a.bar + 'bar'  # which 'add' instruction should this compile to?

module bar:

    def bar( x ):
        x.bar = 3
As for eval, well it would be silly to even allow eval in a compiled
program; kind of defeats the purpose of compiling. You might let it
run in its own namespace so it can only affect certain objects.


I reiterate my comments regarding type declarations, add new comments
regarding backwards compatibility, and note that Lisp allows eval in
compiled code.

Regards,
Heather

--
Heather Coppersmith
That's not right; that's not even wrong. -- Wolfgang Pauli
Jul 18 '05 #44

P: n/a
Fuzzyman wrote:
Funnily though, most people call Java a compiled language, but it only
compiles to Java bytecode which runs on the virtual machine. Python
precompiles to python bytecode which runs on the python virtual
machine. So arguably it is *as* compiled as Java.....


Actually, there are compilers that produce native machine code from
Java for several CPUs available, and they are used at least in the
embedded world.

-Peter
Jul 18 '05 #45

P: n/a
Peter Hansen wrote:
Fuzzyman wrote:
Funnily though, most people call Java a compiled language, but it only
compiles to Java bytecode which runs on the virtual machine. Python
precompiles to python bytecode which runs on the python virtual
machine. So arguably it is *as* compiled as Java.....


Actually, there are compilers that produce native machine code from
Java for several CPUs available, and they are used at least in the
embedded world.

There is also GCJ as part of the GCC, which can compile both .class
and .java files. Its libraries aren't complete yet, but I'm sure it's
only a matter of time.
Jul 18 '05 #46

P: n/a
Heather Coppersmith wrote:


On Fri, 21 May 2004 03:03:51 GMT,
Carl Banks <im*****@aerojockey.invalid> wrote:
Paul Rubin wrote:


Carl Banks <im*****@aerojockey.invalid> writes:
> The example above kills any attempt to turn a.bar() into a static
> procedure call.

Of course it does--but it's one method. A compiler, if it's good,
would only make the optization on methods named "bar", and it could
probably pare the number of possible classes it could happen to down
to only a few.

How could it possibly know? The reassignment of a.bar could happen
anytime, anywhere in the code. Maybe even in an eval.
And if it happens anytime, anywhere in the code, the compiler will see
it and create more general code. Or is that impossible?


The compiler might not see it. Any function/method/module called while
a is in scope might change a.bar. It doesn't even take a perverted
introspection tool:

module foo:

    import bar

    class C:
        pass

    a = C( )
    a.bar = 'bar'
    bar.bar( a )
    a.bar = a.bar + 'bar'  # which 'add' instruction should this compile to?

module bar:

    def bar( x ):
        x.bar = 3

Compiler builds a big call-tree. Compiler sees that the object "a" is
passed to a function that might rebind bar. Compiler thinks to
itself, "better mark C.bar as a dynamic attribute." Compiler sees
dynamic attribute and addition. Compiler generates generic add code.

You could have all kinds of setattrs, evals, and unanalysable data
structures (a la pickle), that could turn the code into a nightmare
that a compiler couldn't do anything with. But surely, guarding
against all of that, a good enough static compiler can still reduce a
lot of good, maintainable Python code to its static case.
And, frankly, I still don't see how this is different from Lisp.

(defun foo ()
  (let ((x 1))
    (setq x (bar x))
    (setq x (+ x 1))))

In another package:

(defun bar (x)
  #C(2.0 1.0))

If I recall, + can work on ints, floats, bignums, rationals, and
complex numbers, at least. What one instruction does + compile to
here?

As for eval, well it would be silly to even allow eval in a compiled
program; kind of defeats the purpose of compiling. You might let it
run in its own namespace so it can only affect certain objects.


I reiterate my comments regarding type declarations, add new comments
regarding backwards compatibility, and note that Lisp allows eval in
compiled code.


It's highly frowned upon cause it interferes with everything it
touches.
--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #47

P: n/a
On Fri, 21 May 2004, Svein Ove Aas wrote:
Peter Hansen wrote:
Actually, there are compilers that produce native machine code from
Java for several CPUs available, and they are used at least in the
embedded world.


There is also GCJ as part of the GCC, which can compile both .class
and .java files. Its libraries aren't complete yet, but I'm sure it's
only a matter of time.


Hmmm... anyone tried GCJ on Jython?

--
Andrew I MacIntyre "These thoughts are mine alone..."
E-mail: an*****@bullseye.apana.org.au (pref) | Snail: PO Box 370
an*****@pcug.org.au (alt) | Belconnen ACT 2616
Web: http://www.andymac.org/ | Australia

Jul 18 '05 #48

P: n/a
Carl Banks <im*****@aerojockey.invalid> writes:
If I recall, + can work on ints, floats, bignums, rationals, and
complex numbers, at least. What one instruction does + compile to
here?


Lisp supports type declarations which advise the compiler in those
situations. A few such proposals have been made for Python, but none
have taken off so far.
Jul 18 '05 #49

P: n/a
Paul Rubin wrote:
Carl Banks <im*****@aerojockey.invalid> writes:
If I recall, + can work on ints, floats, bignums, rationals, and
complex numbers, at least. What one instruction does + compile to
here?


Lisp supports type declarations which advise the compiler in those
situations. A few such proposals have been made for Python, but none
have taken off so far.


Yes, that's been established. There's two questions remaining for me:

1. These claims that Lisp code can approach 50 percent the speed of C,
is that with or without the optional type declarations?

2. If you don't use the declarations, does compiling Lisp help? If it
does (and nothing I've read indicated that it doesn't), it
definitely casts some doubt on the claim that compiling Python
wouldn't help. That's kind of been my point all along.

I think (and I'm wrapping this up, cause I think I made my point)
compiling Python could help, even without type declarations, but
probably not as much as in Lisp. It could still make inferences or
educated guesses, like Lisp compilers do; just maybe not as often.
--
CARL BANKS http://www.aerojockey.com/software
"If you believe in yourself, drink your school, stay on drugs, and
don't do milk, you can get work."
-- Parody of Mr. T from a Robert Smigel Cartoon
Jul 18 '05 #50

58 Replies

This discussion thread is closed

Replies have been disabled for this discussion.