Bytes | Developer Community
Prothon gets Major Facelift in Vers 0.1.0 [Prothon]

There is a new release of Prothon that I think is worth mentioning here.
Prothon version 0.1.0 has changed almost beyond recognition compared to what
was discussed here before. For example: the "Perl-like" symbols are gone
and the "self" keyword is back, replacing the period. Prothon has gotten
more "Python-like", simpler, and more powerful, all at the same time.

There is a new tutorial that covers Prothon completely without assuming any
knowledge of Python or any other language. The Prothon/Python differences
page now has links to the relevant section of this tutorial.
See http://prothon.org.

Some of these differences are:

Locals and globals are gone and replaced by a simple scheme that allows you
to access any local or external variable from inside a block or function
scope by name. You may also modify any existing variable outside of the
current scope by simply prepending "outer" to the variable name as in
"outer.x = 1". You can even do this "outer access" when the variable is in
a function that has quit running, giving you "closures" with no need for a
special syntax.
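A rough sketch of the same idea in modern Python, using nested scopes and `nonlocal` (illustrative only; the names here are invented, and Prothon's `outer` is described as more general, working even on frames that have finished running):

```python
def counter():
    count = 0                  # variable in the enclosing scope
    def increment():
        nonlocal count         # roughly what Prothon spells as outer.count
        count += 1
        return count
    return increment           # the closure outlives counter() itself

tick = counter()
print(tick(), tick(), tick())  # 1 2 3
```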

There is a new powerful self-binding method syntax that gives you the
ability to explicitly specify the "self" to bind to a function call with
"obj.func{self}(args)". This allows the full power of message-passing to be
used without compromising the simplicity of function calling. This powerful
and general scheme also solves the problem of calling methods when there is
no such thing as a class to define what a method is. Intelligent defaults
for {self} allow most method calls to be the intuitive and simple form
obj.call().

The "with" keyword now just creates a new local scope instead of another
"self", which was so confusing before. So "self" now simply means the
instance object inside a method, as it does in Python.

There is a new "object" keyword which works almost identically to the
"class" keyword, yet also works as a general object creation and
initialization statement. It combines object creation and the "with"
statement.
Jul 18 '05 #1
One thing that might attract me from Python to Prothon is if it had
proper private methods - i.e. not just name mangling like __myDef
which can be overridden using _myClass__myDef (as the interpreter
does).

Proper encapsulation is needed before the C++ brigade will take
P[y/ro]thon seriously as an OO language, oh and a machinecode compiler
;-)
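For readers unfamiliar with the mangling simo mentions, a minimal demonstration (class and attribute names are illustrative):

```python
class MyClass:
    def __init__(self):
        self.__secret = 42     # double underscore triggers name mangling

obj = MyClass()
# obj.__secret                 # would raise AttributeError outside the class
print(obj._MyClass__secret)    # the mangled name is still reachable: 42
```

So the "privacy" is only a renaming convention, which is exactly the complaint being made here.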
Jul 18 '05 #2
simo wrote:
One thing that might attract me from Python to Prothon is if it had
proper private methods - i.e. not just name mangling like __myDef
which can be overridden using _myClass__myDef (as the interpreter
does).

Proper encapsulation is needed before the C++ brigade will take
P[y/ro]thon seriously as an OO language, oh and a machinecode compiler
;-)

PROthon should implement {...} as optional block-beg and end marks.

# Original PROthon code (uses indentation)
def abc(_x):
    if _x ...:
        do_1()
        do_2()
    do_3()
Could become

#!/usr/bin/pROthon
#pragma(C_STYLE_BLOCK=1) # <- Valid within this file/module scope

def abc(_x):
{
    if _x ...:
    {
        do_1()
        do_2()
    }
    do_3()
}

----------------------------------------

The lack of a "class" type, with direct creation and use of objects, is very
handy for "Prototype-based Programming" (that's PROthon)

Who said OO-programming must have a class type ?

// moma
http://www.futuredesktop.org
Jul 18 '05 #3
On Sun, 2004-05-23 at 21:46 +0000, Neil Hodgson wrote:
gabor:
hmmm...i understand that you can cast away const,
but what about...

class C
{
private:
    int x;
};
Technique 1:

class D {
public:
    int x;
};

C c;
(reinterpret_cast<D*>(&c))->x = 1;

Technique 2:

#define private public
#include <C_definition.h>
#undef private

C c;
c.x = 1;

thanks...very interesting.
In both of these it is obvious that the encapsulation rules are being
broken which is exactly the same situation as when someone uses the mangled
name in Python. I have used technique 2 in production code where a library
could not be altered and access to private methods was required.


look...i am not asking for a way in python to COMPLETELY RESTRICT the
access to a variable... a hint is enough... now...if i know correctly
that means that i have to prefix my want-to-be-private variables with a
"_"... that restriction would be enough for me. my problem is with the
syntax. i HATE anything that even remotely resembles perl/hungarian
notation. why can't i do it somehow as we create static-methods ....

class C:
    def __init__(self):
        self.length = 5
        self.length = private(self.length)

or something like it.

the point is:
i only want to WRITE that the variable is private once...
i don't want to deal with _prefixed variables in my whole code...

this is imho of course

gabor
Jul 18 '05 #4
On Sun, 2004-05-23 at 15:04 -0700, Mark Hahn wrote:
Does any Python user have a story of regret of using an object's attribute
when they should not have and encapsulation would have saved their butts?


not exactly...

i don't need restriction, only a hint...

if i look thru some java code, i immediately see what variables/methods
should/can i access... it's not the same for python...

for example at work where i use java, there are classes that have
10-15 methods, but only 4 public ones... it immediately gets simpler to
understand/use the class
gabor
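gabor's Java point (seeing the public surface of a class at a glance) can be approximated mechanically in Python, relying only on the underscore convention. A hypothetical helper, purely illustrative:

```python
def api_summary(cls):
    """Split a class's attributes into public vs. conventionally private names."""
    names = [n for n in vars(cls)
             if not (n.startswith('__') and n.endswith('__'))]  # skip dunders
    public = sorted(n for n in names if not n.startswith('_'))
    private = sorted(n for n in names if n.startswith('_'))
    return public, private

class Account:
    def deposit(self): pass
    def withdraw(self): pass
    def _recalc(self): pass

print(api_summary(Account))   # (['deposit', 'withdraw'], ['_recalc'])
```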
Jul 18 '05 #5
Ryan Paul wrote:
On Sat, 22 May 2004 22:53:01 -0700, simo wrote:

(snip)
Proper encapsulation is needed before the C++ brigade will take
P[y/ro]thon seriously as an OO language, oh and a machinecode compiler
;-)


(snip)
I don't understand why everybody seems to want a machinecode compiler. It
won't make a high-level, dynamically typed language run any faster.
Err... You might want to check some Common Lisp implementations.
--SegPhault

Note that here it should read 'sigfault' !-)

Bruno

Jul 18 '05 #6
>>>>> "Ryan" == Ryan Paul <se*******@sbcglobal.net> writes:

Ryan> check out ruby. It is very similar to python, but one of the
Ryan> many benefits it has over python, is ability to distinguish
Ryan> 'real' private, public, and protected
Ryan> variables/methods. Ruby does not allow multiple inheritance,
Ryan> and

Python has private methods too:

class C:
    def stuff(self):
        _privatestuff(self, 2)
        print "val is", self.val

def _privatestuff(self, arg):
    self.val = arg

Though I would expect the people that read/use my code to understand
that if I prepended a _ on attribute name, they would have the brain
to not use it as a part of the "official" API. If they absolutely need
to use something private (it happens - world has seen several badly
designed APIs), they can. At least they don't need to resort to
various horrible hacks to do it.

I tend to think that emphasizing private/protected access restrictions
is an artifact of not understanding modern programming realities and
dynamics properly. School teaches access labels as an essential
feature of OO (encapsulation), and it takes some real world experience
implementing something of lasting value *fast* to unlearn it.

Ryan> it supports a very powerful mixin system- it's OO mechanisms
Ryan> and syntax generally seem better than pythons.

Not to mention that it's much closer to Smalltalk. And it's much
better than Perl too. Python OO is a hack! It's an add-on, not
built-in like in Ruby!

etc, ad infinitum.

Inter-newsgroup advocacy efforts are rather pointless and mostly serve
to create yet more flamewars. Ruby and Python are within the same 10%
productivity-wise (which one is winning depends on the programmer),
but Python is massively more mature and popular (e.g. going to be
shipping with Nokia S60 smartphones RSN hopefully [1]). You do the
math.

[1] http://www.guardian.co.uk/online/sto...182803,00.html

Ryan> I don't understand why everybody seems to want a machinecode
Ryan> compiler. It won't make a high-level, dynamically typed
Ryan> language run any faster.

It could make the language run much faster.

--
Ville Vainio http://tinyurl.com/2prnb
Jul 18 '05 #7

"gabor" <ga***@z10n.net> wrote
Does any Python user have a story of regret of using an object's attribute when they should not have and encapsulation would have saved their
butts?
not exactly...

i don't need restriction, only a hint...

if i look thru some java code, i immediately see what variables/methods
should/can i access... it's not the same for python...

for example at work where i use java, there are classes that have
10-15 methods, but only 4 public ones... it immediately gets simpler to
understand/use the class


It sounds to me like you want a better documentation solution, not an
encapsulation solution. Now that is something I agree with 100% and have
near the top of the list in Prothon. I personally think docs are a weak
point in Python.
Jul 18 '05 #8
On Mon, 2004-05-24 at 18:10, Mark Hahn wrote:
"gabor" <ga***@z10n.net> wrote
Does any Python user have a story of regret of using an object's attribute when they should not have and encapsulation would have saved their

butts?

not exactly...

i don't need restriction, only a hint...

if i look thru some java code, i immediately see what variables/methods
should/can i access... it's not the same for python...

for example at work where i use java, there are classes that have
10-15 methods, but only 4 public ones... it immediately gets simpler to
understand/use the class


It sounds to me like you want a better documentation solution, not an
encapsulation solution. Now that is something I agree with 100% and have
near the top of the list in Prothon. I personally think docs are a weak
point in Python.


documentation is fine.... the more the better...

but the argument that
more-docs-should-be-enough-because-you-can-document-which-functions-are-private
reminds me a little of the
but-you-can-write-object-oriented-code-in-assembler....

or that you-can-write-object-oriented-code-in-c...

yes, you can implement the needed mechanism in every language, but it's
not always fun....
:)

gabor
Jul 18 '05 #9

"gabor" <ga***@z10n.net> wrote
yes, you can implement the needed mechanism in every language, but it's
not always fun....


We'll just have to make sure it's fun in Prothon. Maybe you'll have to
elaborate a little more on what you think is fun encapsulation documentation
and what isn't.

I myself think that putting underbars in front of every var like _var is not
fun. Do you agree?

I also don't think declaring all vars as in "private var" is fun either, do
you? If I did, I'd be using C++.

So to me, documenting the public vars makes the most sense. They need some
explaining anyway. So now the only question is, what is a fun and painless
way to document public vars. It would be nice if it had these properties:

1) There should be some reward or lack of punishment for actually doing the
documentation. I was thinking that Prothon could have some cool doc tool
that programmers would want to use that would choke and refuse to finish
without proper doc definitions. Maybe the interpreter itself could even
give warnings.

2) The doc syntax should be painless, friendly, and intelligent enough so
human-added stuff is minimal.

Any ideas in this area would be greatly welcomed. Implementing a half-baked
scheme would be as good as no scheme because it wouldn't be used.
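One hypothetical shape for the doc tool Mark describes: a checker that flags public methods lacking docstrings, which an interpreter or build step could warn about. This is an illustrative Python sketch, not an actual Prothon feature:

```python
import inspect

def undocumented_public_methods(cls):
    """Return names of public methods of cls that have no docstring."""
    missing = []
    for name, member in vars(cls).items():
        if name.startswith('_') or not inspect.isfunction(member):
            continue                 # skip private names and non-functions
        if not inspect.getdoc(member):
            missing.append(name)
    return sorted(missing)

class Example:
    def documented(self):
        """This one passes the check."""
    def bare(self):
        pass

print(undocumented_public_methods(Example))   # ['bare']
```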
Jul 18 '05 #10
> See http://prothon.org.

You use inconsistent descriptions of generators here:
http://prothon.org/tutorial/tutorial11.htm#gen

First you say that all generators must use 'gen' rather than 'def', but
in your example you mix the two...

gen evenOdds(max):
    def evens(max):
        (body contains a yield)

Based on that same portion of your tutorial, it is not clear that your
example...

gen evenOdds(max):
    def evens(max):
        num = 0
        while num < max:
            yield num
            num += 2

    def odds(max):
        num = 1
        while num < max:
            yield num
            num += 2

    evens(max)
    odds(max)

for i in evenOdds(10):
    print i, # prints 0 2 4 6 8 1 3 5 7 9

Actually should produce what you say it should.

Perhaps it is my background in Python that says unless you manually
iterate through evens(max) and odds(max), yielding values as you go along...

    for i in evens(max): yield i
    for i in odds(max): yield i

...or combine and return the iterator with something like...

    return itertools.chain(evens(max), odds(max))

...you will get nothing when that generator function is evaluated.

I believe that either your documentation or example needs to be fixed.

- Josiah
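The Python idiom Josiah describes, written out as a complete runnable example (this shows Python's behavior; it is not a claim about Prothon's semantics):

```python
def evens(max):
    num = 0
    while num < max:
        yield num
        num += 2

def odds(max):
    num = 1
    while num < max:
        yield num
        num += 2

def evenOdds(max):
    # in Python the sub-generators must be iterated explicitly
    for i in evens(max):
        yield i
    for i in odds(max):
        yield i

print(list(evenOdds(10)))   # [0, 2, 4, 6, 8, 1, 3, 5, 7, 9]
```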
Jul 18 '05 #11
Josiah Carlson wrote:
You use inconsistent descriptions of generators here:
http://prothon.org/tutorial/tutorial11.htm#gen

First you say that all generators must use 'gen' rather than 'def',
but
in your example you mix the two...

gen evenOdds(max):
    def evens(max):
        (body contains a yield)

Based on that same portion of your tutorial, it is not clear that your
example...

gen evenOdds(max):
    def evens(max):
        num = 0
        while num < max:
            yield num
            num += 2

    def odds(max):
        num = 1
        while num < max:
            yield num
            num += 2

    evens(max)
    odds(max)

for i in evenOdds(10):
    print i, # prints 0 2 4 6 8 1 3 5 7 9

Actually should produce what you say it should.
The code is correct and tested. My programming skills are much better than
my tutorial writing skills :-)
Perhaps it is my background in Python that says unless you manually
iterate through evens(max) and odds(max), yielding values as you go
along...

    for i in evens(max): yield i
    for i in odds(max): yield i

...or combine and return the iterator with something like...

    return itertools.chain(evens(max), odds(max))

...you will get nothing when that generator function is evaluated.

I believe that either your documentation or example needs to be fixed.


I'm sure that my tutorial could be clearer, but in my defense I do say in
that section: "only the one outermost "function" should use the "gen"
keyword".

The way it works is that the "gen" keyword is just a special flag to tell
the interpreter to stop rolling up the execution frame stack when a yield
keyword is encountered. This is what allows the functions to be nested. The
yield keyword can be in any function whether it uses the gen keyword or the
def keyword.

I do think you are confusing it with Python, which cannot nest functions
with yield statements as Prothon can.
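Mark's point about Python can be checked directly: a bare call to a nested generator function merely creates a generator object and discards it, so no values escape (illustrative sketch of the failure mode):

```python
def evenOdds(max):
    def evens(m):
        num = 0
        while num < m:
            yield num
            num += 2
    evens(max)      # creates a generator object, never iterates it

# evenOdds contains no yield of its own, so it is a plain function:
print(evenOdds(10))   # None -- nothing is generated
```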
Jul 18 '05 #12
Ryan Paul <se*******@sbcglobal.net> writes:
On Sat, 22 May 2004 22:53:01 -0700, simo wrote:
One thing that might attract me from Python to Prothon is if it had
proper private methods - i.e. not just name mangling like __myDef
which can be overridden using _myClass__myDef (as the interpreter
does).

Proper encapsulation is needed before the C++ brigade will take
Don't confuse encapsulation with access restriction.
I don't understand why everybody seems to want a machinecode
compiler.
Actually, my impression is that most (at least many) around here don't
want one.
It won't make a high-level, dynamically typed language run any
faster.


Your claim is false.

Proof by counterexample:
CL-USER 1 > (defun fib (n)
              (if (< n 2)
                  1
                  (+ (fib (- n 1)) (fib (- n 2)))))
FIB

CL-USER 2 > (time (fib 35))
Timing the evaluation of (FIB 35)

user time = 66.330
system time = 0.000
Elapsed time = 0:01:06
Allocation = 5488 bytes standard / 328476797 bytes conses
0 Page faults
Calls to %EVAL 8388522
14930352

CL-USER 3 > (compile 'fib)
FIB
NIL
NIL

CL-USER 4 > (time (fib 35))
Timing the evaluation of (FIB 35)

user time = 1.000
system time = 0.000
Elapsed time = 0:00:01
Allocation = 1216 bytes standard / 2783 bytes conses
0 Page faults
14930352

Looks like compiling this particular high-level dynamically typed
language makes it run considerably faster. Let's repeat the exercise
for 3 more implementations of this particular language I just happen
to have lying around on my machine, and compare it to Python's
performance on the equivalent program:
>>> def fib(n):
...     if n<2: return 1
...     return fib(n-1) + fib(n-2)
...
>>> import time
>>> a=time.time(); fib(35); time.time() - a
14930352
20.425565958023071
Here are the results gathered in a table:
Name       Interpreted          Compiled

LispWorks  66                   1.0 s
Clisp      41                   9.5 s
CMUCL      Got bored waiting    1.5 s
SBCL       Compiles everything  1.6 s
Python     Compiles everything  20 s
So, we have times of 1.0s, 1.5s, 1.6s, 9.5s and 20s. Now one of those
Common Lisp implementations does NOT compile to native; it compiles to
bytecode. Can you guess which one it is, by looking at the timings ?
Jul 18 '05 #13
Jacek Generowicz <ja**************@cern.ch> writes:
Name       Interpreted          Compiled

LispWorks  66                   1.0 s
Clisp      41                   9.5 s
CMUCL      Got bored waiting    1.5 s
SBCL       Compiles everything  1.6 s
Python     Compiles everything  20 s
So, we have times of 1.0s, 1.5s, 1.6s, 9.5s and 20s. Now one of those
Common Lisp implementations does NOT compile to native; it compiles to
bytecode. Can you guess which one it is, by looking at the timings ?


Just for fun, I threw all the declarations that came to my head at the
Lisp function, making in look thus:

(defun fib (n)
  (declare (fixnum n))
  (declare (optimize (safety 0) (speed 3) (debug 0)
                     (space 0) (compilation-speed 0)))
  (if (< n 2)
      1
      (the fixnum
           (+ (the fixnum (fib (- n 1)))
              (the fixnum (fib (- n 2)))))))

I also tried a C version:

int fib(int n) {
    if (n < 2) {
        return 1;
    }
    return fib(n-1) + fib(n-2);
}

int main() {
    return fib(35);
}

Here's the table with the results for the above added in:

Name       Interpreted          Compiled  With declarations

LispWorks  66                   1.0       1.6
Clisp      41                   9.5       9.5
CMUCL      Got bored waiting    1.5       0.45
SBCL       Compiles everything  1.6       0.49
Python     Compiles everything  20
gcc        No interactivity     0.29
(I also tried it on Allegro, via their telnet prompt (telnet
prompt.franz.com). The uncompiled version went beyond the CPU limit
they give you; the compiled version without declarations was 400ms;
with declarations was 200ms. Of course, we don't know how their
processor compares to mine.)
Jul 18 '05 #14
Jacek Generowicz <ja**************@cern.ch> wrote in news:tyf1xl8n0yl.fsf_-
_@pcepsft001.cern.ch:
Name       Interpreted          Compiled  With declarations

LispWorks  66                   1.0       1.6
Clisp      41                   9.5       9.5
CMUCL      Got bored waiting    1.5       0.45
SBCL       Compiles everything  1.6       0.49
Python     Compiles everything  20
gcc        No interactivity     0.29


Using Psyco speeds things up somewhat. On my machine this test in Python
without Psyco takes 14.31s, adding a call to psyco.full() reduces this to
0.51s
Jul 18 '05 #15
Duncan Booth <me@privacy.net> writes:
Using Psyco speeds things up somewhat. On my machine this test in Python
without Psyco takes 14.31s, adding a call to psyco.full() reduces this to
0.51s


Good call. How daft of me not to include it.

Here's the table with the psyco result on the same machine as the rest.
Name       Interpreted          Compiled  With declarations  Psyco

LispWorks  66                   1.0       1.6
Clisp      41                   9.5       9.5
CMUCL      Got bored waiting    1.5       0.45
SBCL       Compiles everything  1.6       0.49
Python     Compiles everything  20                           0.64
gcc        No interactivity     0.29
Could we now just all agree, once and for all, that compiling dynamic
languages to native binary really can give significant speedups?

(No, of course we can't ... oh well :-)
Jul 18 '05 #16
Jacek Generowicz <ja**************@cern.ch> writes:
>>> def fib(n):
...     if n<2: return 1
...     return fib(n-1) + fib(n-2)
...
>>> import time
>>> a=time.time(); fib(35); time.time() - a
14930352
20.425565958023071
Here are the results gathered in a table:
Name       Interpreted          Compiled

LispWorks  66                   1.0 s
Clisp      41                   9.5 s
CMUCL      Got bored waiting    1.5 s
SBCL       Compiles everything  1.6 s
Python     Compiles everything  20 s
So, we have times of 1.0s, 1.5s, 1.6s, 9.5s and 20s. Now one of those
Common Lisp implementations does NOT compile to native; it compiles to
bytecode. Can you guess which one it is, by looking at the timings ?


Using this code:

import psyco
psyco.full()

def fib(n):
    if n<2: return 1
    return fib(n-1) + fib(n-2)

import time
a=time.time()
fib(35)
print time.time()-a

I get 0.617813110352

The C version that you posted, using gcc 3.3.3 with -O3 option:

real 0m0.227s
user 0m0.208s
sys 0m0.001s
--
Valentino Volonghi aka Dialtone
Linux User #310274, Proud Gentoo User
Blog: http://vvolonghi.blogspot.com
Home Page: http://xoomer.virgilio.it/dialtone/
Jul 18 '05 #17
Jacek Generowicz wrote:

(snip)

Looks like compiling this particular high-level dynamically typed
language makes it run considerably faster. Let's repeat the exercise
for 3 more implementations of this particular language I just happen
to have lying around on my machine, and compare it to Python's
performance on the equivalent program:

>>> def fib(n):
...     if n<2: return 1
...     return fib(n-1) + fib(n-2)
...
>>> import time
>>> a=time.time(); fib(35); time.time() - a
14930352
20.425565958023071
Here are the results gathered in a table:
Name       Interpreted          Compiled

LispWorks  66                   1.0 s
Clisp      41                   9.5 s
CMUCL      Got bored waiting    1.5 s
SBCL       Compiles everything  1.6 s
Python     Compiles everything  20 s
So, we have times of 1.0s, 1.5s, 1.6s, 9.5s and 20s. Now one of those
Common Lisp implementations does NOT compile to native; it compiles to
bytecode. Can you guess which one it is, by looking at the timings ?


I tried a modification in prothon and was surprised at how bad it was. My
windows box started to thrash with fib(35) so I reduced it to fib(25)

#fib.py
def fib(n):
    if n<2: return 1
    return fib(n-1) + fib(n-2)
print fib(25)
C:\Prothon\pr\test>timethis \Prothon\prothon fib.py

TimeThis : Command Line : \Prothon\prothon fib.py
TimeThis : Start Time : Tue May 25 19:41:32 2004

121393

TimeThis : Command Line : \Prothon\prothon fib.py
TimeThis : Start Time : Tue May 25 19:41:32 2004
TimeThis : End Time : Tue May 25 19:42:24 2004
TimeThis : Elapsed Time : 00:00:52.235

compare with

C:\Prothon\pr\test>timethis python fib.py

TimeThis : Command Line : python fib.py
TimeThis : Start Time : Tue May 25 19:43:02 2004

121393

TimeThis : Command Line : python fib.py
TimeThis : Start Time : Tue May 25 19:43:02 2004
TimeThis : End Time : Tue May 25 19:43:05 2004
TimeThis : Elapsed Time : 00:00:02.673

In fact the python time for fib(35) was about 31.1 seconds (ie less than
prothon for fib(25)) so something is spectacularly amiss with prothon.
--
Robin Becker
Jul 18 '05 #18

"Robin Becker" <ro***@SPAMREMOVEjessikat.fsnet.co.uk> wrote
In fact the python time for fib(35) was about 31.1 seconds (ie less than
prothon for fib(25)) so something is spectacularly amiss with prothon.


Yes, and that something is that Prothon is pre-alpha and full of debug code.
Take a look at the interpreter loop in interp.c and the reason will be
obvious immediately. We are not going to be addressing efficiency until
after the language is designed in July.

One step at a time...

Jul 18 '05 #19
Mark Hahn wrote:
"Robin Becker" <ro***@SPAMREMOVEjessikat.fsnet.co.uk> wrote

In fact the python time for fib(35) was about 31.1 seconds (ie less than
prothon for fib(25)) so something is spectacularly amiss with prothon.

Yes, and that something is that Prothon is pre-alpha and full of debug code.
Take a look at the interpreter loop in interp.c and the reason will be
obvious immediately. We are not going to be addressing efficiency until
after the language is designed in July.

One step at a time...


Wasn't criticising. I expected some degradation from an early version,
but this seems too much. From the memory usage I would guess that
perhaps the frames aren't being released.
--
Robin Becker
Jul 18 '05 #20


"Robin Becker" <ro***@SPAMREMOVEjessikat.fsnet.co.uk> wrote in message
news:40**************@jessikat.fsnet.co.uk...
Mark Hahn wrote:
"Robin Becker" <ro***@SPAMREMOVEjessikat.fsnet.co.uk> wrote

In fact the python time for fib(35) was about 31.1 seconds (ie less than
prothon for fib(25)) so something is spectacularly amiss with prothon.

Yes, and that something is that Prothon is pre-alpha and full of debug code. Take a look at the interpreter loop in interp.c and the reason will be
obvious immediately. We are not going to be addressing efficiency until
after the language is designed in July.

One step at a time...


Wasn't criticising. I expected some degradation from an early version,
but this seems too much. From the memory usage I would guess that
perhaps the frames aren't being released.


Oh, I didn't realize you were talking about memory. We do have serious
memory and object leaks. Maybe the garbage collector isn't working right.
That area is also waiting for July as we are considering it part of
performance.
Jul 18 '05 #21
Jacek Generowicz <ja**************@cern.ch> writes:
Could we now just all agree, once and for all, that compiling
dynamic languages to native binary really can give significant
speedups?


Has anyone really been arguing that? Oh dear. What *I* at least have
been trying to argue against is the idea that *any* compilation to
native code *must* result in a significant speed up...

Cheers,
mwh

--
About the use of language: it is impossible to sharpen a
pencil with a blunt axe. It is equally vain to try to do
it with ten blunt axes instead.
-- E.W.Dijkstra, 18th June 1975. Perl did not exist at the time.
Jul 18 '05 #22
gabor <ga***@z10n.net> wrote:
i only want to WRITE that the variable is private once...
i don't want to deal with _prefixed variables in my whole code...


With the _prefix it's clear in /every/ part of the code that something is
intended for local use...

--
JanC

"Be strict when sending and tolerant when receiving."
RFC 1958 - Architectural Principles of the Internet - section 3.9
Jul 18 '05 #23
Michael Hudson <mw*@python.net> writes:
Jacek Generowicz <ja**************@cern.ch> writes:
Could we now just all agree, once and for all, that compiling
dynamic languages to native binary really can give significant
speedups?
Has anyone really been arguing that?


Ryan Paul <se*******@sbcglobal.net> writes in this very thread:
I don't understand why everybody seems to want a machinecode compiler. It
won't make a high-level, dynamically typed language run any faster.


I guess there are different ways of interpreting the plethora of
contributions such as the above. Maybe I'm misunderstanding them.
Jul 18 '05 #24
Jacek Generowicz <ja**************@cern.ch> writes:
Michael Hudson <mw*@python.net> writes:
Jacek Generowicz <ja**************@cern.ch> writes:
Could we now just all agree, once and for all, that compiling
dynamic languages to native binary really can give significant
speedups?


Has anyone really been arguing that?


Ryan Paul <se*******@sbcglobal.net> writes in this very thread:
I don't understand why everybody seems to want a machinecode compiler. It
won't make a high-level, dynamically typed language run any faster.


OK, that looks wrong :-)

Cheers,
mwh

--
Have you considered downgrading your arrogance to a reasonable level?
-- Erik Naggum, comp.lang.lisp, to yet another C++-using troll
Jul 18 '05 #25
Jacek Generowicz <ja**************@cern.ch> wrote in message news:<ty*************@pcepsft001.cern.ch>...

Ryan Paul <se*******@sbcglobal.net> writes in this very thread:
I don't understand why everybody seems to want a machinecode compiler. It
won't make a high-level, dynamically typed language run any faster.


I guess there are different ways of interpreting the plethora of
contributions such as the above. Maybe I'm misunderstanding them.


Then I must have misunderstood them, too. Perhaps the author could
qualify his statement in light of the evidence brought to light, in
addition to the existence of technologies such as Psyco. The intention
may really have been to claim that even a sufficiently powerful type
inferencing mechanism, acting before run-time on very general code
which uses polymorphism extensively (roll out all those short code
snippets), won't produce significantly faster programs; but short and
snappy sweeping statements generally sacrifice meaningful
qualification for the sake of leaving an impression of higher
knowledge withheld from the inquiring masses.

Paul
Jul 18 '05 #26
