
Improving interpreter startup speed

Hi guys,
Is there a way to improve the interpreter startup speed?

On my machine (cold startup) Python takes 0.330 ms and Ruby takes
0.047 ms; after a cold boot Python takes 0.019 ms and Ruby 0.005 ms to
start.
TIA
Oct 25 '08 #1


On Sat, 25 Oct 2008 12:32:07 -0700, Pedro Borges wrote:
> Hi guys,
> Is there a way to improve the interpreter startup speed?
>
> On my machine (cold startup) Python takes 0.330 ms and Ruby takes 0.047
> ms; after a cold boot Python takes 0.019 ms and Ruby 0.005 ms to start.
> TIA
Um... does it really matter? It's less than a second, and only once at
program startup...

If you find yourself spawning and destroying small Python processes thousands
of times, try writing a controller program in Python; the controller
imports the "small modules" once and avoids restarting the Python
interpreter that many times.
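
A minimal sketch of that idea (the transforms module and its run() function
are placeholders for whatever the small scripts actually do):

#!/usr/bin/env python
"""Long-running controller: import the work modules once, then read jobs
from stdin instead of paying interpreter startup for every job."""
import sys

import transforms  # hypothetical module wrapping the "small scripts"

def main():
    for line in sys.stdin:        # one job per line, e.g. a file path
        path = line.strip()
        if not path:
            continue
        transforms.run(path)      # placeholder for the real transform
        sys.stdout.write("done %s\n" % path)
        sys.stdout.flush()

if __name__ == "__main__":
    main()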

Oct 25 '08 #2

On Sat, 25 Oct 2008 12:32:07 -0700, Pedro Borges wrote:
> Hi guys,
> Is there a way to improve the interpreter startup speed?

Get a faster computer?

> On my machine (cold startup) Python takes 0.330 ms and Ruby takes 0.047
> ms; after a cold boot Python takes 0.019 ms and Ruby 0.005 ms to start.

How are you measuring this?
--
Steven
Oct 26 '08 #3

2008/10/25 Pedro Borges <pe*******@gmail.com>:
> Is there a way to improve the interpreter startup speed?
>
> On my machine (cold startup) Python takes 0.330 ms and Ruby takes
> 0.047 ms; after a cold boot Python takes 0.019 ms and Ruby 0.005 ms to
> start.

How are you getting those numbers? 330 μs is still pretty fast, isn't
it? :) Most disks have a seek time of 10-20 ms, so it seems implausible
to me that Ruby would be able to cold start in 47 ms.
--
mvh Björn
Oct 26 '08 #4

Pedro Borges wrote:
> Hi guys,
> Is there a way to improve the interpreter startup speed?
>
> On my machine (cold startup) Python takes 0.330 ms and Ruby takes
> 0.047 ms; after a cold boot Python takes 0.019 ms and Ruby 0.005 ms to
> start.

You of course mean CPython, but which version? 3.0 starts up much
quicker than 2.5. I don't have 2.6.

Oct 26 '08 #5

On Sun, Oct 26, 2008 at 11:23 AM, BJörn Lindqvist <bj*****@gmail.com> wrote:
> How are you getting those numbers? 330 μs is still pretty fast, isn't
> it? :) Most disks have a seek time of 10-20 ms, so it seems implausible
> to me that Ruby would be able to cold start in 47 ms.
$ time python -c "pass"

real 0m0.051s
user 0m0.036s
sys 0m0.008s

$ time python3.0 -c "pass"

real 0m0.063s
user 0m0.048s
sys 0m0.004s

And yes, I agree: the CPython interpreter's startup time is
a stupid thing to be worrying about, especially since it
is never the bottleneck.

Python loads plenty fast enough!

--JamesMills

--
--
-- "Problems are solved by method"
Oct 26 '08 #6

The scripts I need to run must be executed with no apparent delay,
especially when the text transforms are simple.
On Oct 26, 2008, at 11:13 AM, James Mills wrote:
On Sun, Oct 26, 2008 at 11:23 AM, BJörn Lindqvist
<bj*****@gmail.com> wrote:
>How are you getting those numbers? 330 μs is still pretty fast, isn't
>it? :) Most disks have a seek time of 10-20 ms, so it seems implausible
>to me that Ruby would be able to cold start in 47 ms.

$ time python -c "pass"

real 0m0.051s
user 0m0.036s
sys 0m0.008s

$ time python3.0 -c "pass"

real 0m0.063s
user 0m0.048s
sys 0m0.004s

And yes, I agree: the CPython interpreter's startup time is
a stupid thing to be worrying about, especially since it
is never the bottleneck.

Python loads plenty fast enough!

--JamesMills

--
--
-- "Problems are solved by method"
Oct 26 '08 #7

Pedro Borges <pe*******@gmail.com> writes:
> The scripts I need to run must be executed with no apparent delay,
> especially when the text transforms are simple.

Basically, you should keep the interpreter running and the script in
memory in that case.
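
A tiny illustration of "script in memory": compile the script once and re-run
the cached code object for each request, so neither interpreter startup nor
the parse/compile step is repeated (the file name transform_script.py is made
up for this sketch):

import sys

# Parse and compile the script a single time.
code = compile(open("transform_script.py").read(), "transform_script.py", "exec")

for line in sys.stdin:                     # e.g. one input filename per request
    env = {"__name__": "__main__", "INPUT_PATH": line.strip()}
    exec(code, env)                        # re-run the already-compiled script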
Oct 26 '08 #8

Benjamin Kaplan wrote:
>> I disagree. The extra time Python takes to start makes it unsuitable
>> for many uses. For example, if you write a simple text editor then
>> Python's longer startup time might be too much.
> You must be in a real big hurry if half a second matters that much to
> you. Maybe if it took 5 seconds for the interpreter to start up, I could
> understand having a problem with the start up time.

The secret is to start Python at least a second before one is actually
ready to start.

Oct 26 '08 #9

On Sun, Oct 26, 2008 at 11:19 PM, Pedro Borges <pe*******@gmail.com> wrote:
> The scripts I need to run must be executed with no apparent delay,
> especially when the text transforms are simple.
That makes no sense whatsoever!

If you are performing data conversion with
Python, interpreter startup time is going
to be insignificant by comparison.

--JamesMills

--
--
-- "Problems are solved by method"
Oct 27 '08 #10

On Mon, Oct 27, 2008 at 4:12 AM, Benjamin Kaplan
<be*************@case.edu> wrote:
> You must be in a real big hurry if half a second matters that much to you.
> Maybe if it took 5 seconds for the interpreter to start up, I could
> understand having a problem with the start up time.
+1 This thread is stupid and pointless.
Even for a so-called cold startup 0.5s is fast enough!

--JamesMills

--
--
-- "Problems are solved by method"
Oct 27 '08 #11

On Mon, Oct 27, 2008 at 3:45 AM, BJörn Lindqvist <bj*****@gmail.com> wrote:
> Pedro was talking about cold startup time:
>
> $ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
> $ time python -c "pass"
>
> real 0m0.627s
> user 0m0.016s
> sys 0m0.008s
> $ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
> $ time python -S -c "pass"
>
> real 0m0.244s
> user 0m0.004s
> sys 0m0.008s
>
> With -S (don't imply 'import site' on initialization)

I suspect that has to do with importing modules,
the site-specific modules, etc. Disk access will
tend to chew into this "startup time". Use -S
if you're that worried about startup times (heaven
knows what effect it'll have on your app though).

--JamesMills

--
--
-- "Problems are solved by method"
Oct 27 '08 #12

On Mon, Oct 27, 2008 at 10:52 AM, James Mills
<pr******@shortcircuit.net.au> wrote:
> On Mon, Oct 27, 2008 at 4:12 AM, Benjamin Kaplan
> +1 This thread is stupid and pointless.
> Even for a so-called cold startup 0.5s is fast enough!
Not if the startup is the main cost for a command you need to repeat many times.

David
Oct 27 '08 #13

On Mon, Oct 27, 2008 at 3:15 PM, David Cournapeau <co******@gmail.com> wrote:
> Not if the startup is the main cost for a command you need to repeat many times.
Seriously, if you have to spawn and kill Python
processes so many times that the initial cold
startup and subsequent warm startups become
significant, you're doing something wrong!

--JamesMills

--
--
-- "Problems are solved by method"
Oct 27 '08 #14

David Cournapeau wrote:
> On Mon, Oct 27, 2008 at 10:52 AM, James Mills
> <pr******@shortcircuit.net.au> wrote:
>> On Mon, Oct 27, 2008 at 4:12 AM, Benjamin Kaplan
>> +1 This thread is stupid and pointless.
>> Even for a so-called cold startup 0.5s is fast enough!
>
> Not if the startup is the main cost for a command you need to repeat many times.

Is this a theoretical problem or an actual one, that we might have other
suggestions for?

Oct 27 '08 #15

On Mon, Oct 27, 2008 at 2:36 PM, Terry Reedy <tj*****@udel.edu> wrote:
> Is this a theoretical problem or an actual one, that we might have other
> suggestions for?

Any command-line tool based on Python is a real example of that problem.
There are plenty of them.

David
Oct 27 '08 #16

On Mon, Oct 27, 2008 at 3:36 PM, Terry Reedy <tj*****@udel.edu> wrote:
> Is this a theoretical problem or an actual one, that we might have other
> suggestions for?

Heaven knows! I hardly think invoking hundreds
or possibly thousands of short-lived Python
interpreters is an optimal solution; perhaps that is
what spawned this particular thread.

--JamesMills

--
--
-- "Problems are solved by method"
Oct 27 '08 #17

On Mon, Oct 27, 2008 at 5:28 PM, David Cournapeau <co******@gmail.com> wrote:
> Any command-line tool based on Python is a real example of that problem.
> There are plenty of them.
Yes, but in most cases you are not invoking your
command-line app x times per y units of time.

--JamesMills

--
--
-- "Problems are solved by method"
Oct 27 '08 #18

"James Mills" <pr******@shortcircuit.net.auwrites:
Heaven knows! I hardly think invoking hundreds
and possibly thousands of short-lived python
interpreters to be an optimal solution that may
have spawned this particular thread.
It's not optimal but it is very common (CGI for example).
Oct 27 '08 #19

On Mon, Oct 27, 2008 at 4:33 PM, James Mills
<pr******@shortcircuit.net.au> wrote:
> Yes, but in most cases you are not invoking your
> command-line app x times per y units of time.
Depends on the tool: build tools and source control tools are examples
where it matters (especially when you start interfacing them with IDEs or
editors). Having fast command-line tools is an important feature of
UNIX, and if you want to insert a Python-based tool into a given
pipeline, it can hurt if the pipeline is updated regularly.

cheers,

David
Oct 27 '08 #20

On Mon, Oct 27, 2008 at 5:36 PM, Paul Rubin
<"http://phr.cx"@nospam.invalid> wrote:
> It's not optimal, but it is very common (CGI, for example).
Which is why we (the Python community)
created WSGI and mod_wsgi. C'mon guys,
these "problems" are a bit old and
outdated :)
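
For what it's worth, the point of WSGI here is that the interpreter stays
resident in the server process and pays its startup cost once. A minimal
illustrative WSGI application is just a callable like the sketch below (the
wsgiref test server at the bottom is only there so you can try it locally):

def application(environ, start_response):
    """Minimal WSGI app; the hosting server (mod_wsgi, etc.) keeps the
    interpreter loaded, so requests never pay interpreter startup."""
    body = b"Hello from a long-lived interpreter!\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    # Quick local test with the reference server from the standard library.
    from wsgiref.simple_server import make_server
    make_server("127.0.0.1", 8000, application).serve_forever()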

--JamesMills

--
--
-- "Problems are solved by method"
Oct 27 '08 #21

On Sun, 26 Oct 2008 23:52:32 -0200, James Mills
<pr******@shortcircuit.net.au> wrote:
> On Mon, Oct 27, 2008 at 4:12 AM, Benjamin Kaplan
> <be*************@case.edu> wrote:
>> You must be in a real big hurry if half a second matters that much to
>> you.
>> Maybe if it took 5 seconds for the interpreter to start up, I could
>> understand having a problem with the start up time.
>
> +1 This thread is stupid and pointless.
> Even for a so-called cold startup 0.5s is fast enough!
I don't see the need to be rude.
And I DO care for Python startup time and memory footprint, and others do
too. Even if it's a stupid thing (for you).

--
Gabriel Genellina

Oct 27 '08 #22

On Mon, Oct 27, 2008 at 5:40 PM, David Cournapeau <co******@gmail.com> wrote:
> Depends on the tool: build tools and source control tools are examples
> where it matters (especially when you start interfacing them with IDEs or
> editors). Having fast command-line tools is an important feature of
> UNIX, and if you want to insert a Python-based tool into a given
> pipeline, it can hurt if the pipeline is updated regularly.

Fair enough. But still:
0.5s cold startup is fast enough.
0.08s warm startup is fast enough.

Often "fast enough" is "fast enough".

--JamesMills

--
--
-- "Problems are solved by method"
Oct 27 '08 #23

On Mon, Oct 27, 2008 at 5:46 PM, Gabriel Genellina
<ga*******@yahoo.com.ar> wrote:
>> +1 This thread is stupid and pointless.
>> Even for a so-called cold startup 0.5s is fast enough!
>
> I don't see the need to be rude.
> And I DO care for Python startup time and memory footprint, and others do
> too. Even if it's a stupid thing (for you).
I apologize. I do not see the point of comparing Python with
Ruby, however, or Python with anything else.

So instead of coming up with arbitrary problems, why don't
we come up with solutions for "Improving Interpreter Startup Speeds"?

I have only found that using the -S option speeds it up
significantly, but that's only if you're not using any site
packages and only using the built-in libraries.

Can site.py be improved?

--JamesMills

--
--
-- "Problems are solved by method"
Oct 27 '08 #24

To make Python faster, you can:

1.) Use mod_python, not CGI.
2.) Use a special Python server that stays resident in memory, and call
it from compiled C code. For example, the C code could communicate with this
server over pipes or TCP (or via special files, with the result coming back
in another file).
You can improve this server by splitting threads into Python
subprocesses that stay alive for X minutes.
You have one control process (py), and this process (like Apache)
communicates with the subprocesses, kills them after a timeout, and starts a
new one if needed.
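
A rough sketch of the second idea (everything here, including the socket path
and handle(), is made up for illustration): a long-lived Python server on a
Unix socket, where the C side, or even just a tool like socat, writes one
request per connection and reads the result back.

#!/usr/bin/env python
"""Toy resident "script server": start it once, then send requests over a
Unix socket instead of paying interpreter startup for every invocation."""
import os
import socket

SOCK_PATH = "/tmp/pyserver.sock"   # made-up path for this sketch

def handle(request):
    # Placeholder for the real text transform.
    return request.upper()

def serve():
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCK_PATH)
    server.listen(5)
    while True:
        conn, _ = server.accept()
        try:
            data = conn.recv(65536)
            reply = handle(data.decode("utf-8", "replace"))
            conn.sendall(reply.encode("utf-8"))
        finally:
            conn.close()

if __name__ == "__main__":
    serve()

For a quick test without writing the C client, something like
echo hello | socat - UNIX-CONNECT:/tmp/pyserver.sock should do.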

dd

James Mills wrote:
> On Mon, Oct 27, 2008 at 5:46 PM, Gabriel Genellina
> <ga*******@yahoo.com.ar> wrote:
>>> +1 This thread is stupid and pointless.
>>> Even for a so-called cold startup 0.5s is fast enough!
>> I don't see the need to be rude.
>> And I DO care for Python startup time and memory footprint, and others do
>> too. Even if it's a stupid thing (for you).
>
> I apologize. I do not see the point of comparing Python with
> Ruby, however, or Python with anything else.
>
> So instead of coming up with arbitrary problems, why don't
> we come up with solutions for "Improving Interpreter Startup Speeds"?
>
> I have only found that using the -S option speeds it up
> significantly, but that's only if you're not using any site
> packages and only using the built-in libraries.
>
> Can site.py be improved?

--JamesMills
Oct 27 '08 #25

David Cournapeau wrote:
> On Mon, Oct 27, 2008 at 2:36 PM, Terry Reedy <tj*****@udel.edu> wrote:
>> Is this a theoretical problem or an actual one, that we might have other
>> suggestions for?
>
> Any command-line tool based on Python is a real example of that problem.

No, it is not.
The specific problem that you wrote and I responded to was

"Not if the startup is the main cost for a command you need to repeat
many times."

in a short enough period so that the startup overhead was a significant
fraction of the total time and therefore a burden.

Oct 27 '08 #26

James Mills wrote:
> So instead of coming up with arbitrary problems, why don't
> we come up with solutions for "Improving Interpreter Startup Speeds"?

The current developers, most of whom use Python daily, are aware that
faster startup would be better. 2.6 and 3.0 start up quicker because
some devs combed through the list of startup imports to see what
could be removed (or, in one case, I believe, consolidated). Some were.
Anyone who is still itching on this subject can seek further
improvements and, if successful, submit a patch.

Or, one could check the Python wiki for a StartUpTime page and see if
one needs to be added or improved/updated with information from the
PyDev list archives to make it easier for a new developer to get up to
speed on what has already been done in this area and what might be done.
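
For anyone who wants to dig into that, one rough way to see what actually gets
imported at startup on a given build is the interpreter's -v flag, which
traces imports to stderr (treat this as a sketch; the exact output format
varies between versions):

$ python -v -c "pass" 2>&1 | grep '^import ' | wc -l
$ python -v -S -c "pass" 2>&1 | grep '^import ' | wc -l

Comparing the two counts gives a feel for how much of the default startup
work comes from site.py and friends.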

tjr

Oct 27 '08 #27

Lie
On Oct 27, 2:36 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
> "James Mills" <prolo...@shortcircuit.net.au> writes:
>> Heaven knows! I hardly think invoking hundreds
>> or possibly thousands of short-lived Python
>> interpreters is an optimal solution; perhaps that is
>> what spawned this particular thread.
>
> It's not optimal, but it is very common (CGI, for example).
CGI? When you're talking about CGI, network traffic is simply the
biggest bottleneck, not something like Python interpreter startup
time. Also, welcome to the 21st century, where CGI is considered an
outdated protocol.
Oct 29 '08 #28

On Wed, Oct 29, 2008 at 9:43 PM, Lie <Li******@gmail.com> wrote:
> On Oct 27, 2:36 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
>> It's not optimal, but it is very common (CGI, for example).
>
> CGI? When you're talking about CGI, network traffic is simply the
> biggest bottleneck, not something like Python interpreter startup
> time. Also, welcome to the 21st century, where CGI is considered an
> outdated protocol.
That's right. That's why we have WSGI. That's
why we built mod_wsgi for Apache. Hell, that's
why we actually have really nice web frameworks
such as CherryPy, Pylons, Paste, etc. They
perform pretty damn well!

--JamesMills

--
--
-- "Problems are solved by method"
Oct 29 '08 #29

Maybe Ruby is the right language for your needs.

Just sayin'.

On Sun, 2008-10-26 at 13:19 +0000, Pedro Borges wrote:
> The scripts I need to run must be executed with no apparent delay,
> especially when the text transforms are simple.
On Oct 26, 2008, at 11:13 AM, James Mills wrote:
On Sun, Oct 26, 2008 at 11:23 AM, BJörn Lindqvist
<bj*****@gmail.com> wrote:
How are you getting those numbers? 330 μs is still pretty fast, isn't
it? :) Most disks have a seek time of 10-20 ms, so it seems implausible
to me that Ruby would be able to cold start in 47 ms.
$ time python -c "pass"

real 0m0.051s
user 0m0.036s
sys 0m0.008s

$ time python3.0 -c "pass"

real 0m0.063s
user 0m0.048s
sys 0m0.004s

And yes, I agree: the CPython interpreter's startup time is
a stupid thing to be worrying about, especially since it
is never the bottleneck.

Python loads plenty fast enough!

--JamesMills

--
--
-- "Problems are solved by method"

--
http://mail.python.org/mailman/listinfo/python-list
Oct 29 '08 #30

Terry Reedy:
> The current developers, most of whom use Python daily, [...]

Thank you for bringing some light into this thread, so filled with worse
than useless comments.

Bye,
bearophile
Oct 29 '08 #31

2008/10/27 James Mills <pr******@shortcircuit.net.au>:
> On Mon, Oct 27, 2008 at 5:40 PM, David Cournapeau <co******@gmail.com> wrote:
>> Depends on the tool: build tools and source control tools are examples
>> where it matters (especially when you start interfacing them with IDEs or
>> editors). Having fast command-line tools is an important feature of
>> UNIX, and if you want to insert a Python-based tool into a given
>> pipeline, it can hurt if the pipeline is updated regularly.
>
> Fair enough. But still:
> 0.5s cold startup is fast enough.
> 0.08s warm startup is fast enough.
>
> Often "fast enough" is "fast enough".
Nope, when it comes to start up speed the only thing that is fast
enough is "instantly." :) For example, if I write a GUI text editor in
Python, the total cold start up time might be 1500 ms on a cold
system: 750 ms for the interpreter and 750 ms for the app itself.
However, if I also have other processes competing for IO, torrent
downloads or compilations for example, the start up time grows in
proportion to the disk load. For example, if there is 50% constant
disk load, my app will start in 1.5 / (1 - 0.5) = 3 seconds (in the
best case, assuming IO access is allocated as efficiently as possible
when the number of processes grows, which it isn't). If the load is
75%, the start time becomes 1.5 / (1 - 0.75) = 6 seconds.

Now if the Python interpreter's start up time were 200 ms, my app's
start up time with 75% disk load would become (0.2 + 0.75) / (1 - 0.75) = 3.8
seconds, which is significantly better.
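
As a quick back-of-the-envelope check of those numbers (same simplifying
assumption as above: everything just scales by 1 / (1 - disk load)):

def start_time_under_load(cold_start_seconds, disk_load):
    """Rough estimate: cold start stretched by competing disk load (0..1)."""
    assert 0.0 <= disk_load < 1.0
    return cold_start_seconds / (1.0 - disk_load)

print(start_time_under_load(1.5, 0.75))         # 6.0  -> 750 ms interpreter + 750 ms app
print(start_time_under_load(0.2 + 0.75, 0.75))  # ~3.8 -> with a 200 ms interpreter start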
--
mvh Björn
Oct 29 '08 #32

BJörn Lindqvist wrote:
2008/10/27 James Mills <pr******@shortcircuit.net.au>:
>On Mon, Oct 27, 2008 at 5:40 PM, David Cournapeau <co******@gmail.com> wrote:
>>Depends on the tool: build tools and source control tools are examples
>>where it matters (especially when you start interfacing them with IDEs or
>>editors). Having fast command-line tools is an important feature of
>>UNIX, and if you want to insert a Python-based tool into a given
>>pipeline, it can hurt if the pipeline is updated regularly.
>Fair enough. But still:
>0.5s cold startup is fast enough.
>0.08s warm startup is fast enough.
>
>Often "fast enough" is "fast enough".

> Nope, when it comes to start up speed the only thing that is fast
> enough is "instantly." :) For example, if I write a GUI text editor in
> Python, the total cold start up time might be 1500 ms on a cold
> system: 750 ms for the interpreter and 750 ms for the app itself.
> However, if I also have other processes competing for IO, torrent
> downloads or compilations for example, the start up time grows in
> proportion to the disk load. For example, if there is 50% constant
> disk load, my app will start in 1.5 / (1 - 0.5) = 3 seconds (in the
> best case, assuming IO access is allocated as efficiently as possible
> when the number of processes grows, which it isn't). If the load is
> 75%, the start time becomes 1.5 / (1 - 0.75) = 6 seconds.
>
> Now if the Python interpreter's start up time were 200 ms, my app's
> start up time with 75% disk load would become (0.2 + 0.75) / (1 - 0.75) = 3.8
> seconds, which is significantly better.

But still not fast enough to be regarded as even close to "instant", so
you appear to be fiddling while Rome burns ...

regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC http://www.holdenweb.com/

Oct 30 '08 #33
