Bytes IT Community

Embedding: Is it possible to limit the number of virtual instructions executed?

Hi experts-

I have a potential use case of embedding Python where it must
cooperate with the host C/C++ application in a single thread. That is,
the C code and Python interpreter need to cooperate and time-share, or
yield to each other.

The main loop would need to do something like:

check for events
for events with c handlers:
process in c
for events with python handlers:
add to queue
if have time:
python_continue_execution(number of vm instructions)

The goal is to be able to set some limit on the amount of time spent in
the interpreter on each pass of the loop, without relying on the
Python-side code. (I realise this will not be absolute, especially if the
Python code calls C extensions, but it will be more straightforward for
the users.)

Does anyone have any suggestions for how this could be achieved?

Thanks in advance,

Julian
Nov 1 '08 #1
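[Editor's note: there is no python_continue_execution() in the CPython API; the name in the pseudocode above is the poster's wish. As a rough pure-Python sketch of the cooperative pattern (statement-level granularity via generators, not real bytecode budgeting; all names are illustrative), the host loop could look like:]

```python
from collections import deque

def event_handler(event):
    """Long-running Python-side handler, written as a generator
    so the host loop can suspend it between steps."""
    total = 0
    for i in range(event):
        total += i
        yield            # hand control back to the host loop
    yield total          # final yield carries the result

def host_loop(events, steps_per_pass=100):
    """Model of the C main loop: each pass resumes pending Python
    work for at most steps_per_pass 'instructions' (here: yields)."""
    pending = deque(event_handler(e) for e in events)
    results = []
    while pending:
        # ... check for events, run C handlers here ...
        gen = pending.popleft()
        for _ in range(steps_per_pass):
            try:
                value = next(gen)
            except StopIteration:
                break                  # handler finished; drop it
            if value is not None:
                results.append(value)
        else:
            pending.append(gen)        # budget exhausted; resume next pass
    return results
```

For example, host_loop([3, 4]) returns [3, 6], with each handler advanced at most steps_per_pass yields per pass of the loop.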
7 Replies


Does anyone have any suggestions for how this could be achieved?

You'll have to adjust Python/ceval.c. Look for _Py_Ticker, which
provides some fairness for Python threads (releasing the GIL after
_Py_CheckInterval instructions). If you decrement another global
variable there, you can determine that the limit has been reached,
and raise an exception (say).

Regards,
Martin
Nov 1 '08 #2
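[Editor's note: patching Python/ceval.c means maintaining a modified interpreter. As a hedged alternative, on CPython 3.7+ a trace function can request per-opcode events (frame.f_trace_opcodes) and enforce an instruction budget without core changes. It is far slower than a ceval.c hook, but illustrates the same idea; run_limited and OpcodeLimit are illustrative names, not standard API:]

```python
import sys

class OpcodeLimit(Exception):
    pass

def run_limited(func, max_opcodes):
    """Run func(), raising OpcodeLimit once max_opcodes bytecode
    instructions have executed (CPython 3.7+ only)."""
    budget = max_opcodes

    def tracer(frame, event, arg):
        nonlocal budget
        frame.f_trace_opcodes = True   # request per-opcode trace events
        if event == "opcode":
            budget -= 1
            if budget <= 0:
                # Raising here propagates the exception into the
                # frame being traced, aborting its execution.
                raise OpcodeLimit(f"exceeded {max_opcodes} opcodes")
        return tracer

    sys.settrace(tracer)
    try:
        return func()
    finally:
        sys.settrace(None)
```

run_limited(lambda: sum(range(10)), 100000) returns 45, while a tight infinite loop raises OpcodeLimit after roughly the budgeted number of instructions.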

P: n/a
ju************@gmail.com wrote:
Hi experts-

I have a potential use case of embedding Python where it must co-
operate with the host c/c++ application in a single thread. That is,
the c code and python interpreter need to cooperate and time share, or
yield to each other.

The main loop would need to do something like:

check for events
for events with c handlers:
process in c
for events with python handlers:
add to queue
if have time:
python_continue_execution(number of vm instructions)

The goal is to be able to set some limit on the amount of time in the
interpreter in each loop, and not rely on the python side code. (I
realise this will not be absolute, especially if the python code calls
c extensions, but it will be more straight forward for the users.)

Does anyone have any suggestions for how this could be achieved?

Does this help at all?

>>> help(sys.setcheckinterval)
Help on built-in function setcheckinterval in module sys:

setcheckinterval(...)
setcheckinterval(n)

Tell the Python interpreter to check for asynchronous events every
n instructions. This also affects how often thread switches occur.

I *don't* know any more than this.

tjr

Nov 1 '08 #3
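[Editor's note: setcheckinterval governs a global tick counter shared by all threads, so it throttles thread switching rather than bounding one caller's work per host-loop pass. For later readers: Python 3.2 replaced the tick scheme with a time-based switch interval, and sys.setcheckinterval was removed in 3.9. A minimal sketch of the old call and its modern equivalent:]

```python
import sys

# The old tick-based API (Python 2 / early 3.x, removed in 3.9):
#   sys.setcheckinterval(100)     # check every 100 virtual instructions
# Since Python 3.2 the GIL is released on a time basis instead:
sys.setswitchinterval(0.005)      # request a 5 ms thread switch interval
print(sys.getswitchinterval())    # confirm the new interval
```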

ju************@gmail.com writes:
The goal is to be able to set some limit on the amount of time in the
interpreter in each loop, and not rely on the python side code. ...
Does anyone have any suggestions for how this could be achieved?
Not really. Limiting the number of virtual instructions (byte codes)
is of no help, since each one can consume unlimited runtime. I think
for example that bigint exponentiation is one bytecode, and 9**999999
probably takes several seconds to compute.
Nov 1 '08 #4
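[Editor's note: Paul's point is easy to confirm with the dis module: the whole exponentiation compiles to a single instruction (BINARY_POWER on older CPythons, BINARY_OP since 3.11), however large the operands:]

```python
import dis

code = compile("a ** b", "<expr>", "eval")
for instr in dis.Bytecode(code):
    print(instr.opname)
# Loads of a and b, one BINARY_* instruction for the power, and a
# return: the exponentiation itself is a single opcode regardless of
# how long it takes to evaluate.
```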


Thanks to those who replied; it seems that it is possible to do,
but would require some changes to the Python core, and may not be
as fine-grained as I'd presumed. I think I'll consider running
Python in a separate thread and creating a couple of message queues
to/from that thread and the main thread.

Julian

On Nov 1, 12:46 am, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
juliangrend...@gmail.com writes:
The goal is to be able to set some limit on the amount of time in the
interpreter in each loop, and not rely on the python side code. ...
Does anyone have any suggestions for how this could be achieved?

Not really. Limiting the number of virtual instructions (byte codes)
is of no help, since each one can consume unlimited runtime. I think
for example that bigint exponentiation is one bytecode, and 9**999999
probably takes several seconds to compute.
Nov 4 '08 #8
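[Editor's note: the design Julian settles on can be sketched with the stdlib threading and queue modules (all names below are illustrative): the host loop only polls the result queue, so a long-running Python handler can never stall it:]

```python
import queue
import threading

def run_python_side(requests, results):
    """Worker thread: runs Python event handlers. Blocking here
    never stalls the host loop."""
    while True:
        handler, event = requests.get()
        if handler is None:              # sentinel: shut down
            break
        results.put(handler(event))

def host_demo():
    requests = queue.Queue()             # host -> Python worker
    results = queue.Queue()              # Python worker -> host
    worker = threading.Thread(target=run_python_side,
                              args=(requests, results), daemon=True)
    worker.start()

    # Host loop side: hand work over, then poll without blocking.
    requests.put((lambda e: e * 2, 21))
    answer = None
    while answer is None:
        try:
            answer = results.get(timeout=0.1)
        except queue.Empty:
            pass                         # do other host-loop work here

    requests.put((None, None))           # tell the worker to exit
    worker.join()
    return answer

print(host_demo())                       # → 42
```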

This discussion thread is closed

Replies have been disabled for this discussion.