Bytes IT Community

Execution state persistence for workflow application

Hi all,
I'm pretty new to the Python language, so please excuse me
if this is a FAQ... I'm very glad to be part of the list! :-)

I'm looking into a way to implement a generic workflow framework in Python.
The basic idea is to use Python scripts as the way to specify workflow
behavior. The framework should not only use scripts as a specification language
but also leverage the Python interpreter for the execution of the
scripts.
One possible solution is to have a one-to-one mapping
between an interpreter process and a workflow instance.

Since I also want to model long-running processes, I need to
cope with execution state persistence. In other words,
I'd like to stop the framework and restart it with all workflow
instance processes resuming exactly where they left off.

Ideally it would be nice to have dumpexec() and loadexec() builtin functions
to dump and reload the state of the interpreter. Unfortunately I've not
seen anything like that...

I've tried looking at the exec statement, but it doesn't seem to be
a good fit for execution persistence...

Questions are:

- does there exist a Python workflow framework that uses
Python scripts as a specification language and that
supports "long-running" workflows? I don't want to reinvent the wheel...

- does the implementation idea sketched above make sense to you?

- how do you suggest implementing execution persistence? Is there
any "standard" solution?

Thanks!!!

Paolo
Jul 18 '05 #1
6 Replies



"Paolo Losi" <p.****@netline.it> wrote in message news:ma*************************************@python.org...
> Hi all,
> I'm pretty new to the python language so please excuse me
> if this is FAQ... I'm very glad to be part of the list! :-)
>
> [snip description of the workflow framework idea]
>
> - does there exist a python framework for workflow that use
> python script as a specification language and that
> support "long running" workflows? I don't want to reinvent the wheel...

Not that I'm aware of.

> - does the implementation idea quickly depicted makes sense to you?

Partly. It's not complete, and that makes it look simpler than
data persistence when it's not. The problem is that one day you will
have to upgrade your program and your last dumpexec() won't be
compatible with your next loadexec(). You will have to separate
code from data to do it. So execution persistence alone is not
enough for real-life use. Why not just use data persistence?

> - how do you suggest to implement execution persistence? Is there
> any "standard" solution?

Use data persistence and your own custom loader. Don't be afraid
of the word "loader" -- it's very simple. For a simple persistent "hello,
world!" program it's about 5-10 lines. I think ZODB is the most
popular data persistence framework. It's very nice and simple.
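To make the "data persistence + custom loader" idea concrete, here is a minimal sketch using only pickle from the standard library (ZODB would play the same role, more robustly). The filename and the shape of the state dict are made up for illustration:

```python
import os
import pickle

STATE_FILE = "workflow_state.pkl"  # illustrative filename

def load_state():
    """Restore persisted workflow data, or start fresh if none exists."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "log": []}  # initial data of a hypothetical workflow

def save_state(state):
    """Checkpoint workflow data to disk."""
    with open(STATE_FILE, "wb") as f:
        pickle.dump(state, f)

state = load_state()
while state["step"] < 3:              # pretend the workflow has 3 steps
    state["log"].append("did step %d" % state["step"])
    state["step"] += 1
    save_state(state)                 # checkpoint after every completed step
```

If the process is killed and restarted, load_state() picks up from the last completed step; only the workflow's *data* is persisted, never the interpreter's execution state.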

-- Serge.


Jul 18 '05 #2

Serge Orlov wrote:
>> Questions are:
>>
>> - does there exist a python framework for workflow that use
>> python script as a specification language and that
>> support "long running" workflows? I don't want to reinvent the wheel...
>
> Not that I'm aware of.
>
>> - does the implementation idea quickly depicted makes sense to you?
>
> Partly. It's not complete and that makes it look more simple than
> data persistence when it's not. The problem is that one day you will
> have to upgrade your program and your last dumpexec won't be
> compatible with your next loadexec(). You will have to separate
> code from data to do it. So it means execution persistence is not
> enough for real life use. Why not just use data persistence alone?


In fact data persistence is not sufficient to stop and resume scripts
in case of, for example, a system reboot.
I do want my workflow scripts to resume exactly (and with the same
globals/locals setup) where they left off...

The real alternative would be to define a new scripting language
with standard constructs (for, while, ...) but again... I don't want
to reinvent the wheel.

I don't see execution persistence as an alternative to data
persistence: I would need both.
>> - how do you suggest to implement execution persistence? Is there
>> any "standard" solution?
>
> Use data persistence and your own custom loader. Don't be afraid
> of the word loader. It's very simple. For a simple persistant "hello,
> world!" program it's about 5-10 lines. I think ZoDB is the most
> popular data persistence framework. It's very nice and simple.
>
> -- Serge.


Thanks!
Paolo
Jul 18 '05 #3

[Serge Orlov]
> The problem is that one day you will
> have to upgrade your program and your last dumpexec won't be
> compatible with your next loadexec(). You will have to separate
> code from data to do it. So it means execution persistence is not
> enough for real life use. Why not just use data persistence alone?

[Paolo Losi]
> In fact data persistence is not sufficient to stop and resume scripts
> in case, for example, system reboot.
> I do want my workflow scripts to resume exactly (and with the same
> globals/locals setup) where they left...
>
> The real alternative would be to define a new script language
> with standard constructs (for, while,...) but again... i don't want
> to reinvent the wheel.
>
> I do not seen execution persistence as an alternative to data
> persistence: I would need both.


You might want to investigate Stackless Python, an excellent research
work which can save and resume execution state, to some degree. Try
the following Google query:

http://www.google.com/search?q=pickl...Astackless.com
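For comparison, here is a rough sketch of resumable execution in standard CPython without Stackless: a generator yields at each checkpoint, and only a small resume token is persisted, because plain CPython generators cannot be pickled (Stackless tasklets can, which is its big advantage). All names here are illustrative:

```python
def workflow(start_step=0):
    """A toy workflow that can be fast-forwarded to a resume point."""
    for step in range(start_step, 4):
        # ... real work for this step would go here ...
        yield step  # checkpoint: caller records `step` somewhere durable

# First run: execute two steps, then simulate a shutdown mid-workflow.
resume_token = None
for step in workflow():
    resume_token = step + 1   # durable record of the next step to run
    if step == 1:
        break                 # "crash" here

# Restart: rebuild the generator from the persisted token and finish.
completed = list(workflow(resume_token))
```

The price of this approach is that the resumable points must be designed in explicitly; Stackless lets you capture the running stack itself.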

HTH,

--
alan kennedy
-----------------------------------------------------
check http headers here: http://xhaus.com/headers
email alan: http://xhaus.com/mailto/alan
Jul 18 '05 #4

On Mon, 24 Nov 2003 08:11:57 +0100, Paolo Losi <p.****@netline.it> wrote:
> Hi all,
> I'm pretty new to the python language so please excuse me
> if this is FAQ... I'm very glad to be part of the list! :-)
>
> I'm looking into a way to implement a generic workflow framework with python.
> The basic idea is to use python scripts as the way to specify workflow
> behavior. The framework should not only use scripts as a specification language
> but is going to leverage on python interpreter for the execution of the
> scripts.
> One of the possible solution is to have a one to one mapping
> between a interpreter process and a workflow instance.

I'm not clear on what all that meant ;-)

for word_or_phrase in ["workflow framework", "workflow behavior", "workflow",
                       "specification language", "interpreter process", "workflow instance"]:
    if you_please(): explain_what_you_mean_by(word_or_phrase)

;-)
> Since I want to be able to model also long running processes I need to
> cope with Execution state persistence. In other words..
> I'd like to stop the framework and restart it having all workflow
> instance processes resume exactly where they left.
>
> Ideally I would be nice to have dumpexec() e loadexec() builtin functions
> to dump and reload the state of the interpreter. I've not seen anything
> like that unfortunately...

That seems pretty large-grained. What happens if you have a power failure
in the middle of dumpexec()?

IOW, it sounds to me like you need something like a transactional database system to log
your state changes, so you can pick up from a consistent state no matter what.
The question then is how to design your system so that it has states that
can be recorded and recovered that way. I think there would be a lot of overhead
in capturing an entire system checkpoint image every time your app transitioned to
a new state, even if such a snapshot function were available.

So OTTOMH ISTM you will wind up designing some state machine that can be initialized
from a TDB, and which will log state changes incrementally in smaller chunks than
all-encompassing blobs. I don't know what kind of control state you want to persist,
but I would guess it should be in terms of getting from one data transaction to the
next, not python loop variables and heap states and such.
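A tiny sketch of that incremental-logging idea: each transition is appended to a log, and on restart the machine is rebuilt by replaying the log. An in-memory list stands in for a real transactional database, and the states and events are made up for illustration:

```python
class LoggedMachine:
    """A state machine whose current state is recoverable from its event log."""

    TRANSITIONS = {                      # (state, event) -> next state
        ("start", "submit"): "pending",
        ("pending", "approve"): "done",
        ("pending", "reject"): "start",
    }

    def __init__(self, log=None):
        self.log = list(log or [])
        self.state = "start"
        for event in self.log:           # replay history to recover state
            self._apply(event)

    def _apply(self, event):
        self.state = self.TRANSITIONS[(self.state, event)]

    def handle(self, event):
        self._apply(event)               # raises KeyError on an invalid event
        self.log.append(event)           # "commit" only after a valid transition

m = LoggedMachine()
m.handle("submit")
m.handle("approve")
# Simulate crash + restart: only the log survives.
recovered = LoggedMachine(m.log)
```

Each log entry is a small incremental record, so a crash between commits loses at most the in-flight transition, never the whole history.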

> I've tried to look at exec statement but it doesn't seem to be good
> for execution persistence...
>
> Questions are:
>
> - does there exist a python framework for workflow that use

"workflow" is a bit too generic for me to guess what you have in mind.

> python script as a specification language and that
> support "long running" workflows? I don't want to reinvent the wheel...

ditto

> - does the implementation idea quickly depicted makes sense to you?

- how do you suggest to implement execution persistence? Is there
any "standard" solution?

The first thing to do when seeking help thinking about stuff is to define terms,
perhaps informally by examples or metaphors, etc., since not everyone
speaks the vernacular of your current focus.

HTH

Regards,
Bengt Richter
Jul 18 '05 #5

I hacked together a Python VM coded in Python not too long ago, and I managed
to get it to hibernate and then resume. It is not too difficult. You should
search Google for the following keywords to find other similar projects:

process reification
mobile code
checkpointing

The key issues are

a) Most non-trivial workflows aren't simple programs. They are event driven,
and tasks may run concurrently. For example, see how Zope (the web site)
handles workflow for submission, acceptance, and retraction of published
documents. You will need to write a pretty convoluted program in Python to
express the states that are involved.

b) Open file handles and sockets need to be restored across sessions

c) Most of these problems can only be overcome by a language that supports
workflow semantics, and Python and most C-style languages are not quite
enough
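Point (b) deserves a concrete sketch: an open file handle cannot be pickled directly, but a wrapper can persist the path and current offset and re-open the file on restore. This is a common pattern, not a complete solution (sockets are much harder), and all names here are illustrative:

```python
import pickle

class ResumableFile:
    """Wraps a file so its path and offset survive pickling; the OS handle does not."""

    def __init__(self, path):
        self.path = path
        self._f = open(self.path, "rb")

    def read(self, n):
        return self._f.read(n)

    def __getstate__(self):
        # Persist only the path and current offset, never the raw handle.
        return {"path": self.path, "pos": self._f.tell()}

    def __setstate__(self, state):
        # Re-open and seek back to where we were before the "crash".
        self.path = state["path"]
        self._f = open(self.path, "rb")
        self._f.seek(state["pos"])

# Demo with a throwaway file.
with open("demo.txt", "wb") as f:
    f.write(b"hello world")

rf = ResumableFile("demo.txt")
first = rf.read(5)                        # consumes b"hello"
restored = pickle.loads(pickle.dumps(rf)) # round-trip through the pickle
rest = restored.read(6)                   # continues with b" world"
```

The restore is only valid if the underlying file still exists and has not changed in the meantime, which is exactly the kind of assumption that makes session restoration hard in general.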

CT

"Alan Kennedy" <al****@hotmail.com> wrote in message
news:3F***************@hotmail.com...
> [snip earlier discussion of execution vs. data persistence]
>
> You might want to investigate Stackless python, an excellent research
> work which can save and resume execution state, to some degree. Try
> the following google query
>
> http://www.google.com/search?q=pickl...Astackless.com

Jul 18 '05 #6

Alan Kennedy wrote:
> [snip earlier discussion of execution vs. data persistence]
>
> You might want to investigate Stackless python, an excellent research
> work which can save and resume execution state, to some degree. Try
> the following google query


Here is a small example that I wrote last weekend for Zope.
It looks very simple: it lets you run 10 answers
*from* the web server against the client in a loop,
with no visible call-backs.
It *is* a server loop, but it's obvious that there
cannot be a simple loop, since the server freezes
until the next request comes in.
Well, it looks simple, but here is the real power!

http://www.centera.de/tismer/stackle...e_demo/runDemo

Will try to finish this and publish, soon -- chris

--
Christian Tismer :^) <mailto:ti****@tismer.com>
Mission Impossible 5oftware : Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a : *Starship* http://starship.python.net/
14109 Berlin : PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34 home +49 30 802 86 56 mobile +49 173 24 18 776
PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04
whom do you want to sponsor today? http://www.stackless.com/

Jul 18 '05 #7
