Bytes IT Community

Python OS

P: n/a
Is it possible to prototype an operating system in Python? If so, what
would such a task entail? (i.e. How would one write a boot-loader in
Python?)

- Richard B.
Jul 18 '05 #1
24 Replies


P: n/a
Richard Blackwood wrote:
Is it possible to prototype an operating system in Python? If so, what
would such a task entail? (i.e. How would one write a boot-loader in
Python?)

There have been lengthy discussions on this subject in this group - google
is your friend.

Generally speaking, it's not possible - there is a fair amount of
low-level stuff to be programmed for interrupt routines and the like that
cannot be done in Python.

--
Regards,

Diez B. Roggisch
Jul 18 '05 #2

P: n/a
Diez B. Roggisch wrote:
Richard Blackwood wrote:
Is it possible to prototype an operating system in Python? If so, what
would such a task entail? (i.e. How would one write a boot-loader in
Python?)

There have been lengthy discussions on this subject in this group - google
is your friend.

I know, I read them. The conclusion was that indeed, it can and in fact
has been done.
Generally speaking, it's not possible - there is a fair amount of
low-level stuff to be programmed for interrupt routines and the like that
cannot be done in Python.

More what I meant was whether I can _prototype_ an OS in Python (i.e.
make a virtual OS).

P.S. If one can program interrupt routines in C, they can do the same
in Python.
Jul 18 '05 #3

P: n/a
> I know, I read them. The conclusion was that indeed, it can and in fact
has been done.
Hm. I'll reread them myself and see what has been achieved in this field.
But I seriously doubt that someone took a python interpreter and started
writing an OS.

I can imagine having an OS _based_ on Python, where the OS API is exposed
using Python - but still this requires a fair amount of low-level stuff that
can't be done in Python itself, but must be written in C or even assembler.
More what I meant was whether I can _prototype_ an OS in Python (i.e.
make a virtual OS).
P.S. If one can program interrupt routines in C, they can do the same
in Python.


Show me how you point the interrupt jump tables at your routines in Python,
and how you write time-critical code that has to execute in a few hundred
CPU cycles. And how well that works together with the GIL.
--
Regards,

Diez B. Roggisch
Jul 18 '05 #4

P: n/a
Diez B. Roggisch wrote:
Hm. I'll reread them myself and see what has been achieved in this field.
But I seriously doubt that someone took a python interpreter and started
writing an OS.


I am aware of at least one attempt. See
http://cleese.sourceforge.net/cgi-bin/moin.cgi and the mailing list
archive. They managed to boot a specially modified VM and run simple
Python programs. Unfortunately the project was abandoned about a year ago.
--
Benji York
be***@benjiyork.com

Jul 18 '05 #5

P: n/a
Benji York wrote:
Diez B. Roggisch wrote:
Hm. I'll reread them myself and see what has been achieved in this
field. But I seriously doubt that someone took a python interpreter and
started writing an OS.


I am aware of at least one attempt. See
http://cleese.sourceforge.net/cgi-bin/moin.cgi and the mailing list
archive. They managed to boot a specially modified VM and run simple
Python programs. Unfortunately the project was abandoned about a year
ago.


And it backs my assertions:

--------------
Code that is written in either Assembly Language, Boa (see BoaPreprocessor)
or C is often referred to here (and in mailing list discussions) as ABC
Code. (Coincidentally, "ABC" is one of the languages that Python was
originally based on)
Different components of Cleese live in different layers:
Layer 1
start-up code, low-level hardware services and any library code required by
Python virtual machine - ABC code

Layer 2
the Python virtual machine - C code

Layer 3
the kernel and modules - Python code

Layer 4
user-level applications - Python code
---------------

http://cleese.sourceforge.net/cgi-bi...seArchitecture

So the lowest layer _is_ written in assembler, C, or something else that
allows for pythonesque source but generates assembler. That was precisely
my point.

--
Regards,

Diez B. Roggisch
Jul 18 '05 #6

P: n/a
Diez B. Roggisch wrote:
And it backs my assertions:


Definitely. I missed that part.

I wonder if someone were to start a similar project today if they would
be able to use Pyrex (which generates C) to do large parts of the OS.

Perhaps when I retire <wink>.
--
Benji York
be***@benjiyork.com

Jul 18 '05 #7

P: n/a
P.S. If one can program interrupt routines in C, they can do the same
in Python.


Show me how you point the interrupt jump tables at your routines in Python,
and how you write time-critical code that has to execute in a few hundred
CPU cycles. And how well that works together with the GIL.

Here is my logic: If one can do X in C and Python is C-aware (in other
words Python can be exposed to C) then Python can do X via such exposure.
Jul 18 '05 #8
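Richard's "Python is C-aware" premise is at least demonstrable at the library level, even though, as the replies note, calling a C library is a very different thing from servicing an interrupt. A minimal sketch using the standard-library ctypes module to call the C runtime's own malloc and free from Python (the library lookup assumes a POSIX system):

```python
import ctypes
import ctypes.util

# Locate and load the C runtime (assumes POSIX; on Windows the name
# would differ, e.g. "msvcrt"). CDLL(None) falls back to the global
# symbol namespace, which also exposes malloc on POSIX.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare the C signatures so pointers survive the round trip intact.
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

buf = libc.malloc(64)      # 64 raw bytes, outside Python's object heap
assert buf                 # a NULL result would mean allocation failed

# Touch the memory through the raw pointer.
ctypes.memset(buf, 0xAB, 64)
first = ctypes.cast(buf, ctypes.POINTER(ctypes.c_ubyte))[0]
print(hex(first))          # prints 0xab

libc.free(buf)
```

This only shows that CPython can reach C-level facilities; it says nothing about jump tables, latency, or the GIL issues raised above.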

P: n/a
Benji York wrote:
Diez B. Roggisch wrote:
And it backs my assertions:

Definitely. I missed that part.

I wonder if someone were to start a similar project today if they
would be able to use Pyrex (which generates C) to do large parts of
the OS.

Perhaps when I retire <wink>.


An intriguing possibility; I do not see what obstacles exist here, but I
am sure they exist. A question: in Pyrex, may I perform a malloc?
Jul 18 '05 #9

P: n/a
Richard Blackwood wrote:
P.S. If one can program interrupt routines in C, they can do the same
in Python.


Show me how you point the interrupt jump tables at your routines in
Python, and how you write time-critical code that has to execute in a
few hundred CPU cycles. And how well that works together with the GIL.

Here is my logic: If one can do X in C and Python is C-aware (in other
words Python can be exposed to C) then Python can do X via such exposure.


Unfortunately the logic is flawed, even in the case that you
quoted above!

You _cannot_ use Python for a time-critical interrupt, even
when you allow for pure C or assembly code as the bridge
(since Python _cannot_ be used natively for interrupts, of
course), because -- as noted above! -- the interrupt must
execute in a few hundred CPU cycles.

Given that the cost of invoking the Python interpreter on
a bytecode-interrupt routine would be several orders of
magnitude higher, I don't understand why you think it
is possible for it to be as fast.

Of course, if you will allow both assembly/C code here and
there as a bridge, *and* you are willing to accept an operating
system that is arbitrarily slower at certain time-critical
operations (such as responding to mouse activities) than we
are used to now, then certainly Python can be used for such things...

-Peter
Jul 18 '05 #10

P: n/a
Peter Hansen wrote:
Richard Blackwood wrote:
P.S. If one can program interrupt routines in C, they can do the same
in Python.
Show me how you point the interrupt jump tables at your routines in
Python, and how you write time-critical code that has to execute in a
few hundred CPU cycles. And how well that works together with the GIL.
Here is my logic: If one can do X in C and Python is C-aware (in
other words Python can be exposed to C) then Python can do X via such
exposure.

Unfortunately the logic is flawed, even in the case that you
quoted above!


Yes and no. See below.

You _cannot_ use Python for a time-critical interrupt, even
when you allow for pure C or assembly code as the bridge
(since Python _cannot_ be used natively for interrupts, of
course), because -- as noted above! -- the interrupt must
execute in a few hundred CPU cycles.
I'm really not trying to contradict nor stir things up. But the OP
wanted to know if it were possible to prototype an OS and in a
follow-up, referred to a virtual OS. Maybe I misread the OP, but it
seems that he is not concerned with creating a _real_ OS (one that talks
directly to the machine), it seems that he is concerned with building
all the components that make up an OS for the purpose of....well.....he
didn't really state that.....or maybe I missed it.

So, asking in total ignorance, and deferring to someone with obviously
more experience than I have (like you, Peter), would it be possible to
create an OS-like application that runs in a Python interpreter that
does OS-like things (i.e. scheduler, interrupt handling, etc.) and talks
to a hardware-like interface? If we're talking about a virtual OS,
(again, I'm asking in ignorance) would anything really be totally time
critical? Especially if the point were learning how an OS works?

Given that the cost of invoking the Python interpreter on
a bytecode-interrupt routine would be several orders of
magnitude higher, I don't understand why you think it
is possible for it to be as fast.
I totally agree with you...sort of. I totally agree with your technical
assessment. However, I'm reading the OP a different way. If he did
mean a virtual OS and if time isn't critical, and he was thinking,
"well, I'm getting shot down for proposing to do this in Python, so
maybe it isn't possible in Python, but it is possible in C and since I
can call C from Python, then I should be able to do it", then maybe he
has a point. Or, maybe I'm just totally misreading the OP. So, if he's
saying that he can just call the C code from python and it'd be just as
fast doing interrupt handling that way, then I agree with you. But if
he's talking about just the functionality and not the time, is that
possible?

Of course, if you will allow both assembly/C code here and
there as a bridge, *and* you are willing to accept an operating
system that is arbitrarily slower at certain time-critical
operations (such as responding to mouse activities) than we
are used to now, then certainly Python can be used for such things...
OK - so here's my answer. It should be possible, but it will be slower,
which seems to be acceptable for what he meant when mentioning
prototyping and a virtual OS. But here's another question. Would it be
possible, if I've interpreted him correctly, to write the whole thing in
Python without directly calling C code or assembly? Even if it were
unbearably slow and unfit for use for anything other than, say, a
learning experience? Kind of like a combustion engine that has part of
it replaced with transparent plastic - you dare not try to run it, but
you can manually move the pistons, etc. It's only good for education.
-Peter


Jeremy

Jul 18 '05 #11
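Jeremy's "OS-like application that runs in a Python interpreter" is certainly buildable. As a hedged illustration (every name here is invented, not taken from any real project): a cooperative round-robin scheduler over generator tasks, with "interrupts" modelled as queued events delivered between time slices:

```python
from collections import deque

class ToyKernel:
    """Cooperative round-robin scheduler. Tasks are generators; each
    yield gives up the time slice (a stand-in for a timer interrupt)."""

    def __init__(self):
        self.ready = deque()
        self.pending = deque()
        self.vector = {}     # irq number -> handler: a software vector table

    def spawn(self, task):
        self.ready.append(task)

    def register(self, irq, handler):
        self.vector[irq] = handler

    def raise_irq(self, irq):
        self.pending.append(irq)

    def run(self):
        log = []
        while self.ready:
            # Deliver any pending 'interrupts' before the next slice.
            while self.pending:
                irq = self.pending.popleft()
                log.append(self.vector[irq](irq))
            task = self.ready.popleft()
            try:
                next(task)                 # run one time slice
                self.ready.append(task)    # still runnable, requeue it
            except StopIteration:
                pass                       # task finished
        return log

def worker(name, slices, kernel):
    for i in range(slices):
        if name == "a" and i == 1:
            kernel.raise_irq(7)            # simulate a device raising IRQ 7
        yield

kernel = ToyKernel()
kernel.register(7, lambda irq: "handled irq %d" % irq)
kernel.spawn(worker("a", 3, kernel))
kernel.spawn(worker("b", 2, kernel))
result = kernel.run()
print(result)                              # ['handled irq 7']
```

Nothing here is time-critical, which is exactly the point: the functionality of scheduling and interrupt delivery survives, the timing realism does not.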

P: n/a
<SNIP>

I'm really not trying to contradict nor stir things up. But the OP
wanted to know if it were possible to prototype an OS and in a
follow-up, referred to a virtual OS. Maybe I mis-read the OP, but it
seems that he is not concerned with creating a _real_ OS (one that
talks directly to the machine), it seems that he is concerned with
building all the components that make up an OS for the purpose
of....well.....he didn't really state that.....or maybe I missed it.
You understand me entirely, Jeremy. The goal is to create a _virtual_ OS
that will represent and behave (minus speed) like a real OS. It will
comprise all the components necessary for a _real_ OS, and if the
only way to do it is to simulate hardware as well, so be it. If the
only way to do it is to handle _real_ interrupts via excruciatingly slow
Python-to-C calls, so be it. So you understand me entirely, as I do not
wish to create an OS that is usable in the traditional sense.

So, asking in total ignorance, and deferring to someone with obviously
more experience than I have (like you, Peter), would it be possible to
create an OS-like application that runs in a Python interpreter that
does OS-like things (i.e. scheduler, interrupt handling, etc.) and
talks to a hardware-like interface? If we're talking about a virtual
OS, (again, I'm asking in ignorance) would anything really be totally
time critical? Especially if the point were learning how an OS works?
Time is not an issue.

Given that the cost of invoking the Python interpreter on
a bytecode-interrupt routine would be several orders of
magnitude higher, I don't understand why you think it
is possible for it to be as fast.

I totally agree with you...sort of. I totally agree with your
technical assessment. However, I'm reading the OP a different way.
If he did mean a virtual OS and if time isn't critical, and he was
thinking, "well, I'm getting shot down for proposing to do this in
Python, so maybe it isn't possible in Python, but it is possible in C
and since I can call C from Python, then I should be able to do it",
then maybe he has a point. Or, maybe I'm just totally misreading the
OP. So, if he's saying that he can just call the C code from python
and it'd be just as fast doing interrupt handling that way, then I
agree with you. But if he's talking about just the functionality and
not the time, is that possible?


I agree as well, but somehow it was interpreted that I believe it
possible to achieve the same speed in Python as in C, which I never
said. So again, you understand me entirely. Functionality and not
time, exactly.

Of course, if you will allow both assembly/C code here and
there as a bridge, *and* you are willing to accept an operating
system that is arbitrarily slower at certain time-critical
operations (such as responding to mouse activities) than we
are used to now, then certainly Python can be used for such things...

OK - so here's my answer. It should be possible, but it will be
slower, which seems to be acceptable for what he meant when mentioning
prototyping and a virtual OS. But here's another question. Would it
be possible, if I've interpreted him correctly, to write the whole
thing in Python without directly calling C code or assembly? Even if
it were unbearably slow and unfit for use for anything other than,
say, a learning experience? Kind of like a combustion engine that has
part of it replaced with transparent plastic - you dare not try to run
it, but you can manually move the pistons, etc. It's only good for
education.

That is my question, but the consensus seems to be no.
Jul 18 '05 #12

P: n/a
Peter Hansen wrote:
Richard Blackwood wrote:
P.S. If one can program interrupt routines in C, they can do the same
in Python.
Show me how you point the interrupt jump tables at your routines in
Python, and how you write time-critical code that has to execute in a
few hundred CPU cycles. And how well that works together with the GIL.
Here is my logic: If one can do X in C and Python is C-aware (in
other words Python can be exposed to C) then Python can do X via such
exposure.

Unfortunately the logic is flawed, even in the case that you
quoted above!


Reread what I wrote above, Peter; I said _nothing_ about speed. It says,
"If....then Python can do X": _can do_, NOT that if....then Python can
do X just as well speed-wise as C. It is not writ.

You _cannot_ use Python for a time-critical interrupt, even
when you allow for pure C or assembly code as the bridge
(since Python _cannot_ be used natively for interrupts, of
course), because -- as noted above! -- the interrupt must
execute in a few hundred CPU cycles.
Are you so sure of this, Peter? It certainly seems that this might be
possible, but as you point out, if one uses Python for _time critical_
interrupts, the code will not live up to the _time critical_ aspect.
Indeed, I agree and never said otherwise. I could care less if it is
extremely slow, as speed or even usability was not my aim (I never
indicated such either; in fact, I indicated otherwise by utilizing
such terms as _virtual_ and _prototype_ [as Jeremy bravely points out]).

Given that the cost of invoking the Python interpreter on
a bytecode-interrupt routine would be several orders of
magnitude higher, I don't understand why you think it
is possible for it to be as fast.
I never said this Peter.

Of course, if you will allow both assembly/C code here and
there as a bridge, *and* you are willing to accept an operating
system that is arbitrarily slower at certain time-critical
operations (such as responding to mouse activities) than we
are used to now, then certainly Python can be used for such things...


Why thank you Peter, that is exactly my aim.
Jul 18 '05 #13

P: n/a
>
Are you so sure of this, Peter? It certainly seems that this might be
possible, but as you point out, if one uses Python for _time critical_
interrupts, the code will not live up to the _time critical_ aspect.
Indeed, I agree and never said otherwise. I could care less if it is
extremely slow, as speed or even usability was not my aim (I never
indicated such either; in fact, I indicated otherwise by utilizing
such terms as _virtual_ and _prototype_ [as Jeremy bravely points out]).

I did not only mention the timing aspect, but also the GIL (Global
Interpreter Lock) aspect of a Python-coded interrupt routine. That renders
Python useless in such a case, as it causes a deadlock.

I've had my share of embedded programming on various cores, with varying
degrees of OS already available to me, as well as low-level assembly
hacking on old 68k machines such as the Amiga. My feeble attempts at task
schedulers and the like don't qualify as an OS, but they taught me about the
difficulties that arise when trying to cope with data structure integrity
in a totally asynchronous event like an interrupt.

All that stuff has to be so low-level and so carefully adjusted to timing
requirements that Python is ruled out there - sometimes even C doesn't make
up for it.

That's what I had in mind when answering your question.

--
Regards,

Diez B. Roggisch
Jul 18 '05 #14

P: n/a
Diez B. Roggisch wrote:
Are you so sure of this, Peter? It certainly seems that this might be
possible, but as you point out, if one uses Python for _time critical_
interrupts, the code will not live up to the _time critical_ aspect.
Indeed, I agree and never said otherwise. I could care less if it is
extremely slow, as speed or even usability was not my aim (I never
indicated such either; in fact, I indicated otherwise by utilizing
such terms as _virtual_ and _prototype_ [as Jeremy bravely points out]).

I did not only mention the timing aspect, but also the GIL (Global
Interpreter Lock) aspect of a Python-coded interrupt routine. That renders
Python useless in such a case, as it causes a deadlock.

I am ignorant in these respects, but deadlocks do not sound like a good
thing (contrary to what Martha would say).
I've had my share of embedded programming on various cores, with varying
degrees of OS already available to me, as well as low-level assembly
hacking on old 68k machines such as the Amiga. My feeble attempts at task
schedulers and the like don't qualify as an OS, but they taught me about the
difficulties that arise when trying to cope with data structure integrity
in a totally asynchronous event like an interrupt.

All that stuff has to be so low-level and so carefully adjusted to timing
requirements that Python is ruled out there - sometimes even C doesn't make
up for it.
Do you mean that there are no entirely C OSs?
That's what I had in mind when answering your question.

Understood, however, note the terms prototype and virtual. Perhaps I
could create virtual hardware where needed, and play around with the
timing issues in this manner (emulate them and create solutions
prototyped in Python).
Jul 18 '05 #15

P: n/a
> Do you mean that there are no entirely C OSs?

I doubt it. E.g. GCC lacks the ability to create proper interrupt routines
for 68k-based architectures, as these need an rte (return from exception)
instead of an rts (return from subroutine) at their end, which GCC isn't
aware of. So for registering an interrupt handler, I had to resort to
assembler - at least by "poking" the right values into RAM. This consisted
of a structure
typedef struct {
    short        prefix[9];
    t_intHandler handler;
    short        suffix[8];
} t_intWrapper;

where handler was set to my C function that was supposed to become an
interrupt handler and

t_intWrapper w = {
    { 0x4e56, 0x0000, 0x08f9,    /* prefix: opcode words that push the   */
      0x0001, 0x00ff, 0xfa19,    /* registers on the stack, ending in a  */
      0x48e7, 0xfffe, 0x4eb9 },  /* jsr to the absolute address below    */
    handler,                     /* the C function the jsr jumps into    */
    { 0x4cdf, 0x7fff, 0x08b9,    /* suffix: restore the registers and    */
      0x0001, 0x00ff, 0xfa19,    /* return from the exception with rte   */
      0x4e5e, 0x4e73 }
};

being the initialisation of that t_intWrapper struct. The code simply
pushes registers on the stack and then jumps into the C function. The
address of such a struct was then set as the appropriate interrupt vector.

Now one can argue whether that is still C, as it's written in a cpp file
that's run through a compiler - but if that really counts as C, then of
course you can write anything in Python (or VB or whatever language you
choose) by simply writing out hexdigits to a file....
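The "writing out hexdigits" quip can literally be done: the same t_intWrapper image can be assembled with Python's standard struct module. A sketch only - the opcode words are copied from the C initialiser above, the handler address is a made-up placeholder, and nothing here could actually install the vector:

```python
import struct

# Prefix/suffix opcode words copied from the C initialiser above.
PREFIX = (0x4e56, 0x0000, 0x08f9,
          0x0001, 0x00ff, 0xfa19,
          0x48e7, 0xfffe, 0x4eb9)
SUFFIX = (0x4cdf, 0x7fff, 0x08b9, 0x0001, 0x00ff, 0xfa19, 0x4e5e, 0x4e73)

def build_wrapper(handler_addr):
    """Byte image of a t_intWrapper: 9 prefix words, a 32-bit handler
    address, 8 suffix words - all big-endian, as on the 68k."""
    return (struct.pack(">9H", *PREFIX)
            + struct.pack(">I", handler_addr)
            + struct.pack(">8H", *SUFFIX))

image = build_wrapper(0x00F01234)   # placeholder address, for illustration
print(len(image))                   # 38 bytes: 9*2 + 4 + 8*2
```

Which of course proves Diez's point rather than refuting it: Python is acting here as an assembler's assembler, not as the language the handler runs in.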
Understood, however, note the terms prototype and virtual. Perhaps I
could create virtual hardware where needed, and play around with the
timing issues in this manner (emulate them and create solutions
prototyped in Python).


In your first post, you didn't mention virtual - only prototype. And
prototyping an OS for whatever hardware can't be done in pure Python. That's
all that was said.

Writing a virtual machine for emulation can be done in Python, of course. But
then you can't write an OS for it in Python as well (at least not in
CPython) - as your virtual machine must have some sort of byte-code, memory
model and so on. But the routines you write for the OS now must work
against that machine model, _not_ operate in CPython's execution model
(which is based on an existing OS running on real hardware).

Now, generating code for that machine in Python means that you have to port
Python to your machine model - as Jython is to Java. That porting work
is akin to what projects like Cleese or Unununium do, writing memory
allocation routines and so on - for your machine model.

So I'd still say: no, you can't prototype an OS in Python, neither for
virtual nor real hardware. You _can_ add all sorts of modules, C code and
what not to boot a machine into a Python interpreter that has all sorts of
OS services at hand through certain modules, as the aforementioned projects
attempt.

But creating a VM only leverages that work to the next level, retaining the
initial problems. If you don't do it that way, you don't prototype anything
like an OS, nor implement a machine model, but instead muddle some things
together by creating a machine model that is so powerful (e.g. has a
built-in notion of lists, dicts and the like) that it can't be taken
seriously as educational for writing an OS - at least to me, as the virtue
of writing an OS is to actually _deal_ with low-level matters; otherwise
there is no challenge in it.

I don't say that this is not a worthy project to undertake for educational
purposes - but I wouldn't call it an OS, as it does not teach what creating
an OS from scratch is supposed to teach.

For example, you can't write an OS for the Java VM, as there is no such
thing as interrupts defined for it - instead, IO happens "magically" and is
dealt with on the low level by the underlying OS. The VM only consumes
the results.

--
Regards,

Diez B. Roggisch
Jul 18 '05 #16
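Diez's distinction - that OS routines would have to target the emulated machine model, not CPython's execution model - can be made concrete with a toy machine model (instruction set, opcodes, and layout all invented for illustration):

```python
# Toy machine model: its own memory, register and byte-code, all invented.
LOAD, ADD, STORE, HALT = 0, 1, 2, 3

class ToyVM:
    def __init__(self, size=256):
        self.mem = bytearray(size)   # the machine's own memory, not CPython's
        self.acc = 0                 # single accumulator register
        self.pc = 0                  # program counter

    def run(self, program):
        self.mem[:len(program)] = bytes(program)
        while True:
            # Fetch a two-byte instruction: opcode, operand.
            op, arg = self.mem[self.pc], self.mem[self.pc + 1]
            self.pc += 2
            if op == LOAD:
                self.acc = arg
            elif op == ADD:
                self.acc = (self.acc + arg) & 0xFF   # 8-bit wraparound
            elif op == STORE:
                self.mem[arg] = self.acc
            elif op == HALT:
                return

vm = ToyVM()
vm.run([LOAD, 40, ADD, 2, STORE, 100, HALT, 0])
print(vm.mem[100])   # 42
```

Code "for" this machine is the byte list, not Python; Python merely plays the role of the silicon, which is Diez's point about where the real OS work would live.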

P: n/a
Diez B. Roggisch wrote:
Do you mean that there are no entirely C OSs?

<LOTS OF SWELL CODE SNIPPED>

Now one can argue whether that is still C, as it's written in a cpp file
that's run through a compiler - but if that really counts as C, then of
course you can write anything in Python (or VB or whatever language you
choose) by simply writing out hexdigits to a file....

Looks like a Hybrid.

Understood, however, note the terms prototype and virtual. Perhaps I
could create virtual hardware where needed, and play around with the
timing issues in this manner (emulate them and create solutions
prototyped in Python).
In your first post, you didn't mention virtual - only prototype. And
prototyping an OS for whatever hardware can't be done in pure Python. That's
all that was said.

Writing a virtual machine for emulation can be done in Python, of course. But
then you can't write an OS for it in Python as well (at least not in
CPython) - as your virtual machine must have some sort of byte-code, memory
model and so on. But the routines you write for the OS now must work
against that machine model, _not_ operate in CPython's execution model
(which is based on an existing OS running on real hardware).

All of that can be virtually emulated (virtual memory and so forth).
But creating a VM only leverages that work to the next level, retaining the
initial problems. If you don't do it that way, you don't prototype anything
like an OS, nor implement a machine model, but instead muddle some things
together by creating a machine model that is so powerful (e.g. has a
built-in notion of lists, dicts and the like) that it can't be taken
seriously as educational for writing an OS - at least to me, as the virtue
of writing an OS is to actually _deal_ with low-level matters; otherwise
there is no challenge in it.

*laugh* Now that is hilarious.
I don't say that this is not a worthy project to undertake for educational
purposes - but I wouldn't call it an OS, as it does not teach what creating
an OS from scratch is supposed to teach.

Understood.
For example, you can't write an OS for the Java VM, as there is no such
thing as interrupts defined for it - instead, IO happens "magically" and is
dealt with on the low level by the underlying OS. The VM only consumes
the results.

This has been done as well; there are Java operating systems where IO is
handled by Java and not magic, or so I understand.
Jul 18 '05 #17

P: n/a
"Diez B. Roggisch" <de*********@web.de> writes:
For example, you can't write an OS for the Java VM, as there is no such
thing as interrupts defined for it - instead, IO happens "magically" and is
dealt with on the low level by the underlying OS. The VM only consumes
the results.


OS's have been written for VMs (LISP and Forth) that didn't have the
notion of interrupt before they were built. For LISPMs, interrupt
handlers are LISP objects (*). Java may not be as powerful as LISP,
but I'm pretty sure you could turn interrupts into method invocations
without having to extend the VM.

<mike

(*) <URL: http://home.comcast.net/%7Eprunesquallor/memo444.htm >,
under the section on Stack Groups.

--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Jul 18 '05 #18

P: n/a
> OS's have been written for VMs (LISP and Forth) that didn't have the
notion of interrupt before they were built. For LISPMs, interrupt
handlers are LISP objects (*). Java may not be as powerful as LISP,
but I'm pretty sure you could turn interrupts into method invocations
without having to extend the VM.


How so? An interrupt is an address the processor directly jumps to by
adjusting its PC. The JVM doesn't even have the idea of function pointers.
Invoking a (even static) method involves several lookups in dicts until the
actual code pointer is known - and that's byte code then, not machine
code.

As your examples show, one can implement a VM on top of a considerably thin
layer of low-level code and expose hooks that allow for system
functionality to be developed in a high-level language running bytecode.
Fine. Never doubted that. I've written C wrappers myself that allowed
Python callables to be passed as callbacks to the C lib - no black magic
there. But that took some dozen lines of C code, and time-critical
interrupts won't work properly if you allow their code to be implemented
in a notably slower language.

--
Regards,

Diez B. Roggisch
Jul 18 '05 #19

P: n/a
> Looks like a Hybrid.

Nicely observed. Now, you asked for pure C. Do you call pure C a hybrid?
All of that can be virtually emulated (virtual memory and so forth).
Yeah. So you got your virtual memory - and how do you plan to put it to
use? The CPython implementation conveniently uses malloc, which fetches its
memory from the "real" memory. And allocating Python objects will use that.
So how exactly do you plan to put your nice new simulated memory to use in
Python, the language you want to create an OS in?
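One hedged answer to the question of how simulated memory could be put to use: keep every address as an offset into the model's own bytearray, so the emulated OS allocates from it rather than through CPython's malloc. A minimal bump-allocator sketch (all names invented for illustration):

```python
class ToyMemory:
    """Simulated physical memory with a trivial bump allocator.
    Addresses are offsets into one bytearray, not CPython pointers."""

    def __init__(self, size=1024):
        self.ram = bytearray(size)
        self.brk = 0                   # next free offset

    def alloc(self, n):
        if self.brk + n > len(self.ram):
            raise MemoryError("out of simulated memory")
        addr = self.brk
        self.brk += n
        return addr                    # an offset, meaningful only here

    def write(self, addr, data):
        self.ram[addr:addr + len(data)] = data

    def read(self, addr, n):
        return bytes(self.ram[addr:addr + n])

mem = ToyMemory()
a = mem.alloc(16)
b = mem.alloc(16)
mem.write(a, b"hello")
print(a, b, mem.read(a, 5))   # 0 16 b'hello'
```

This sidesteps nothing, of course: the bytearray itself still comes from CPython's real allocator, which is exactly Diez's objection about where the bottom layer lives.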
This has been done as well; there are Java operating systems where IO is
handled by Java and not magic, or so I understand.


I found this one:

http://www.savaje.com/faqs1.html#item1

Somehow these guys made the design decision to use C/C++ for low-level
stuff. So they seem to share my hilarious viewpoints to a certain degree.

But I'm sure you can show me a Java OS that's purely written in Java. I'm
looking forward to it. I'm especially interested in their core IO driver
code being written in Java.
--
Regards,

Diez B. Roggisch
Jul 18 '05 #20

P: n/a
http://unununium.org/

Regards
Jul 18 '05 #21

P: n/a
"Diez B. Roggisch" <de*********@web.de> writes:
OS's have been written for VMs (LISP and Forth) that didn't have the
notion of interrupt before they were built. For LISPMs, interrupt
handlers are LISP objects (*). Java may not be as powerful as LISP,
but I'm pretty sure you could turn interrupts into method invocations
without having to extend the VM.

How so? An interrupt is an address the processor directly jumps to by
adjusting its PC. The JVM doesn't even have the idea of function pointers.
Invoking a (even static) method involves several lookups in dicts until the
actual code pointer is known - and that's byte code then, not machine
code.


No, that's how most modern machines generate interrupts. That doesn't
mean that your JVM-based system would have to do it that way. An
interrupt could trigger a lookup in a privileged dict to find a code
pointer.

As your examples show, one can implement a VM on top of a considerably thin
layer of low-level code and expose hooks that allow for system
functionality to be developed in a high-level language running bytecode.
Fine. Never doubted that. I've written C wrappers myself that allowed
Python callables to be passed as callbacks to the C lib - no black magic
there. But that took some dozen lines of C code, and time-critical
interrupts won't work properly if you allow their code to be implemented
in a notably slower language.


If you followed the link in my last post, you saw a paper on one of
the early LISPMs, which put a virtual LISP machine in silicon. The
same could be done for JVM. However, the success of LISPMs and Forth
in silicon would contraindicate doing that.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Jul 18 '05 #22

P: n/a
Jeremy Jones wrote:
Peter Hansen wrote:
You _cannot_ use Python for a time-critical interrupt, even
when you allow for pure C or assembly code as the bridge
(since Python _cannot_ be used natively for interrupts, of
course), because -- as noted above! -- the interrupt must
execute in a few hundred CPU cycles.
I'm really not trying to contradict nor stir things up. But the OP
wanted to know if it were possible to prototype an OS and in a
follow-up, referred to a virtual OS. Maybe I misread the OP, but it
seems that he is not concerned with creating a _real_ OS (one that talks
directly to the machine), it seems that he is concerned with building
all the components that make up an OS for the purpose of....well.....he
didn't really state that.....or maybe I missed it.


Richard's post, to which I was replying, didn't include the
proper attributions for his quoted material, so I don't at
this point know who actually wrote what. There was someone
who said you could do interrupts in Python (the argument
being that this was so "because you can do them in C", which
is nonsensical given that they are different languages with
different capabilities), and then someone else wrote about
"time-critical code" of "a few hundred CPU cycles", and then
Richard's reply in effect justified the original argument
again (at least, that was my interpretation. I don't know
if he meant to do that. I suspect not, given his rebuttal.)

Note that I have *no idea* who the OP was in this thread, nor
what questions he asked or what claims he was making. Sorry,
I didn't read it in that detail. I just saw the message to
which I replied, with the unattributed stuff just as I quoted
it, and reacted purely to it. Probably a bad idea, but I
stand by what I said.

If, as you suggest, the OP just wants a prototype, then I say
wonderful, use Python. Just don't waste time writing real
interrupts in Python, and certainly don't try to make the
prototype work in an environment where 100 CPU cycles is
important...
So, asking in total ignorance, and deferring to someone with obviously
more experience than I have (like you, Peter), would it be possible to
create an OS-like application that runs in a Python interpreter that
does OS-like things (i.e. scheduler, interrupt handling, etc.) and talks
to a hardware-like interface? If we're talking about a virtual OS,
(again, I'm asking in ignorance) would anything really be totally time
critical? Especially if the point were learning how an OS works?
Sure, everything you describe would be straightforward and
work wonderfully with Python.
I totally agree with you...sort of. I totally agree with your technical
assessment. However, I'm reading the OP a different way.


I didn't read the OP, just Richard. Unless he was the OP,
in which case I'm confused about various comments that have
been made, but not concerned enough to go back and try to
figure the whole thing out. *Your* comments appear right
on the mark, as far as I can see. ;-)
Of course, if you will allow both assembly/C code here and
there as a bridge, *and* you are willing to accept an operating
system that is arbitrarily slower at certain time-critical
operations (such as responding to mouse activities) than we
are used to now, then certainly Python can be used for such things...

OK - so here's my answer. It should be possible, but it will be slower,
which seems to be acceptable for what he meant when mentioning
prototyping and a virtual OS. But here's another question. Would it be
possible, if I've interpreted him correctly, to write the whole thing in
Python without directly calling C code or assembly?


Nope. Python has no ability to interface to something that is
defined only at the assembly level (interrupt routines) without
using assembly. (I don't even mention C here, as it requires
special non-standard C extensions to support these things, in
effect doing a quick escape to assembly.)

I'll add an additional note: there's a qualitative difference
between being fast enough to respond to hardware interrupts
at the 100-CPU cycle sort of level of performance, and at
a speed 100 times slower. It's not a matter of just having
a slower overall system, which might be fine for a prototype
or a simulation. In fact *it simply won't work*. That's
because if hardware interrupts aren't answered fast enough,
in most non-trivial cases _information will be lost_, and
that means the system is broken. That's the definition of
a "hard realtime system", by the way, and an unfortunate
reason that Python in its current form (i.e. assuming it's
bytecode-interpreted rather than compiled to some kind of
native code) cannot ever be used at the lowest levels of
an OS.
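[Editor's note: the overrun failure described here is easy to model in pure
Python. The sketch below is a toy simulation - all names hypothetical, no
real hardware - of a UART-like device with a one-byte receive buffer: if
the "interrupt handler" doesn't run before the next byte arrives, the
unread byte is overwritten and the information is lost.]

```python
class ToyUart:
    def __init__(self):
        self.buffer = None   # one-byte hardware receive register
        self.overruns = 0

    def byte_arrives(self, b):
        if self.buffer is not None:
            self.overruns += 1          # unread byte clobbered -- data lost
        self.buffer = b

    def service(self):
        b, self.buffer = self.buffer, None
        return b

def run(latency):
    """Service the 'interrupt' once every `latency` arriving bytes."""
    uart, received = ToyUart(), []
    for i, b in enumerate(b"hello world"):
        uart.byte_arrives(b)
        if (i + 1) % latency == 0:
            received.append(uart.service())
    return bytes(received), uart.overruns

print(run(1))   # fast enough: (b'hello world', 0)
print(run(3))   # too slow: most bytes are overruns, and no amount of
                # later processing can recover them
```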

The argument should really be about *how little* assembly
and/or C you can get away with, and perhaps here is where
Mel Wilson should step in with some of his thoughts on the
matter, as I've heard him discuss some ideas in this area. ;-)

-Peter
Jul 18 '05 #23

> If you followed the link in my last post, you saw a paper on one of
the early LISPMs, which put a virtual LISP machine in silicon. The
From that page:

"""
In the software of the Lisp Machine system, code is written in only two
languages (or "levels"): Lisp, and CONS machine microcode. There is never
any reason to hand-code macrocode, since it corresponds so closely with
Lisp; anything one could write in macrocode could be more easily and
clearly written in the corresponding Lisp. The READ, EVAL, and PRINT
functions are completely written in Lisp, including their subfunctions
(except that APPLY of compiled functions is in micro-code). This
illustrates the ability to write system functions in Lisp.
"""

So they do have an underlying assembler. As I said before: I never doubted
the possibility of tightly integrating a higher-level language - even an
interpreted one - with a piece of hardware so that the OS is based on
that high-level language.

But you simply don't _want_ your virtual memory paging code written in
Python or (uncompiled) Lisp, unless you like drinking lots of coffee
sitting idly in front of your machine :)
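[Editor's note: the *policy* side of paging is, however, exactly the kind of
thing that prototypes nicely in Python - good for teaching and for counting
faults over a reference string, even though the real page-fault path can't
live there. A small hypothetical sketch of an LRU page-replacement
simulation:]

```python
# Simulate LRU page replacement: count faults for a page-reference string
# given a fixed number of physical frames.
from collections import OrderedDict

def count_faults(refs, frames):
    resident = OrderedDict()          # page -> None, kept in LRU order
    faults = 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)        # touched: most recently used
        else:
            faults += 1
            if len(resident) >= frames:
                resident.popitem(last=False)  # evict least recently used
            resident[page] = None
    return faults

print(count_faults([1, 2, 3, 1, 4, 2], frames=3))  # -> 5
```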
same could be done for JVM. However, the success of LISPMs and Forth
in silicon would contraindicate doing that.


Then we're not talking about "V"Ms ;) AFAIK there have been attempts to put
Java bytecode interpreters in hardware - but I'm not aware that these have
been a large success in the market, or even reached the market at all.

--
Regards,

Diez B. Roggisch
Jul 18 '05 #24

I didn't read the OP, just Richard. Unless he was the OP,
in which case I'm confused about various comments that have
been made, but not concerned enough to go back and try to
figure the whole thing out. *Your* comments appear right
on the mark, as far as I can see. ;-)
Richard was the OP.
Nope. Python has no ability to interface to something that is
defined only at the assembly level (interrupt routines) without
using assembly. (I don't even mention C here, as it requires
special non-standard C extensions to support these things, in
effect doing a quick escape to assembly.)
My words....
I'll add an additional note: there's a qualitative difference
between being fast enough to respond to hardware interrupts
at the 100-CPU cycle sort of level of performance, and at
a speed 100 times slower. It's not a matter of just having
a slower overall system, which might be fine for a prototype
or a simulation. In fact *it simply won't work*. That's
because if hardware interrupts aren't answered fast enough,
in most non-trivial cases _information will be lost_, and
that means the system is broken. That's the definition of
a "hard realtime system", by the way, and an unfortunate
reason that Python in its current form (i.e. assuming its
bytecode interpreted rather than compiled to some kind of
native code) cannot ever be used at the lowest levels of
an OS.


Yup. And writing an OS is exactly about these nitty-gritty things -
otherwise, it's only a collection of more or less useful lib routines. So
writing a "virtual OS" that doesn't have to deal with problems like these
is barely useful for teaching anything about writing an OS at all...

--
Regards,

Diez B. Roggisch
Jul 18 '05 #25
