> Um... So let's say you have an opaque object ref from the OS that
> represents hundreds of megs of data (e.g. memory-resident video). How
> do you get that back to the parent process without serialization and
> IPC? What should really happen is just use the same address space so
> just a pointer changes hands. THAT's why I'm saying that a separate
> address space is generally a deal breaker when you have large or
> intricate data sets (i.e. when performance matters).

You can try to assign the buffer in shared memory, which can
be managed by Nikita the Spider's shm module.
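I haven't checked shm's exact calls, so here's a hedged sketch of the same
idea using the stdlib's multiprocessing.shared_memory as a stand-in (the
names SIZE, worker and demo are just illustrative): the parent creates a
named segment, the child attaches by name, and only the name - effectively
the pointer - changes hands, never the data:

```python
from multiprocessing import Process, shared_memory

SIZE = 1024 * 1024   # 1 MB for the sketch; the post's 100 MB works the same way

def worker(name):
    """Runs in a child process: attach to the segment by name and
    twiddle one byte in place.  No copy, no pickling of the payload."""
    seg = shared_memory.SharedMemory(name=name)
    seg.buf[0] = 255
    seg.close()          # detach, but don't destroy

def demo():
    # Parent creates the named segment once (zero-filled by the OS).
    seg = shared_memory.SharedMemory(create=True, size=SIZE)
    try:
        p = Process(target=worker, args=(seg.name,))
        p.start()
        p.join()
        return seg.buf[0]   # the child's in-place write is visible here
    finally:
        seg.close()
        seg.unlink()        # destroy the segment
```

Calling demo() returns 255: the child mutated the parent's view of the
buffer without a single byte crossing a pipe.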
Then you can implement what is essentially a systolic-array
structure, passing the big buffer along a chain of processes, each of
which may or may not be running on a different processor, to do whatever
magic it has to do to complete the whole transformation
(filter, FFT, decimation, compression, MPEG, whatever...).
<aside>
This may be faster than forking an OS thread - don't subprocesses get
a COPY of the parent's environment?
</aside>
But this will give you only one process running at a time, as you
can't do stuff simultaneously to the same data.
So you will need to split a really big RAM area into your big buffers, so
that each of the processes you contemplate running separately can be given
one 100 MB area (out of the shared big one) to own and do its magic on.
When it is finished, it passes ownership back, and the block is
assigned to the next process in the sequence, while a new block from
the OS is assigned to the first process, and so on.
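A sketch of that block-rotation scheme, again with the stdlib's
multiprocessing.shared_memory standing in for shm, and with hypothetical
stage functions: the Queues carry only block indices (the "pointers"),
never the data itself, so each stage owns a block only between receiving
its index and forwarding it on.

```python
from multiprocessing import Process, Queue, shared_memory

BLOCK = 1024 * 1024    # 1 MB blocks for the sketch (100 MB in the post)
NBLOCKS = 4

def stage(inq, outq, seg_name, delta):
    """One pipeline stage.  `delta` stands in for the real work
    (filter, FFT, decimation...); here it just bumps the block's
    first byte in place."""
    seg = shared_memory.SharedMemory(name=seg_name)
    while True:
        i = inq.get()
        if i is None:            # poison pill: shut down, pass it on
            outq.put(None)
            break
        off = i * BLOCK
        seg.buf[off] = (seg.buf[off] + delta) % 256   # twiddle in place
        outq.put(i)              # hand ownership to the next stage
    seg.close()

def run_pipeline():
    seg = shared_memory.SharedMemory(create=True, size=NBLOCKS * BLOCK)
    try:
        q0, q1, q2 = Queue(), Queue(), Queue()
        stages = [Process(target=stage, args=(q0, q1, seg.name, 1)),
                  Process(target=stage, args=(q1, q2, seg.name, 10))]
        for p in stages:
            p.start()
        for i in range(NBLOCKS):   # feed block indices, then the pill
            q0.put(i)
        q0.put(None)
        for _ in range(NBLOCKS):   # drain the finished indices
            q2.get()
        assert q2.get() is None
        for p in stages:
            p.join()
        # Each block passed through both stages: 0 + 1 + 10 = 11.
        return [seg.buf[i * BLOCK] for i in range(NBLOCKS)]
    finally:
        seg.close()
        seg.unlink()
```

run_pipeline() returns [11, 11, 11, 11]: both stages worked on every
block, and the only things that moved between processes were small
integers.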
So you still have shared-RAM IPC, but there is no serialisation.
And you don't move the data unless you want to; you can update
or twiddle in place. It's the serialisation that kills the performance.
And the pointers can be passed by the same mechanism, if I
understand what shm does after a quick look.
So you can build a real ripsnorter - it rips this, while it
snorts the previous and tears the antepenultimate...
- Hendrik