Ah, I love when I hit Tab and then realize I can't tab in a text box
because it changes focus, then hit Backspace and it goes back a page
and I lose everything I have written :(
In a nutshell, if you have a recursive process that uses the recursive
calls as parameters to itself, like Do(Do(n+1,term), Do(n+2,term)), then
your calls can be visualized as a tree. You would hand off the right
parameter to another thread, but only until you've filled all
processors. At that point there would be no more handing off, and each
thread would work its part of the call tree using the normal call
stack process.
If a thread finished before the others, then one of the others could
hand off again at the next most convenient split point.
Since we don't want to mess with threads if the recursive process isn't
going to get very deep, the programmer could specify conditions in an
imperative manner such as:
DoInit(int n, int term)
{
    if( (n - term) >= 100 )
    {
        FlagUsageOfMultithreading();
    }
    Do(n, term);
}
If you were processing some sort of large set of data, your condition
would be based on the number of items in the data set. Of course it's
not provable when or if something will terminate, but since you're only
concerned with identifying what is "too small" for multithreading, the
programmer should be able to make a good estimate of how many calls
small values or small sets of data will require. From that estimate
they could pick the break-even point that makes multithreading worth it.
I suppose there has already been a lot of research along these lines in
the realm of distributed computing. It seems that since the future is
going to be single-user systems running multi-core processors, there is
going to be a need for easy-to-use techniques that let programmers get
the most out of these processors.
Barry Kelly wrote:
"Atmapuri" <di*@community.nospam> wrote:
Hi!
cores, imagining that CalcFirst, CalcSecond and '*' on ints were
operations sufficiently slow to benefit from parallel calculation, which
implies extra latency for communicating the return values.
Considering that the OS switches between threads about every 20ms.
This assumes that there are other tasks that are using 100% of every
CPU. Also, a thread's time slice ends sooner if it blocks on a
synchronization object. It only makes sense to multi-thread these
calculations if not all CPUs are being utilized to 100% anyway, which
implies that there will (probably) be an idle CPU for the task, which in
turn implies that it should take less than 20ms before the thread gets
started (a slice "timeout" will not be needed at all if a CPU is idle).
It would only make sense to use multiple cores for tasks that take
longer than 20ms to compute. And that is usually a very "fat" chunk of
processing.
I don't agree with your reasoning.
-- Barry
--
http://barrkel.blogspot.com/