C++ inventor Bjarne Stroustrup answers the Multicore Proust Questionnaire

gremlin wrote:
http://www.cilk.com/multicore-blog/b...-Questionnaire

It's an interesting interview.
Sep 28 '08 #2
"gremlin" <gr*****@rosetattoo.comwrote in message
news:v5******************************@comcast.com. ..
http://www.cilk.com/multicore-blog/b...-Questionnaire
I get a not found error:

The requested URL
/multicore-blog/bid/6703/C-Inventor-Bjarne-Stroustrup-answers-the-Multicore-Proust-Questionnaire
was not found on this server.

Where is the correct location?

Sep 28 '08 #3
Chris M. Thomasson wrote:
> "gremlin" <gr*****@rosetattoo.com> wrote in message
> news:v5******************************@comcast.com...
>> http://www.cilk.com/multicore-blog/b...-Questionnaire
> I get a not found error:
> [...]
> Where is the correct location?

The link is the correct location; I just tried it.

--
Ian Collins.
Sep 28 '08 #4
"Ian Collins" <ia******@hotmail.comwrote in message
news:6k************@mid.individual.net...
Chris M. Thomasson wrote:
>"gremlin" <gr*****@rosetattoo.comwrote in message
news:v5******************************@comcast.com ...
>>http://www.cilk.com/multicore-blog/b...-Questionnaire

I get a not found error:

The requested URL
/multicore-blog/bid/6703/C-Inventor-Bjarne-Stroustrup-answers-the-Multicore-Proust-Questionnaire
was not found on this server.

Where is the correct location?

The link is the correct location I just tried it.
Hey, it works now! Weird; perhaps temporary server glitch. Who knows.

;^/

Sep 28 '08 #5

"Chris M. Thomasson" <no@spam.invalidwrote in message
news:Gx******************@newsfe12.iad...
"Ian Collins" <ia******@hotmail.comwrote in message
news:6k************@mid.individual.net...
>Chris M. Thomasson wrote:
>>"gremlin" <gr*****@rosetattoo.comwrote in message
news:v5******************************@comcast.co m...
http://www.cilk.com/multicore-blog/b...-Questionnaire
I get a not found error:

The requested URL
/multicore-blog/bid/6703/C-Inventor-Bjarne-Stroustrup-answers-the-Multicore-Proust-Questionnaire
was not found on this server.

Where is the correct location?

The link is the correct location I just tried it.

Hey, it works now! Weird; perhaps temporary server glitch. Who knows.

;^/

Q: The most important problem to solve for multicore software:

A: How to simplify the expression of potential parallelism.
Humm... What about scalability? That's a very important problem to solve.
Perhaps the most important. STM simplifies the expression of parallelism,
but it's not really scalable at all.

I guess I would have answered:
CT-A: How to simplify the expression of potential parallelism __without
sacrificing scalability__.


Q: My worst fear about how multicore technology might evolve:

A: Threads on steroids.
Well, threads on steroids and proper distributed algorithms can address the
scalability issue. Nothing wrong with threading on steroids. Don't be
afraid!!!!! I am a threading freak, so I am oh so VERY BIASED!!! ;^|

Oh well, that's my 2 cents.

Sep 28 '08 #6
Chris M. Thomasson wrote:
> [...]
> Hey, it works now! Weird; perhaps a temporary server glitch. Who knows.

Well, it is running on Windows :)

--
Ian Collins.
Sep 28 '08 #7
In article <2G*****************@newsfe12.iad>, "Chris M. Thomasson"
<no@spam.invalid> wrote:
> [...]
> Q: The most important problem to solve for multicore software:
> A: How to simplify the expression of potential parallelism.
> [...]
> Q: My worst fear about how multicore technology might evolve:
> A: Threads on steroids.
> [...]

Sounds like Stroustrup wants to minimize extra notation in source code
relating to parallel execution, to not have it take on a life of its own.
Exceptions might be an example of minimal impact, where lots of code
doesn't require any explicit mention of handling exceptions.
Sep 28 '08 #8
On Sep 28, 6:31 am, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> [...]
> Q: The most important problem to solve for multicore software:
> A: How to simplify the expression of potential parallelism.
>
> Humm... What about scalability? That's a very important
> problem to solve. Perhaps the most important. STM simplifies
> the expression of parallelism, but it's not really scalable at
> all.
And what do you think simplifying the expression of potential
parallelism achieves, if not scalability?

[...]
> Q: My worst fear about how multicore technology might evolve:
> A: Threads on steroids.
>
> Well, threads on steroids and proper distributed algorithms
> can address the scalability issue. Nothing wrong with
> threading on steroids. Don't be afraid!!!!! I am a threading
> freak, so I am oh so VERY BIASED!!! ;^|
I'm not too sure what Stroustrup was getting at here, but having
to write explicitly multithreaded code (with e.g. manual locking
and synchronization) is not a good way to achieve scalability.
Futures are probably significantly easier to use, and in modern
Fortran, if I'm not mistaken, there are special constructs to
tell the compiler that certain operations can be parallelized.
And back some years ago, there was a fair amount of research
concerning automatic parallelization by the compiler; I don't
know where it stands now.

Of course, a lot depends on the application. In my server,
there's really nothing that could be parallelized in a given
transaction, but we can run many transactions in parallel. For
that particular model of parallelization, classical explicit
threading works fine.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Sep 28 '08 #9
James Kanze wrote:
> [...]
> I'm not too sure what Stroustrup was getting at here, but having
> to write explicitly multithreaded code (with e.g. manual locking
> and synchronization) is not a good way to achieve scalability.
> [...]
We, along with Fortran and C programmers, can use OpenMP, which, from my
limited experience, works very well.

--
Ian Collins.
Sep 28 '08 #10
On Sep 27, 5:55 pm, "gremlin" <grem...@rosetattoo.com> wrote:
> http://www.cilk.com/multicore-blog/b...Bjarne-Stroust...
> Q: The most important problem to solve for multicore software:
> A: How to simplify the expression of potential parallelism.

That is interesting, especially since the C++0x committee aims at the
very low level of library-based parallelism. They will not be able
to "simplify the expression of potential parallelism" that way. Not in
C++0x, at least.

Best Regards,
Szabolcs
Sep 28 '08 #11
James Kanze <ja*********@gmail.com> wrote:
> [...]
> I'm not too sure what Stroustrup was getting at here, but having
> to write explicitly multithreaded code (with e.g. manual locking
> and synchronization) is not a good way to achieve scalability.
Agreed. I have played around with Occam's expression of Communicating
Sequential Processes (CSP). I would like to see CSP explored further in
C++.

What I have found so far is
http://www.twistedsquare.com/cppcspv1/docs/index.html
Sep 28 '08 #12
"Szabolcs Ferenczi" <sz***************@gmail.comwrote in message
news:18**********************************@x35g2000 hsb.googlegroups.com...
On Sep 27, 5:55 pm, "gremlin" <grem...@rosetattoo.comwrote:
http://www.cilk.com/multicore-blog/b...Bjarne-Stroust...
Q: The most important problem to solve for multicore software:
A: How to simplify the expression of potential parallelism.
That is interesting. Especially since the C++0x committee aims at the
very low level of library-based parallelism.
You're mistaken. The language and the library are highly integrated and
depend on one another. It's definitely NOT a purely library-based solution.
Similar to the way a POSIX-compliant compiler interacts with a PThreads
library.

> They will not be able
> to "simplify the expression of potential parallelism" that way. Not in
> C++0x, at least.
C++ is not about simplification, as it's a low-level systems language.
However, due to its low-level nature you can certainly use C++0x to create a
brand new language that attempts to "simplify the expression of potential
parallelism".

Sep 28 '08 #13

"James Kanze" <ja*********@gmail.comwrote in message
news:4a**********************************@m73g2000 hsh.googlegroups.com...
On Sep 28, 6:31 am, "Chris M. Thomasson" <n...@spam.invalidwrote:
"Chris M. Thomasson" <n...@spam.invalidwrote in
messagenews:Gx******************@newsfe12.iad...
"Ian Collins" <ian-n...@hotmail.comwrote in message
>news:6k************@mid.individual.net...
>Chris M. Thomasson wrote:
>>"gremlin" <grem...@rosetattoo.comwrote in message
>>>news:v5******************************@comcast.c om...
>>>>http://www.cilk.com/multicore-blog/b...Bjarne-Stroust...
[...]
Q: The most important problem to solve for multicore software:
A: How to simplify the expression of potential parallelism.
Humm... What about scalability? That's a very important
problem to solve. Perhaps the most important. STM simplifies
expression of parallelism, but its not really scaleable at
all.
And what do you think simplifying the expression of potential
parallelism achieves, if not scalability?
Take one attempt at simplifying the expression of potential parallelism:
STM. Unfortunately, it's not really able to scale. The simplification can
introduce overhead which interferes with scalability. IMVHO, message-passing
has potential. At least I know how to implement it in a way that basically
scales to any number of processors.


> [...]
> Q: My worst fear about how multicore technology might evolve:
> A: Threads on steroids.
> [...]
> I'm not too sure what Stroustrup was getting at here, but having
> to write explicitly multithreaded code (with e.g. manual locking
> and synchronization) is not a good way to achieve scalability.
That's relative to the programmer. I have several abstractions packaged into
a library which allow one to create highly scalable programs using
threads. However, if the programmer is not skilled in the art of
multi-threading, well, then it's not going to do any good!

;^(

> Futures are probably significantly easier to use,
They have some "caveats". I have implemented futures, and know that a truly
scaleable impl needs to use distributed queuing which does not really follow
true global FIFO. There can be ordering anomalies that the programmer does
not know about, and will bit them in the a$% if some of their algorihtms
depend on certain orders of actions.

> and in modern
> Fortran, if I'm not mistaken, there are special constructs to
> tell the compiler that certain operations can be parallelized.
> And back some years ago, there was a fair amount of research
> concerning automatic parallelization by the compiler; I don't
> know where it is now.
No silver bullets in any way, shape or form. Automatic parallelization
sometimes works for a narrow type of algorithm. Usually, breaking up arrays
across multiple threads. But the programmer is not out of the woods,
because they will still need to manually implement enhancements that are KEY
to scalability (e.g., cache-blocking). No silver bullets indeed.

> Of course, a lot depends on the application. In my server,
> there's really nothing that could be parallelized in a given
> transaction, but we can run many transactions in parallel. For
> that particular model of parallelization, classical explicit
> threading works fine.
Absolutely.

Sep 28 '08 #14
"Daniel T." <da******@earthlink.netwrote in message
news:da****************************@earthlink.vsrv-sjc.supernews.net...
James Kanze <ja*********@gmail.comwrote:
Q: My worst fear about how multicore technology might evolve:
A: Threads on steroids.
Well, threads on steroids and proper distributed algorihtms
can address the scalability issue. Nothing wrong with
threading on steroids. Don't be afraid!!!!! I am a threading
freak, so I am oh so VERY BIASED!!! ;^|

I'm not too sure what Stroustrup was getting at here, but having
to write explicitly multithreaded code (with e.g. manual locking
and synchronization) is not a good way to achieve scalability.

Agreed. I have played around with Occam's expression of Communicating
Sequential Processes (CSP). I would like to see CSP explored further in
C++.
IMO, CSP is WAY too high-level to be integrated into the language. However,
you can definitely use C++0x to fully implement Occam and/or CSP. If you
want to use CSP out of the box, well, C++ is NOT for you; period. Keep in
mind, C++ is a low-level systems language.

> What I have found so far is
> http://www.twistedsquare.com/cppcspv1/docs/index.html
Sep 28 '08 #15
On Sep 28, 7:00 pm, "Daniel T." <danie...@earthlink.net> wrote:
> [...]
> Agreed. I have played around with Occam's expression of Communicating
> Sequential Processes (CSP). I would like to see CSP explored further in
> C++.
>
> What I have found so far is
> http://www.twistedsquare.com/cppcspv1/docs/index.html
Well, CSP is a very nice language concept and OCCAM is an interesting
instance of it.

However, CSP does not match very well with objects and shared
resources. If you want to map processes in the CSP sense to objects,
you will end up with what you would call a single-threaded object. The
object has its private state space and any request comes in via events.
The object can await several potential events, but whenever one or more
events are applicable, it selects one non-deterministically and performs
the corresponding action on its own private data space.

It is worth mentioning that at the time CSP was published, there
appeared another elegant language concept: Distributed Processes (DP).
This is something that can be mapped very naturally to objects and
gives you a high level of potential parallelism. In DP an object starts
its own thread, which operates on its own data space, and other objects
can call its methods asynchronously. The initial thread and the called
methods are then executed in an interleaved manner. If the initial
thread finishes, the object continues to serve the potentially
simultaneous method calls, i.e. it becomes a (shared) passive object.
http://brinch-hansen.net/papers/1978a.pdf

Well, both language proposals are much higher level with respect to
parallelism than what is planned for the brave new C++0x.

Best Regards,
Szabolcs
Sep 28 '08 #16

"Szabolcs Ferenczi" <sz***************@gmail.comwrote in message
news:01**********************************@t54g2000 hsg.googlegroups.com...
On Sep 28, 7:00 pm, "Daniel T." <danie...@earthlink.netwrote:
[...]
Agreed. I have played around with Occam's expression of Communicating
Sequential Processes (CSP). I would like to see CSP explored further in
C++.

What I have found so far
ishttp://www.twistedsquare.com/cppcspv1/docs/index.html
Well, CSP is a very nice language concept and OCCAM is an interesting
instance of it.
[...]
It is worth mentioning that at the time CSP was published, there
appeared another elegant language concept: Distributed Processes (DP).
This is something that can be mapped very naturally to objects and
gives you high level of potential parallelism. In DP an object starts
its own thread which operates on its own data space and other objects
can call its methods asynchronously.
Each object starts its own thread? lol, NO WAY! I can definitely implement a
high-performance version of DP in which a plurality of threads multiplexes
multiple objects. N number of threads for O number of objects. N = 4; O =
10000. Sure. No problem. DP does not have to work the way you explicitly
suggest. Sorry, but you're making a huge mistake.

> The initial thread and the called
> methods then are executed in an interleaved manner. If the initial
> thread finishes, the object continues to serve the potentially
> simultaneous method calls, i.e. it becomes a (shared) passive object.
> http://brinch-hansen.net/papers/1978a.pdf
> Well, both language proposals are much higher level with respect to
> parallelism than what is planned into the new brave C++0x.
DP is NOT as expensive as you claim it is. E.g., each object starts its OWN
thread? No way!

;^|

Sep 29 '08 #17
On Sep 29, 12:06 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> "Szabolcs Ferenczi" <szabolcs.feren...@gmail.com> wrote in message
> news:01**********************************@t54g2000hsg.googlegroups.com...
> [...]
> Each object starts its own thread? lol, NO WAY!
Hmm... Yes, *logically* "Each object starts its own thread". That is
the way it is defined by the author of the concept.
> I can definitely implement a
> high-performance version of DP in which a plurality of threads
> multiplexes multiple objects.
Obviously, you do not know what you are talking about. You go and read
about the programming concept Distributed Processes (DP) before you
start claiming how you would like to hack it.
http://brinch-hansen.net/papers/1978a.pdf
> N number of threads for O number of objects. N = 4; O =
> 10000. Sure. No problem. DP does not have to work the way you explicitly
> suggest. Sorry, but you're making a huge mistake.
Well, it is not me who suggests it that way but the author of the
programming language concept. Again, you may perhaps try to read about
it first.
http://brinch-hansen.net/papers/1978a.pdf
> The initial thread and the called
> methods then are executed in an interleaved manner. If the initial
> thread finishes, the object continues to serve the potentially
> simultaneous method calls, i.e. it becomes a (shared) passive object.
> http://brinch-hansen.net/papers/1978a.pdf
> Well, both language proposals are much higher level with respect to
> parallelism than what is planned into the new brave C++0x.

> DP is NOT as expensive as you claim it is. E.g., each object starts its
> OWN thread? No way!
I did not claim it was expensive. You have concluded that, but it is just
because of your ignorance. It is just because you talk about it
without knowing anything about it.

I hope I could help, though.

Best Regards,
Szabolcs
Oct 20 '08 #18
"Szabolcs Ferenczi" <sz***************@gmail.comwrote in message
news:8a**********************************@t18g2000 prt.googlegroups.com...
On Sep 29, 12:06 pm, "Chris M. Thomasson" <n...@spam.invalidwrote:
"Szabolcs Ferenczi" <szabolcs.feren...@gmail.comwrote in message

news:01**********************************@t54g2000 hsg.googlegroups.com...
On Sep 28, 7:00 pm, "Daniel T." <danie...@earthlink.netwrote:
[...]
Agreed. I have played around with Occam's expression of
Communicating
Sequential Processes (CSP). I would like to see CSP explored further
in
C++.
What I have found so far
ishttp://www.twistedsquare.com/cppcspv1/docs/index.html
Well, CSP is a very nice language concept and OCCAM is an interesting
instance of it.
[...]
It is worth mentioning that at the time CSP was published, there
appeared another elegant language concept: Distributed Processes (DP).
This is something that can be mapped very naturally to objects and
gives you high level of potential parallelism. In DP an object starts
its own thread which operates on its own data space and other objects
can call its methods asynchronously.
Each object starts its own thread? lol, NO WAY!
Hmm... Yes, *logically* "Each object starts its own thread". That is
the way it is defined by the author of the concept.
Right. I corrected your major mistake. You stated that each object starts
its own thread. Well, your idea of how to implement DP is not scalable. I
corrected you; why don't you try learning from it? Wow.

>> I can definitely implement a high-performance version of DP in which
>> a plurality of threads multiplexes multiple objects.
> Obviously, you do not know what you are talking about.
Wrong. Try again.

> You go and read
> about the programming concept Distributed Processes (DP) before you
> start claiming how you would like to hack it.
> http://brinch-hansen.net/papers/1978a.pdf
Listen, you would implement it with an object per thread, because that's
what you explicitly said. I know for a fact that I can do it in a way which
is scalable. Multiple objects can be bound to a single thread and
communication between them is multiplexed. DP can be single-threaded in
that context. However, I would do it with a thread-pool.


>> N number of threads for O number of objects. N = 4; O =
>> 10000. Sure. No problem. DP does not have to work the way you explicitly
>> suggest. Sorry, but you're making a huge mistake.
> Well, it is not me who suggests it that way but the author of the
> programming language concept.

He is smarter than you are.

> Again, you may perhaps try to read about
> it first.
> http://brinch-hansen.net/papers/1978a.pdf

IMVHO, the author SURELY knows that multiplexing can be used to implement
DP; you do not. Sorry, but that's the way it is. You think that a thread
per object is needed. Well, it's not; you're mistaken.

> [...]
>> DP is NOT as expensive as you claim it is. E.g., each object starts its
>> OWN thread? No way!
> I did not claim it was expensive.
Yes you did. You said that an object creates a thread. Dare me to quote you?
Well, what if there are 100,000 objects?

> You have concluded that, but it is just
> because of your ignorance. It is just because you talk about it
> without knowing anything about it.
> I hope I could help, though.
You helped me confirm my initial thoughts. Sorry, but DP can be implemented
via a thread-pool and multiplexing. No object-per-thread crap is needed. I
quote you:

"Well, it is not me who suggests it that way but the author of the
programming language concept."

Sorry. But the author knows that DP can be implemented through
message-passing, a thread-pool, and multiplexing such that it can be
single-threaded, or run on a ten-thousand-processor system. You need to
understand that fact. I suggest that you learn how to create scalable
algorithms. DP is one of them. However, the way you describe it is
detestable at best.

:^|

Oct 21 '08 #19

"Chris M. Thomasson" <no@spam.invalidwrote in message
news:cJ*****************@newsfe02.iad...
"Szabolcs Ferenczi" <sz***************@gmail.comwrote in message
news:8a**********************************@t18g2000 prt.googlegroups.com...
On Sep 29, 12:06 pm, "Chris M. Thomasson" <n...@spam.invalidwrote:
"Szabolcs Ferenczi" <szabolcs.feren...@gmail.comwrote in message

news:01**********************************@t54g2000 hsg.googlegroups.com...
On Sep 28, 7:00 pm, "Daniel T." <danie...@earthlink.netwrote:

[...]
Agreed. I have played around with Occam's expression of
Communicating
Sequential Processes (CSP). I would like to see CSP explored
further in
C++.

What I have found so far
ishttp://www.twistedsquare.com/cppcspv1/docs/index.html
Well, CSP is a very nice language concept and OCCAM is an interesting
instance of it.

[...]

It is worth mentioning that at the time CSP was published, there
appeared another elegant language concept: Distributed Processes
(DP).
This is something that can be mapped very naturally to objects and
gives you high level of potential parallelism. In DP an object starts
its own thread which operates on its own data space and other objects
can call its methods asynchronously.

Each object starts its own thread? lol, NO WAY!
>Hmm... Yes, *logically* "Each object starts its own thread". That is
the way it is defined by the author of the concept.

Right. I correct your major mistake. You stated that each object starts
its own thread. Well, your idea on how to implement DP is not scaleable. I
corrected you; why don't you try learning from it? Wow.
I corrected you by informing you that a thread-pool and multiplexing can
implement DP using a bounded number of threads. Your answer was that I
don't know what I am writing about. Well, you make me laugh, Szabolcs. One
thing you know how to do is make me laugh. Thanks.

Oct 21 '08 #20
I was perhaps WAY too harsh... Let me sum things up... I know for a FACT
that:

Distributed Processes
http://brinch-hansen.net/papers/1978a.pdf

can be implemented with a thread-pool, message-passing and multiplexing such
that N threads can handle O objects. Think N == 4; O == 1,000,000. I KNOW
the author understands this fact; it's COMMON SENSE. So be it. However...

Szabolcs Ferenczi seemed to suggest that an object needed to have its own
thread. Well, if I take that to another extreme, an object needs its own
personal process. I think I flamed him too harshly. He is in need of
information, not flames. Well, I am sorry!

;^/

Oct 21 '08 #21
