What's the connection between objects and threads?

Hi

I have to write a multi-threaded program. I decided to take an OO
approach to it. I had the idea to wrap up all of the thread functions
in a mix-in class called Threadable. Then when an object should run
in its own thread, it should implement this mix-in class. Does this
sound like a plausible design decision?

I'm surprised that C++ doesn't have such functionality, say in its
STL. This absence of a thread/object relationship in C++ leads me to
believe that my idea isn't a very good one.

I would appreciate your insights. thanks
Jun 27 '08
On May 20, 11:45 pm, gpderetta <gpdere...@gmail.com> wrote:
On May 20, 9:26 pm, Szabolcs Ferenczi <szabolcs.feren...@gmail.com>
wrote:
On May 19, 8:01 pm, gpderetta <gpdere...@gmail.com> wrote:
On May 19, 6:45 pm, Szabolcs Ferenczi <szabolcs.feren...@gmail.com>
wrote:
On May 19, 11:50 am, James Kanze <james.ka...@gmail.com> wrote:
On May 18, 11:00 pm, Szabolcs Ferenczi <szabolcs.feren...@gmail.com>
wrote:
since you were trolling in the other discussion
thread in "Threading in new C++ standard" too,
...
that threads are not
going to be included at the language level in C++0x. You were
corrected there by people taking part in C++0x standardisation.
And that is simply a lie.
Is it?
[...]
How about this:
<quote>
23. Pete Becker View profile More options Apr 20, 7:04 pm
On 2008-04-20 12:36:50 -0400, James Kanze <james.ka...@gmail.com>
said:
There was never a proposal for introducing the
threading API at the language level, however; that just seems
contrary to the basic principles of C++, and in practice, is
very limiting and adds nothing.
There actually was a proposal for threading at the language level: http://www.open-std.org/jtc1/sc22/wg...005/n1875.html
...
</quote>
http://groups.google.com/group/comp....a5fdf80fd7f461
Q.E.D.
Nope:
Hmmmm...
"threading API at the language level" != "threading at the language
level"
"language level" != "API"
C++ will have threading at the language level, but you will not be
able to
access the threading primitives via keywords;
Then it is not at the language level. Do you know at all what a
language is?
You react:
*sigh*. We have been trying to explain it painfully to you for a long
time.
So you keep claiming that C++0x has threading at the language level
and you are wondering why some of us who already know some concurrent
programming languages cannot accept this.

A fable might be due here:

<fable>
Once the animals gathered at a meeting to decide who is the strongest
animal next to the lion. The bear claimed he is the strongest one next
to the lion. The wolf claimed he is the strongest one next to the
lion. The lion had enough of it and he declared that the rabbit is the
strongest animal next to him and closed the meeting.

The rabbit hearing this got very scared and started shivering. His
wife seeing this told him: Are you mad? Why are you shivering? Go and
bite anybody and you will see that they are scared of you.

The rabbit went to the wolf and bit him on the bottom. The wolf
thought it was a small bee but when he looked back he could see that
it was the rabbit. The rabbit who is the strongest animal next to the
lion. So the wolf got scared, apologised, and slinked away.

The rabbit went to the bear and bit him on the bottom. The bear
thought it was a small mosquito but when he looked back he could see
that it was the rabbit. The rabbit who is the strongest animal next to
the lion. So the bear got scared, apologised, and slinked away.

The rabbit started to enjoy the situation and became very cheeky. He
went to the tiger and bit him on the bottom. The tiger looked back and
promptly swatted the rabbit.

Well, it happened that the tiger was not present at the meeting and
hence he did not know that the rabbit was the strongest animal
next to the lion.
</fable>

Well, it might just have happened that some of us experienced in
concurrent programming were not at the meeting where the committee
decided that C++0x is a concurrent programming language and that it
has threading at the language level.

Best Regards,
Szabolcs
Jun 27 '08 #101
On May 21, 6:41 am, "Chris Thomasson" <cris...@comcast.net> wrote:
"Szabolcs Ferenczi" <szabolcs.feren...@gmail.com> wrote in message

news:ab**********************************@56g2000hsm.googlegroups.com...
On May 19, 6:42 am, Rolf Magnus <ramag...@t-online.de> wrote:
Yes. Copying a process with fork() is very fast. You don't get much of a
performance boost from using threads.
Did you ever hear about heavy weight and light weight processes? I am
just curious.

Name a platform?
I will not name any platform since I did not claim that "fork() is
very fast". Please ask the one who claimed it. Besides, you can guess
the platform.

I have concluded already that some of you have a well developed
conditioned reflex which is so acute that you tend to address me even
if I did not claim anything.

Best Regards,
Szabolcs
Jun 27 '08 #102
On May 21, 3:20 am, c...@mailvault.com wrote:
On May 20, 5:34 pm, Szabolcs Ferenczi <szabolcs.feren...@gmail.com>
wrote:
On May 20, 11:45 pm, gpderetta <gpdere...@gmail.com> wrote:
think about a language like java, which has explicit
synchronized {} blocks. Certainly, by all definitions, this
is language support for threading (even if I'm sure it doesn't fit
your vision of good language threading support).
The `synchronized {} blocks' are language support for specifying
Critical Regions but, you are right, it is not correctly defined in
Java.
Let's consider a little variant of java/c++ hybrid with a
builtin mutex type, where the synchronized block take
the mutex explicitly as a parameter (instead of implicitly
like in java), and obvious semantics.
  mutex m;
  ...
  synchronized(m) {
    // do something in mutual exclusion
  }
So far so good.
So far not so good at all.
1) First of all, the mutex is a library-level tool. It is not a
language tool and should not appear at the language level. The mutex is
part of the library-based solution for achieving mutual exclusion of
the processes. There can be various language means for specifying a
Critical Region, and one of them is the `synchronized' keyword.
However, it is not about the keyword, since you can call it
`synchronized', `region', `cr', or whatever. The main issue is that if
it is a language-level means, it has consequences; see the semantics.
One important semantic issue is that the block marked by this
keyword may contain access to shared variables, and access to shared
variables may only appear in this kind of block. Note that the low-level
mutex can be inserted by the compiler to implement the Critical
Region defined at the language level. On the other hand, the high-level
Critical Region can be implemented by any other means in the
compiler as long as it ensures mutual exclusion. That is one of the
benefits of a high-level language.
2) In procedural languages, if you introduce a keyword for a Critical
Region, you must also give some means to associate which shared
variables are involved. So, fixing your example, if there is a shared
resource `m' you can define a critical region using a keyword like
`synchronized' such as this:
shared int m;     // the `shared' keyword marks the shared variable
...
synchronized(m) {
  // here you can access `m'
}
Now the compiler can make sure that `m' is accessed within Critical
Regions and only within Critical Regions.
Can you see the difference between the language level means and the
library level means?
3) In object-based or higher-level object-oriented languages the unit
of shared resource is obviously an object. So the association of the
Critical Region and the critical resource is naturally merged in an
object. That was one of the failures of Java: they were not brave
enough to mark the class as the shared resource. In a decent
concurrent object-oriented language you could specify a keyword for
the class to mark a shared object (and not for the methods). E.g.:
synchronized class M {
  int m;
public:
  void foo() {
    // do something with `m' in mutual exclusion
  }
};
You must mark the class as a whole and not the individual methods of
it. Then the compiler can again help you to make correct concurrent
programs. This is again not possible with a library-based solution.
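To make the idea concrete, here is a rough sketch (nothing more than an
illustration, using the C++11 library mutex as the low-level tool the
compiler could rely on) of what a compiler might generate for the
`synchronized class M' example above:
__________________________________________________________________
#include <mutex>

// Possible lowering of `synchronized class M': a mutex is added to
// every instance and each public member function body becomes a
// Critical Region on that instance.
class M {
  int m = 0;
  mutable std::mutex monitor;  // inserted by the compiler, not user-visible
public:
  void foo() {
    std::lock_guard<std::mutex> region(monitor);  // entry protocol
    // do something with `m' in mutual exclusion
    ++m;
  }                                               // unlocked on scope exit

  int get() const {
    std::lock_guard<std::mutex> region(monitor);
    return m;
  }
};
__________________________________________________________________
The point is that with the language-level construct this locking cannot
be forgotten or bypassed, whereas a hand-written version like the above
is only a convention.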
You say:
I'm not sure marking the class like that is a good idea.
That is why I try to explain it to you.
> Would you mark vector as synchronized?
It depends on what you want to express in the programming language
notation. If you want to express your intention that the whole vector
is a critical resource, then yes, put it as a member in a class marked
with the keyword `synchronized' or `shared' or whatever. If your
intention is not that the whole vector is a shared resource, you can
partition it.
> If yes, do you expect compilers to figure
out that only one thread is able to access some of your vector objects
at any point in time and disable the synchronization?
No, if I follow you correctly. The compiler does not have to figure
out anything because you have expressed exactly what you wanted. You
wanted the whole vector to be a shared resource. That means the
resource must be protected against simultaneous access.

If you want it otherwise, you must express it so in your programming
language notation.
> I don't want to pay for
extra synchronization when it isn't needed.
Then, you could build your algorithm accordingly. Everything depends
on the notation as a tool of thought. That is why a programming
language must be designed carefully.

Best Regards,
Szabolcs
Jun 27 '08 #103
On May 20, 10:26 pm, Szabolcs Ferenczi <szabolcs.feren...@gmail.com>
wrote:
On May 20, 11:36 am, James Kanze <james.ka...@gmail.com> wrote:
On May 19, 9:22 pm, Szabolcs Ferenczi <szabolcs.feren...@gmail.com>
wrote:
On May 19, 8:43 pm, gpderetta <gpdere...@gmail.com> wrote:
On May 19, 8:23 pm, Szabolcs Ferenczi <szabolcs.feren...@gmail.com>
wrote:
For instance here:
"The SGI implementation of STL is thread-safe only in
the sense that simultaneous accesses to distinct
containers are safe, and simultaneous read accesses
to shared containers are safe. If multiple threads
access a single container, and at least one thread may
potentially write, then the user is responsible for
ensuring mutual exclusion between the threads during
the container accesses. "
>http://www.sgi.com/tech/stl/thread_safety.html
Yes, in that sense is thread safe.
Please explain it to the forum fighters.
No need. They all, especially James Kanze, know very well.
Are you suggesting that he intentionally makes a fool of himself?
http://groups.google.com/group/comp....81088bfec40dab
You ask:
Do you understand English?
I guess so.
You don't seem to.
The SGI statement says exactly what
I said, that the implementation is thread safe.
SGI statement simply contradicts you: "If multiple threads
access a single container, and at least one thread may
potentially write, then THE USER IS RESPONSIBLE FOR ENSURING
MUTUAL EXCLUSION BETWEEN THE THREADS during the container
accesses." It is just safe for reading, so what I said was
correct that it is thread safe to a certain extent.
That's not the only guarantee it gives. It specifies the
contract that you have to respect. In other words, it is
completely thread safe.
The SGI statement also confirms what I said: "The SGI implementation of STL is
thread-safe ONLY IN THE SENSE that ..."
Only in the sense of thread safety normally used by the experts
in the domain. It's true that they felt they had to add this
statement because a lot of beginners have misconceptions about
the meaning of the word. (Maybe you're one of them, and that's
the problem.)
That is it is not "completely thread safe" as you claimed.
Sorry, but that is the accepted definition of "completely thread
safe".
You like to talk big, don't you?
And what is that supposed to mean? Pointing out your
misstatements is "talking big"?

[...]
Besides, you still must show us how you can get elements from
the plain "completely thread safe" STL containers with
multiple consumers. (You cannot show this because you just
talk big as usual.)
Are you trying to say that you cannot use STL containers for
communications between threads? I use std::deque, for example,
in my message queue, and it works perfectly.

This is so basic, there has to be some misunderstanding on your
part.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #104
On May 21, 1:34 am, Szabolcs Ferenczi <szabolcs.feren...@gmail.com>
wrote:
On May 20, 11:45 pm, gpderetta <gpdere...@gmail.com> wrote:
2) In procedural languages if you introduce a keyword for a
Critical Region, you must also give some means to associate
which shared variables are involved. So, fixing your example,
if there is a shared resource `m' you can define a critical
region using a keyword like `synchronized' such as this:
shared int m; // the `shared' keyword marks the shared variable
...
synchronized(m) {
// here you can access `m'
}
Now the compiler can make sure that `m' is accessed within
Critical Regions and only within Critical Regions.
And what does that buy us? It prevents some very useful idioms,
while adding no real additional safety.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #105
On May 21, 3:20 am, c...@mailvault.com wrote:
On May 20, 5:34 pm, Szabolcs Ferenczi <szabolcs.feren...@gmail.com>
wrote:
[...]
3) In object-based or higher level object-oriented languages
the unit of shared resource is obviously an object. So the
association of the Critical Region and the critical resource
is naturally merged in an object. That was one of the
failure of Java that they were not brave enough to mark the
class as the shared resource. In a decent concurrent
object-oriented language you could specify a keyword for the
class to mark a shared object (and not for the methods).
E.g.:
synchronized class M {
int m;
public:
void foo() {
// do something with `m' in mutual exclusion
}
};
You must mark the class as a whole and not the individual
methods of it. Then the compiler can again help you to make
correct concurrent programs. This is again not possible with
library-based solution.
I'm not sure marking the class like that is a good idea.
Would you mark vector as synchronized? If yes, do you expect
compilers to figure out that only one thread is able to access
some of your vector objects at any point in time and disable
the synchronization? I don't want to pay for extra
synchronization when it isn't needed.
That's not really the point (although it certainly would be in
some applications). The point is that this idea was put forward
many, many years ago; it works well when you're dealing with
simple objects, like int's, but it doesn't work when you start
dealing with sets of objects grouped into transactions.
Ensuring transactional integrity in a multi-threaded
environment, without deadlocks, still requires manually managing
locking and unlocking---even scoped locking doesn't really work
here. (You can implement transactions with scoped locking, but
only if you only handle one transaction at a time. In which
case, there's really no point in being multithreaded.)
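Just to make that concrete, here is a minimal sketch (purely
illustrative, with a made-up Account type, assuming C++11 std::mutex) of
the sort of manual lock management a two-object transaction forces on
the application:
__________________________________________________________________
#include <mutex>

struct Account {
    std::mutex mtx;
    long balance = 0;
};

// A transfer touches two objects at once. A scoped lock per object is
// not enough on its own; the application has to impose its own locking
// protocol, here by always acquiring the two mutexes in address order
// so that concurrent transfers cannot deadlock each other.
void transfer(Account& from, Account& to, long amount) {
    if (&from == &to) return;                  // don't lock one mutex twice
    Account* first  = &from < &to ? &from : &to;
    Account* second = &from < &to ? &to   : &from;
    std::lock_guard<std::mutex> lock1(first->mtx);
    std::lock_guard<std::mutex> lock2(second->mtx);
    from.balance -= amount;
    to.balance   += amount;
}
__________________________________________________________________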

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #106
On May 21, 10:47 am, James Kanze <james.ka...@gmail.com> wrote:
Besides, you still must show us how you can get elements from
the plain "completely thread safe" STL containers with
multiple consumers. (You cannot show this because you just
talk big as usual.)

Are you trying to say that you cannot use STL containers for
communications between threads?
What I am saying is that you cannot use it without any extra
synchronisation provided there are multiple producer and consumer
threads. That was my original warning, which triggered your conditioned
reflex.

<quote>
Be aware that although STL is thread-safe to a certain extent, you
must wrap around the STL data structure to make a kind of a bounded
buffer out of it.
</quote>
http://groups.google.com/group/comp....50c4f92d6e0211
> I use std::deque, for example,
in my message queue, and it works perfectly.
That is your homework to show a solution where two competing threads
are consuming from a "completely thread safe" std::deque.

Yes, you talk big like the Bandar-log, that you can do that---but when
it comes to showing it, you escape with the same talk.

We are waiting for your report about your homework.

Best Regards,
Szabolcs
Jun 27 '08 #107
On 2008-05-21 05:18:56 -0400, James Kanze <ja*********@gmail.com> said:
>
That's not really the point (although it certainly would be in
some applications). The point is that this idea was put forward
many, many years ago; it works well when you're dealing with
simple objects, like int's, but it doesn't work when you start
dealing with sets of objects grouped into transactions.
Ensuring transactional integrity in a multi-threaded
environment, without deadlocks, still requires manually managing
locking and unlocking---even scoped locking doesn't really work
here. (You can implement transactions with scoped locking, but
only if you only handle one transaction at a time. In which
case, there's really no point in being multithreaded.)
To put it a little more abstractly: ensuring data integrity in a
multi-threaded application requires an application-level solution. A
library or language can provide tools to make this easier, but they
cannot solve the problem.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
Standard C++ Library Extensions: a Tutorial and Reference"
(www.petebecker.com/tr1book)

Jun 27 '08 #108
On May 21, 6:32 am, Pete Becker <p...@versatilecoding.com> wrote:
On 2008-05-21 05:18:56 -0400, James Kanze <james.ka...@gmail.com> said:
That's not really the point (although it certainly would be in
some applications). *The point is that this idea was put forward
many, many years ago; it works well when you're dealing with
simple objects, like int's, but it doesn't work when you start
dealing with sets of objects grouped into transactions.
Ensuring transactional integrity in a multi-threaded
environment, without deadlocks, still requires manually managing
locking and unlocking---even scoped locking doesn't really work
here. (You can implement transactions with scoped locking, but
only if you only handle one transaction at a time. In which
case, there's really no point in being multithreaded.)

To put it a little more abstractly: ensuring data integrity in a
multi-threaded application requires an application-level solution. A
library or language can provide tools to make this easier, but they
cannot solve the problem.
I agree with the gist of that, but think Kanze put it well enough.
I wouldn't describe an approach that makes deadlocks more likely than
other approaches as having a problem with data integrity. I'd say it
was a problem of application integrity.

Brian Wood
Ebenezer Enterprises
www.webEbenezer.net
Jun 27 '08 #109
"Szabolcs Ferenczi" <sz***************@gmail.com> wrote in message
news:de**********************************@34g2000hsh.googlegroups.com...
On May 21, 6:39 am, "Chris Thomasson" <cris...@comcast.net> wrote:
"Szabolcs Ferenczi" <szabolcs.feren...@gmail.com> wrote in message

news:de**********************************@t54g2000hsg.googlegroups.com...
On May 19, 1:41 am, Ian Collins <ian-n...@hotmail.com> wrote:
Szabolcs Ferenczi wrote:
What is your problem exactly? It would be good if, instead of
trolling, you would prove something finally.
I see your true colours have finally come through, insult all who
disagree with you.
I try to get some proof out of you instead of your constant trolling.
Besides, I knew you would escape instead of proving your claims. It is
the nature of trolls.
Your claim:
Ian already tried to patiently explain why running threads from ctors
can be a bad idea.
On the contrary, I patiently corrected all his mistakes. The last
time in this post:
http://groups.google.com/group/comp....2557fe9a4b7909
Then he failed as I predicted.
I even gave you some code which shows how undefined behavior
results when a pure virtual function is invoked from the ctor of a base
class!
I made the remark about your relevant piece of hack that it is a
clear example of the fact that hackers can misuse any formal notation.
http://groups.google.com/group/comp....d8850e3ec7ab3a
You misused C++ in sequential mode in your demonstration.
I showed you one reason why it's not a good idea to spawn threads from ctors.
I could show you some others, but I suspect that you already know them. See,
I think you know that starting threads from ctors is "generally" a bad idea,
and you're just doing a little trolling for some reason. Oh well.

Jun 27 '08 #110
On May 21, 8:08 pm, "Chris Thomasson" <cris...@comcast.net> wrote:
"Szabolcs Ferenczi" <szabolcs.feren...@gmail.com> wrote in message

news:de**********************************@34g2000hsh.googlegroups.com...
On May 21, 6:39 am, "Chris Thomasson" <cris...@comcast.net> wrote:
"Szabolcs Ferenczi" <szabolcs.feren...@gmail.com> wrote in message
> news:de**********************************@t54g2000hsg.googlegroups.com...
On May 19, 1:41 am, Ian Collins <ian-n...@hotmail.com> wrote:
Szabolcs Ferenczi wrote:
What is your problem exactly? It would be good if, instead of
trolling, you would prove something finally.
I see your true colours have finally come through, insult all who
disagree with you.
I try to get some proof out of you instead of your constant trolling.
Besides, I knew you would escape instead of proving your claims. It is
the nature of trolls.
Your claim:
Ian already tried to patiently explain why running threads from ctors
can be a bad idea.
On the contrary, I patiently corrected all his mistakes. The last
time in this post:
http://groups.google.com/group/comp....2557fe9a4b7909
Then he failed as I predicted.
I even gave you some code which shows how undefined behavior
results when a pure virtual function is invoked from the ctor of a base
class!
I made the remark about your relevant piece of hack that it is a
clear example of the fact that hackers can misuse any formal notation.
http://groups.google.com/group/comp....d8850e3ec7ab3a
You misused C++ in sequential mode in your demonstration.
Hmm...
I showed you one reason why it's not a good idea to spawn threads from ctors.
You showed nothing else but that you can hack C++ even in sequential
mode so that the compiler cannot catch your misuse. That was it.
Congratulations.
I could show you some others, but I suspect that you already know them. See,
I think you know that starting threads from ctors is "generally" a bad idea,
and you're just doing a little trolling for some reason. Oh well.
I talked about disciplined use. Of course, that is not for hackers like
you.

Hackers also do not know that there are computational models where
objects and processes are unified into active objects. And that
exactly means that the object starts its activity right after
initialising its internal state. So it is not at all a new concept. Ok,
you and your friends cannot know that due to lack of education in
concurrent programming.

Best Regards,
Szabolcs
Jun 27 '08 #111
"Szabolcs Ferenczi" <sz***************@gmail.com> wrote in message
news:29**********************************@d77g2000hsb.googlegroups.com...
On May 21, 10:47 am, James Kanze <james.ka...@gmail.com> wrote:
Besides, you still must show us how you can get elements from
the plain "completely thread safe" STL containers with
multiple consumers. (You cannot show this because you just
talk big as usual.)
Are you trying to say that you cannot use STL containers for
communications between threads?
What I am saying is that you cannot use it without any extra
synchronisation provided there are multiple producer and consumer
threads. That was my original warning, which triggered your conditioned
reflex.
<quote>
Be aware that although STL is thread-safe to a certain extent, you
must wrap around the STL data structure to make a kind of a bounded
buffer out of it.
</quote>
http://groups.google.com/group/comp....50c4f92d6e0211
I use std::deque, for example,
in my message queue, and it works perfectly.
That is your homework to show a solution where two competing threads
are consuming from a "completely thread safe" std::deque.
Yes, you talk big like the Bandar-log, that you can do that---but when
it comes to showing it, you escape with the same talk.
We are waiting for your report about your homework.

Here is a very simple FIFO queue:
__________________________________________________ _____________________
#include <cstdio>
#include <deque>
#include <pthread.h>
class mutex {
friend class cond;
pthread_mutex_t m_mtx;

public:
class guard {
friend class cond;
mutex& m_mtx;
public:
guard(mutex& mtx) : m_mtx(mtx) { m_mtx.lock(); }
~guard() throw() { m_mtx.unlock(); }
};

mutex() {
pthread_mutex_init(&m_mtx, NULL);
}

~mutex() throw() {
pthread_mutex_destroy(&m_mtx);
}

void lock() throw() {
pthread_mutex_lock(&m_mtx);
}

void unlock() throw() {
pthread_mutex_unlock(&m_mtx);
}
};
class cond {
pthread_cond_t m_cond;

public:
cond() {
pthread_cond_init(&m_cond, NULL);
}

~cond() throw() {
pthread_cond_destroy(&m_cond);
}

void wait(mutex::guard& lock) throw() {
pthread_cond_wait(&m_cond, &lock.m_mtx.m_mtx);
}

void signal() throw() {
pthread_cond_signal(&m_cond);
}

void broadcast() throw() {
pthread_cond_broadcast(&m_cond);
}
};
template<typename T>
class fifo {
std::deque<T> m_con;
mutex m_mtx;
cond m_cond;

public:
void push(T const& state) {
mutex::guard lock(m_mtx);
m_con.push_back(state);
}

T wait_pop() {
mutex::guard lock(m_mtx);
while (m_con.empty()) { m_cond.wait(lock); }
T state = m_con.front(); // copy before pop_front() destroys the element
m_con.pop_front();
return state;
}

bool try_pop(T& state) {
mutex::guard lock(m_mtx);
if (m_con.empty()) { return false; }
state = m_con.front();
m_con.pop_front();
return true;
}
};
int main() {
{
int i;
fifo<int> numbers;
for (i = 0; i < 10; ++i) {
numbers.push(i);
std::printf("Pushed %d\n", i);
}
std::puts("-----------------------------------------");
while (numbers.try_pop(i)) {
std::printf("Popped %d\n", i);
}
}

/*~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~*/
std::puts("\n\n___________________________________ _________\n\
Press <ENTERto exit...");
std::getchar();
return 0;
}

__________________________________________________ _____________________

Jun 27 '08 #112
On May 21, 9:02 pm, "Chris Thomasson" <cris...@comcast.net> wrote:
"Szabolcs Ferenczi" <szabolcs.feren...@gmail.com> wrote in message

news:29**********************************@d77g2000hsb.googlegroups.com...
On May 21, 10:47 am, James Kanze <james.ka...@gmail.com> wrote:
Besides, you still must show us how can you get elements from
the plain "completely thread safe" STL containers with
multiple consumers. (You cannot show this because you just
talk big as usual.)
Are you trying to say that you cannot use STL containers for
communications between threads?
What I am saying is that you cannot use it without any extra
synchronisation provided there are multiple producer and consumer
threads. That was my original warning what triggered your conditioned
reflex.
<quote>
Be aware that although STL is thread-safe to a certain extent, you
must wrap around the STL data structure to make a kind of a bounded
buffer out of it.
</quote>
http://groups.google.com/group/comp....50c4f92d6e0211
I use std::deque, for example,
in my message queue, and it works perfectly.
That is your homework to show a solution where two competing threads
are consuming from a "completely thread safe" std::deque.
Yes, you talk big like the Bandar-log, that you can do that---but when
it comes to show it, you escape with same talk.
We are waiting for your report about your homework.
You say:
Here is a very simple FIFO queue:
Who asked you to hack a FIFO queue here? Are you Kanze's pet or
servant? It was his homework.

Besides, you had better wait for your master's solution because according
to him, he is going to solve it just with the plain std::deque. He
claimed that he used to use it in his programs just like that because
it is "completely thread safe". The Bandar-log master will not use any
other mutex for it.

Be patient and wait for your master's solution, young friend.

Best Regards,
Szabolcs
Jun 27 '08 #113
"Szabolcs Ferenczi" <sz***************@gmail.com> wrote in message
news:f4**********************************@d1g2000hsg.googlegroups.com...
On May 21, 8:08 pm, "Chris Thomasson" <cris...@comcast.net> wrote:
[...]
I showed you one reason why it's not a good idea to spawn threads from
ctors.
You showed nothing else but that you can hack C++ even in sequential
mode so that the compiler cannot catch your misuse. That was it.
Congratulations.
The sample code showed exactly how the OP could have shot himself in the foot.
See, I used the 'call_abstract_virtual_indirect()' function to make the
compiler shut up because it's directly analogous to calling
'pthread_create()' and providing it with an entry function which calls the
virtual function used to represent the thread entry for the derived class.
The OP mentioned that he wanted to use a threading base class. I warned him
that starting threads in ctors can be dangerous because of a race-condition.
On the other hand, you basically told him that threads in ctors are fine. You
totally forgot to mention any of the caveats. Why did you do that?

I could show you some others, but I suspect that you already know them. See,
I think you know that starting threads from ctors is "generally" a bad idea,
and you're just doing a little trolling for some reason. Oh well.
I talked about disciplined use. Of course, that is not for hackers like
you.
:^/

Hackers also do not know that there are computational models where
objects and processes are unified into active objects. And that
exactly means that the object starts its activity right after
initialising its internal state.
Right. It starts the thread __after__ initializing its internal state. An
object's state is not fully initialized until it has been completely
constructed. This means that an object is not technically initialized until
its constructor has returned from the point of invocation; IMHO,
this is how C++ operates.

If you create the thread in the ctor that means that it makes up part of its
initialization procedure, and it could end up operating on an object whose
ctor has not completely finished yet; this is a race-condition. Generally,
you want to pass objects to threads __after__ the initialization has been
completely fulfilled...
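For what it's worth, here is a bare-bones sketch of the race I mean
(hypothetical names, POSIX threads, not the OP's code): the base ctor
creates the thread, so the thread entry can run against an object whose
derived part has not been constructed yet:
__________________________________________________________________
#include <pthread.h>
#include <cstdio>

class thread_base {
public:
    // The thread is created while only the base sub-object exists.
    thread_base() { pthread_create(&m_tid, NULL, &trampoline, this); }
    virtual ~thread_base() { pthread_join(m_tid, NULL); }
private:
    static void* trampoline(void* self) {
        // If this runs before the derived ctor has finished, the call
        // below races with construction: it can hit the (pure) base
        // version or observe half-initialized members.
        static_cast<thread_base*>(self)->on_entry();
        return NULL;
    }
    virtual void on_entry() = 0;
    pthread_t m_tid;
};

class worker : public thread_base {
    int m_value;                       // may not be set when on_entry runs
public:
    worker() : m_value(42) {}
private:
    void on_entry() { std::printf("%d\n", m_value); }  // timing-dependent
};

int main() {
    worker w;   // whether on_entry() sees 42 is a race; the mirror-image
    return 0;   // problem exists at destruction time as well
}
__________________________________________________________________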

So it is not at all new concept. Ok,
you and your friends cannot know that due to lack of education in
concurrent programming.
Why do you need to start threads in ctors? That can be dangerous. However,
if you're careful, it can be done. Here is a simple example:

<pseudo-code sketch>
__________________________________________________ ____________
class thread_base {
virtual void on_entry() = 0;

public:
void start() { ... };
void join() { ... };
};
class active_object : public thread_base {
public:
active_object() {
// construct
}

private:
void on_entry() {
// running
}

virtual void on_object_state_shift() = 0;
};
template<typename T>
struct run {
T m_object;
run() { m_object.start(); }
~run() throw() { m_object.join(); }
};
class my_object : public active_object {
void on_object_state_shift() {
// whatever...
}
};
int main() {
{
run<my_object> mobj;
}
return 0;
}
-- or --
int main() {
{
my_object mobj;
mobj.start();
mobj.join();
}
return 0;
}
__________________________________________________ ____________


This works fairly well because the run<T> template has nothing to do with
any active object state. It only ensures that the object has been fully
constructed __before__ the thread which drives it has been created...
Any thoughts?

Jun 27 '08 #114
"Szabolcs Ferenczi" <sz***************@gmail.com> wrote in message
news:da**********************************@d77g2000hsb.googlegroups.com...
On May 21, 9:02 pm, "Chris Thomasson" <cris...@comcast.net> wrote:
[...]
You say:
Here is a very simple FIFO queue:
Who asked you to hack a FIFO queue here? Are you Kanze's pet or
servant? It was his homework.
I am quite sure that James can create trivial code like that.

Besides, you had better wait for your master's solution because according
to him, he is going to solve it just with the plain std::deque. He
claimed that he used to use it in his programs just like that because
it is "completely thread safe". The Bandar-log master will not use any
other mutex for it.
Be patient and wait for your master's solution, young friend.
Why do you think he is my master?

Jun 27 '08 #115
On May 21, 10:22 pm, "Chris Thomasson" <cris...@comcast.net> wrote:
"Szabolcs Ferenczi" <szabolcs.feren...@gmail.com> wrote in message

news:da**********************************@d77g2000hsb.googlegroups.com...
On May 21, 9:02 pm, "Chris Thomasson" <cris...@comcast.net> wrote:
[...]
You say:
Here is a very simple FIFO queue:
Who asked you to hack a FIFO queue here? Are you Kanze's pet or
servant? It was his homework.

I am quite sure that James can create trivial code like that.
I doubt that. So far he proved the opposite. He just talks big.
Besides, you had better wait for your master's solution because according
to him, he is going to solve it just with the plain std::deque. He
claimed that he used to use it in his programs just like that because
it is "completely thread safe". The Bandar-log master will not use any
other mutex for it.
Be patient and wait for your master's solution, young friend.

Why do you think he is my master?
You do his job. You work for him. It is so simple.

Best Regards,
Szabolcs
Jun 27 '08 #116
On May 21, 11:55, Szabolcs Ferenczi <szabolcs.feren...@gmail.com>
wrote:
On May 21, 10:47 am, James Kanze <james.ka...@gmail.com> wrote:
Besides, you still must show us how you can get elements from
the plain "completely thread safe" STL containers with
multiple consumers. (You cannot show this because you just
talk big as usual.)
Are you trying to say that you cannot use STL containers for
communications between threads?
What I am saying is that you cannot use it without any extra
synchronisation provided there are multiple producer and
consumer threads.
And?

The SGI implementation gives a contract concerning how you must
use them in a multi-threaded environment. You follow the
contract, and there should be no problems. You violate the
contract, and who knows. That's no different from anything
else.

In practice, of course, there's no sense in offering any other
contract, given the interface of the STL. Things like
operator[] and * on an iterator return references. So you can't
offer anything more than the basic contract that is present for
individual instances of the contained object. In the case of
the SGI implementation, this corresponds to the Posix contract.

And while I'm not saying that Posix is perfect, I would consider
its use of language "standard". And according to Posix, the SGI
contract is thread-safety.
That was my original warning, which triggered your conditioned
reflex.
You didn't word it like that. If you had, I'd have agreed.

Let's be very clear: thread safety is a question of contract. A
component is thread safe *if* it specifies exactly what the
contract is in the presence of threads. We can argue the issue
a little, if the contract starts offering fewer guarantees than
Posix, say (e.g. accessing two different objects requires
external synchronization), but there are limits. The STL
doesn't claim to provide synchronization; calling
deque<>::front() will *not* suspend your thread until there is
something in the queue. But this just seems so normal to me,
for a *container*, that I can't imagine anyone thinking
otherwise.

And for what it's worth, the message queue I use between
processes is based on std::deque. And it's now being used in a
number of different applications, with no problems. Just as
obviously, there's some other code around it, to ensure
synchronization; this code does more than just protect accesses
to the queue, it suspends a thread if the queue is empty, etc.
(For the record, it uses both a pthread_cond_t and a
pthread_mutex_t.)
<quote>
Be aware that although STL is thread-safe to a certain extent, you
must wrap around the STL data structure to make a kind of a bounded
buffer out of it.
</quote>
http://groups.google.com/group/comp....50c4f92d6e0211
I use std::deque, for example,
in my message queue, and it works perfectly.
That is your homework to show a solution where two competing
threads are consuming from a "completely thread safe"
std::deque.
For various reasons, the actual code contains a lot of
"irrelevant" additions (templates, etc.), but it's really just
the classical implementation of a message queue under Posix,
with std::deque serving as the queuing mechanism.
Yes, you talk big like the Bandar-log, that you can do
that---but when it comes to showing it, you escape with the same
talk.
We are waiting for your report about your homework.
You're not my teacher. Since it's a classical implementation (I
think there's a similar example in the Butenhof), it seems like
a waste of time to post it here. The only really original thing
in it is the fact that I use std::auto_ptr in the interface, not
so much to manage memory as to ensure that once a thread has
passed an object off to the queue, it cannot access it any more.
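Roughly, and only as a sketch of the shape of the thing (made-up names,
not the actual code), it amounts to something like:
__________________________________________________________________
#include <deque>
#include <memory>
#include <pthread.h>

// Classical Posix message queue: std::deque does the queuing, a
// mutex/condition pair does the synchronization, and std::auto_ptr in
// the interface transfers ownership of each message into the queue
// (and later out to the receiving thread).
template <typename T>
class MessageQueue {
public:
    MessageQueue() {
        pthread_mutex_init(&m_mutex, NULL);
        pthread_cond_init(&m_cond, NULL);
    }
    ~MessageQueue() {
        pthread_cond_destroy(&m_cond);
        pthread_mutex_destroy(&m_mutex);
    }

    void send(std::auto_ptr<T> message) {      // sender gives up access
        pthread_mutex_lock(&m_mutex);
        m_queue.push_back(message.release());
        pthread_cond_signal(&m_cond);
        pthread_mutex_unlock(&m_mutex);
    }

    std::auto_ptr<T> receive() {               // blocks until data arrives
        pthread_mutex_lock(&m_mutex);
        while (m_queue.empty()) {
            pthread_cond_wait(&m_cond, &m_mutex);
        }
        T* raw = m_queue.front();
        m_queue.pop_front();
        pthread_mutex_unlock(&m_mutex);
        return std::auto_ptr<T>(raw);
    }

private:
    std::deque<T*> m_queue;                    // protected by m_mutex
    pthread_mutex_t m_mutex;
    pthread_cond_t  m_cond;
};
__________________________________________________________________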

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #117
On May 21, 10:19 pm, "Chris Thomasson" <cris...@comcast.net> wrote:
"Szabolcs Ferenczi" <szabolcs.feren...@gmail.com> wrote in message

news:f4**********************************@d1g2000hsg.googlegroups.com...
On May 21, 8:08 pm, "Chris Thomasson" <cris...@comcast.net> wrote:
[...]
Why do you need to start threads in ctors? That can be dangerous. However,
if you're careful, it can be done. Here is a simple example:

<pseudo-code sketch>
__________________________________________________ ____________
class thread_base {
  virtual void on_entry() = 0;

public:
  void start() { ... };
  void join() { ... };

};

class active_object : public thread_base {
public:
  active_object() {
    // construct
  }

private:
  void on_entry() {
    // running
  }

  virtual void on_object_state_shift() = 0;

};

template<typename T>
struct run {
  T m_object;
  run() { m_object.start(); }
  ~run() throw() { m_object.join(); }

};

class my_object : public active_object {
  void on_object_state_shift() {
    // whatever...
  }

};

int main() {
  {
    run<my_object> mobj;
  }
  return 0;

}
Great. It looks good.
-- or --

int main() {
  {
    my_object mobj;
    mobj.start();
    mobj.join();
  }
  return 0;
}

__________________________________________________ ____________

This works fairly well because the run<T> template has nothing to do with
any active object state. It only ensures that the object has been fully
constructed __before__ the thread which drives it has been created...

Any thoughts?
So far so good. I still have to check it out but this is something I
was looking for. If we replace the term `run' with `active' or
something of a more meaningful term, we are done.

I think this construction could be considered by the committee of
C++0x as well.

Well done.

Best Regards,
Szabolcs
Jun 27 '08 #118
On May 21, 21:23, Szabolcs Ferenczi <szabolcs.feren...@gmail.com>
wrote:
On May 21, 9:02 pm, "Chris Thomasson" <cris...@comcast.net> wrote:
Who asked you to hack a FIFO queue here? Are you Kanze's pet or
servant? It was his homework.
I'm sorry, but you are not my teacher, and you don't give me
homework. The work I do belongs to the people who pay me.
Technically, I'd need their authorization before I could post it
here. Practically, the code in question is small enough and
simple enough---and independent of our application domain---so I
doubt they'd care. But I'm certainly not going to post it just
because you want to play the teacher. (What is it they say:
those that can, do. Those that can't, teach. With my excuses
to the many competent teachers I've known.)
Besides you better wait for your master's solution
What's this business about "master"? What kind of social
relationships do you have, that people are masters and slaves?

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #119
On May 21, 19:52, c...@mailvault.com wrote:
On May 21, 6:32 am, Pete Becker <p...@versatilecoding.com> wrote:
On 2008-05-21 05:18:56 -0400, James Kanze <james.ka...@gmail.com> said:
That's not really the point (although it certainly would be in
some applications). The point is that this idea was put forward
many, many years ago; it works well when you're dealing with
simple objects, like int's, but it doesn't work when you start
dealing with sets of objects grouped into transactions.
Ensuring transactional integrity in a multi-threaded
environment, without deadlocks, still requires manually managing
locking and unlocking---even scoped locking doesn't really work
here. (You can implement transactions with scoped locking, but
only if you only handle one transaction at a time. In which
case, there's really no point in being multithreaded.)
To put it a little more abstractly: ensuring data integrity in a
multi-threaded application requires an application-level solution. A
library or language can provide tools to make this easier, but they
cannot solve the problem.
I agree with the gist of that, but think Kanze put it well enough.
Maybe, but Pete said it a lot clearer, in a lot fewer words.
It's an application level problem, in the end, and all the
library, the language (or the OS, or anything else) can provide
are tools.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #120
On May 21, 10:53 pm, James Kanze <james.ka...@gmail.com> wrote:
On May 21, 21:23, Szabolcs Ferenczi <szabolcs.feren...@gmail.com>
wrote:
On May 21, 9:02 pm, "Chris Thomasson" <cris...@comcast.net> wrote:
Who asked you to hack a FIFO queue here? Are you Kanze's pet or
servant? It was his homework.

I'm sorry, but you are not my teacher,
Right. You can be sorry about it, since if I were your teacher you
would have been properly educated in concurrent programming.
and you don't give me
homework.
I did. I just made use of your bad habit of talking big and caught
you.
> The work I do belongs to the people who pay me.
It is not a big deal. Just show how you do what you claimed to do.
Technically, I'd need their authorization before I could post it
here.
It is an escape. Any one of us could easily make a reduced version of
any interesting part of the code base so that no one could identify
the source. But the one who just talks big cannot do that for some
strange reason.
> Practically, the code in question is small enough and
simple enough---and independent of our application domain---so I
doubt they'd care.
Then why don't you do it? Yes, it is a simple problem, you could type
it just like that. Of course, if you were not just a talking guy.
> But I'm certainly not going to post it just
because you want to play the teacher.
That is the escape route of the incapable. Just like the Bandar-log.
Congratulations. You have presented the proof well enough.

Best Regards,
Szabolcs
Jun 27 '08 #121
"Szabolcs Ferenczi" <sz***************@gmail.com> wrote in message
news:f5**********************************@e39g2000hsf.googlegroups.com...
On May 21, 10:19 pm, "Chris Thomasson" <cris...@comcast.net> wrote:
"Szabolcs Ferenczi" <szabolcs.feren...@gmail.com> wrote in message

news:f4**********************************@d1g2000hsg.googlegroups.com...
On May 21, 8:08 pm, "Chris Thomasson" <cris...@comcast.net> wrote:
[...]
Why do you need to start threads in ctors? That can be dangerous.
However,
if you're careful, it can be done. Here is a simple example:

<pseudo-code sketch>
__________________________________________________ ____________
[...]
Great. It looks good.
Thanks.

[...]
__________________________________________________ ____________

This works fairly well because the run<T> template has nothing to do
with
any active object state. It only ensures that the object has been fully
constructed __before__ the thread which drives it has been created...

Any thoughts?
So far so good. I still have to check it out but this is something I
was looking for. If we replace the term `run' with `active' or
something of a more meaningful term, we are done.
Indeed. I think `active' is a good choice. After that change you could do
stuff like:

class reader : public thread_base { ... };

I think this construction could be considered by the committee of
C++0x as well.
Humm. I am not sure if anything exactly like it was proposed. I am most likely
wrong and am sure somebody can clear this up.
Well done.
:^)

Jun 27 '08 #122
Whoops! sorry for the last post, I sent it prematurely. :^(
"Chris Thomasson" <cr*****@comcast.net> wrote in message
news:OP******************************@comcast.com...
"Szabolcs Ferenczi" <sz***************@gmail.com> wrote in message
news:f5**********************************@e39g2000hsf.googlegroups.com...
[...]
This works fairly well because the run<T> template has nothing to do
with
any active object state. It only ensures that the object has been fully
constructed __before__ the thread which drives it has been created...

Any thoughts?
>So far so good. I still have to check it out but this is something I
was looking for. If we replace the term `run' with `active' or
something of a more meaningful term, we are done.

Indeed. I think `active' is a good choice. After that change you could do
stuff like:

class reader : public thread_base { ... };

what do you think about this tweak:
__________________________________________________ __________________
namespace active {
class base {
virtual void on_active() = 0;
public:
void activate() { ... };
void wait() { ... };
};
template<typename T>
class object {
T& m_objref;

public:
active(T& objref) : m_objref(objref) {
m_objref.activate();
}

~active() throw() { m_objref.wait(); }

T& get() { return m_objref; }
T const& get() const { return m_objref; }
};
}

/* user code */
class foo : public active::base {
void on_active() { [...]; }
};
int main() {
{ active::object<foo> this_foo; }
return 0;
}
__________________________________________________ __________________


[...]

Jun 27 '08 #123

"Chris Thomasson" <cr*****@comcast.net> wrote in message
news:up******************************@comcast.com...
Whoops! sorry for the last post, I sent it prematurely. :^(
"Chris Thomasson" <cr*****@comcast.net> wrote in message
news:OP******************************@comcast.com...
> "Szabolcs Ferenczi" <sz***************@gmail.com> wrote in message
news:f5**********************************@e39g2000hsf.googlegroups.com...
[...]
> This works fairly well because the run<T> template has nothing to do
with
any active object state. It only ensures that the object has been
fully
constructed __before__ the thread which drives it has been created...

Any thoughts?
>>So far so good. I still have to check it out but this is something I
was looking for. If we replace the term `run' with `active' or
something of a more meaningful term, we are done.

Indeed. I think `active' is a good choice. After that change you could do
stuff like:

class reader : public thread_base { ... };


what do you think about this tweak:
__________________________________________________ __________________
namespace active {
class base {
virtual void on_active() = 0;
public:
void activate() { ... };
void wait() { ... };
};
template<typename T>
class object {
T& m_objref;

public:
active(T& objref) : m_objref(objref) {
m_objref.activate();
}

~active() throw() { m_objref.wait(); }
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^
ARGH!! Fuc%ing typo!

template<typename T>
class object {
T& m_objref;

public:
object(T& objref) : m_objref(objref) {
m_objref.activate();
}

~object() throw() { m_objref.wait(); }

T& get() { return m_objref; }
T const& get() const { return m_objref; }
};

of course!

Sorry for that non-sense.
;^(...

Jun 27 '08 #124

"Chris Thomasson" <cr*****@comcast.net> wrote in message
news:up******************************@comcast.com...
[...]
/* user code */
class foo : public active::base {
void on_active() { [...]; }
};
int main() {
{ active::object<foo> this_foo; }
return 0;
}
__________________________________________________ __________________

of course the above is wrong. This is what I get for coding in the
newsreader!
int main() {
{
foo this_foo;
active::object<foo> this_active_foo(this_foo);
}
return 0;
}

Humm...

Jun 27 '08 #125

"darren" <mi******@gmail.com> wrote in message
news:70**********************************@c19g2000prf.googlegroups.com...
Hi

I have to write a multi-threaded program. I decided to take an OO
approach to it. I had the idea to wrap up all of the thread functions
in a mix-in class called Threadable. Then when an object should run
in its own thread, it should implement this mix-in class. Does this
sound like a plausible design decision?

I'm surprised that C++ doesn't have such functionality, say in its
STL. This absence of a thread/object relationship in C++ leads me to
believe that my idea isn't a very good one.

I would appreciate your insights. thanks

Here is a helper object you can use to run objects that provide a specific
interface (e.g., foo::start/join):
__________________________________________________ _______________
template<typename T>
struct active {
class guard {
T& m_object;

public:
guard(T& object) : m_object(object) {
m_object.start();
}

~guard() {
m_object.join();
}
};

T m_object;

active() {
m_object.start();
}

~active() {
m_object.join();
}
};
__________________________________________________ _______________

To create and start an object in one step you do:
{
active<foo> _foo;
}

or you can separate the process and intervene in between the object
construction and activation like:
{
foo _foo;
[...];
active<foo>::guard active_foo(_foo);
}


However, other than perhaps some syntactic-sugar, I don't think that this
gives any advantages over Boost's method of representing and creating
threads...
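For reference, a rough sketch of the Boost style I have in mind
(`worker' here is just a made-up callable, not code from this thread):
__________________________________________________________________
#include <boost/thread.hpp>
#include <cstdio>

// Boost.Thread represents a thread as an object constructed from any
// callable; no base class or mix-in is required.
struct worker {
    void operator()() const {
        std::puts("running in its own thread");
    }
};

int main() {
    boost::thread t((worker()));  // extra parens avoid the vexing parse
    t.join();                     // wait for the thread to finish
    return 0;
}
__________________________________________________________________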


Any thoughts?

Jun 27 '08 #126

"Szabolcs Ferenczi" <sz***************@gmail.com> wrote in message
news:b8**********************************@k30g2000hse.googlegroups.com...

> Right. You can be sorry about it, since if I were your teacher you
would have been properly educated in concurrent programming.
I'm really excited by everything you have said in this discussion and I
would really like to learn more.
It sounds like you are a University professor. I would love to take one of
your courses.

Where do you teach?

regards
Andy Little
Jun 27 '08 #127
"kwikius" <an**@servocomm.freeserve.co.uk> wrote in message
news:48**********@mk-nntp-2.news.uk.tiscali.com...
>
"Szabolcs Ferenczi" <sz***************@gmail.com> wrote in message
news:b8**********************************@k30g2000hse.googlegroups.com...

>>Right. You can be sorry about it, since if I were your teacher you
would have been properly educated in concurrent programming.

I'm really excited by everything you have said in this discussion and I
would really like to learn more.
It sounds like you are a University professor. I would love to take one
of your courses.

Where do you teach?
Well, I don't know of any University Professor who refers to David Butenhof
as a fool. Perhaps Szabolcs had a bad day. Anyway, if you take his class, be
sure that you can handle direct criticism; you will excel if criticism
happens to be a motivating factor within the realm of your personality.
Unfortunately, if you try and inform/teach the Professor, well, be prepared
to be sarcastically insulted and shot right out of the sky; you will end up
in flames indeed.

IMVHO, if you're interested in learning about POSIX Threads, then David
Butenhof is one of the best resources to reap high quality information from.
If you ask a fairly intriguing question on PThreads over in
`comp.programming.threads', well, you just might hear from him!

;^)

Jun 27 '08 #128
"Szabolcs Ferenczi" <sz***************@gmail.com> wrote in message
news:12547b1f-aec7-4376-b6***************@l42g2000hsc.googlegroups.com...
On May 21, 10:22 pm, "Chris Thomasson" <cris...@comcast.net> wrote:
"Szabolcs Ferenczi" <szabolcs.feren...@gmail.com> wrote in message

news:da**********************************@d77g2000hsb.googlegroups.com...
On May 21, 9:02 pm, "Chris Thomasson" <cris...@comcast.net> wrote:
[...]
You say:
Here is a very simple FIFO queue:
Who asked you to hack a FIFO queue here? Are you Kanze's pet or
servant? It was his homework.
I am quite sure that James can create trivial code like that.
I doubt that. So far he proved the opposite. He just talks big.
Besides, you had better wait for your master's solution because according
to him, he is going to solve it just with the plain std::deque. He
claimed that he used to use it in his programs just like that because
it is "completely thread safe". The Bandar-log master will not use any
other mutex for it.
Be patient and wait for your master's solution, young friend.
Why do you think he is my master?
You do his job. You work for him. It is so simple.
That highly trivial and crude code sample took me about 2-4 minutes to type
out, compile and run a couple of times. I would NOT even refer to it as
literal "work" in any sense of the term. Perhaps James did not want to waste
minutes of his precious time. I charge a per-hour minimum... Even seconds
worth of work == 1 hour pay. What do you charge for multi-threading
consultation per-hour Szabolcs?

Jun 27 '08 #129
"Ian Collins" <ia******@hotmail.com> wrote in message
news:69*************@mid.individual.net...
Chris Thomasson wrote:
>>
Well, at this point, IMVHO, Szabolcs has rendered himself into a
complete troll with no intention of learning anything.

Yet we still keep feeding him...
IMVHO, it kind of seems like Szabolcs has a communication problem of sorts.
He is condescending and highly satirical to any type of opposition to say
the least. The thing that bothers me the most is that he acts that way to
known experts; Oh well.

Given that, perhaps I am suffering from a major communication problem as
well... Humm, I do respond to him. Well, perhaps I just might be helping
somebody in doing so. Who knows...

:^o

>He even called
David Butenhof a fool for patiently trying to explain to him that
condvars can be used to signal about events that result from state
mutations.

Now that was a classic. The put down was even better:

http://tinyurl.com/5wk4l6

I should have learned my lesson there.
Well, even David feels the need to respond every now and then:

http://groups.google.com/group/comp....fee0d739699592
(refer to very last sentence...)

Just to inform others at the expense of possibly stuffing the trolls face...
Well, IMHO, the moral of the story is that: Shi% Happens...

Jun 27 '08 #130
Chris Thomasson wrote:
"kwikius" <an**@servocomm.freeserve.co.uk> wrote in message
news:48**********@mk-nntp-2.news.uk.tiscali.com...
>>
"Szabolcs Ferenczi" <sz***************@gmail.com> wrote in message
news:b8**********************************@k30g2000hse.googlegroups.com...

>>Right. You can be sorry about it, since if I were your teacher you
would have been properly educated in concurrent programming.

I'm really excited by everything you have said in this discussion and
I would really like to learn more.
It sounds like you are a University professor. I would love to take
one of your courses.

Where do you teach?

Well, I don't know of any University Professor who refers to David
Butenhof as a fool.
Did I just see some sarcasm flying over your head Chris?

--
Ian Collins.
Jun 27 '08 #131
"Ian Collins" <ia******@hotmail.com> wrote in message
news:69*************@mid.individual.net...
Chris Thomasson wrote:
> "kwikius" <an**@servocomm.freeserve.co.uk> wrote in message
news:48**********@mk-nntp-2.news.uk.tiscali.com...
>>>
"Szabolcs Ferenczi" <sz***************@gmail.com> wrote in message
news:b8**********************************@k30g2000hse.googlegroups.com...
Right. You can be sorry about it, since if I were your teacher you
would have been properly educated in concurrent programming.

I'm really excited by everything you have said in this discussion and
I would really like to learn more.
It sounds like you are a University professor. I would love to take
one of your courses.

Where do you teach?

Well, I don't know of any University Professor who refers to David
Butenhof as a fool.

Did I just see some sarcasm flying over your head Chris?
OUCH!

Jun 27 '08 #132
Szabolcs Ferenczi wrote:
On May 21, 6:41 am, "Chris Thomasson" <cris...@comcast.net> wrote:
>"Szabolcs Ferenczi" <szabolcs.feren...@gmail.com> wrote in message

news:ab**********************************@56g2000hsm.googlegroups.com...
On May 19, 6:42 am, Rolf Magnus <ramag...@t-online.de> wrote:
Yes. Copying a process with fork() is very fast. You don't get much
of a performance boost from using threads.
Did you ever hear about heavy weight and light weight processes? I am
just curious.

Name a platform?

I will not name any platform since I did not claim that "fork() is
very fast".
Well, it seems you wanted to imply something with your posting. Otherwise,
it's about as useful for this thread as a question like "what did you have
to eat today?"
Please ask the one who claimed it. Besides, you can guess the platform.
The platform I'm using is Linux. But I think it's not so uncommon for
Unix-like systems to have a very efficient fork() implementation.
I have concluded already that some of you have a well-developed
conditioned reflex which is so acute that you tend to address me even
if I did not claim anything.
Why did you write your posting then? Anyway, the answer to your question
is "yes", if that helps.

Jun 27 '08 #133
On May 22, 1:55 am, "Chris Thomasson" <cris...@comcast.net> wrote:
"darren" <minof...@gmail.com> wrote in message

news:70**********************************@c19g2000prf.googlegroups.com...
Hi
I have to write a multi-threaded program. I decided to take an OO
approach to it. *I had the idea to wrap up all of the thread functions
in a mix-in class called Threadable. *Then when an object should run
in *its own thread, it should implement this mix-in class. *Does this
sound like plausible design decision?
I'm surprised that C++ doesn't have such functionality, say in its
STL. *This absence of a thread/object relationship in C++ leads me to
believe that my idea isn't a very good one.
I would appreciate your insights. *thanks

Here is a helper object you can use to run objects that provide a specific
interface (e.g., foo::start/join):
__________________________________________________ _______________
template<typename T>
struct active {
  // RAII guard for an externally constructed object: starts it on entry
  // and joins it on scope exit.
  class guard {
    T& m_object;

  public:
    guard(T& object) : m_object(object) {
      m_object.start();
    }

    ~guard() {
      m_object.join();
    }
  };

  T m_object;

  // Owning variant: constructs the object, starts it, and joins it when
  // the active<> wrapper goes out of scope.
  active() {
    m_object.start();
  }

  ~active() {
    m_object.join();
  }
};

__________________________________________________ _______________

To create and start an object in one step you do:

{
  active<foo> _foo;

}

or you can separate the process and intervene in between the object
construction and activation like:

{
  foo _foo;
  [...];
  active<foo>::guard active_foo(_foo);

}

However, other than perhaps some syntactic-sugar, I don't think that this
gives any advantages over Boost's method of representing and creating
threads...

Any thoughts?
I do think it has advantages over Boost's method. One such advantage
is the RAII nature of it.

Furthermore, I think it should be taken into the C++0x standard on
similar grounds to those on which they provide a higher-level condition
variable wait API as well:

<quote>
template <class Predicate>
void wait(unique_lock<mutex>& lock, Predicate pred);
Effects:
As if:
while (!pred())
wait(lock);
</quote>
http://www.open-std.org/jtc1/sc22/wg...008/n2497.html
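For illustration, here is a minimal sketch of how that predicate overload
reads at the call site, assuming the C++0x names land as drafted in n2497
(the message_queue class itself is just an invented example):

#include <condition_variable>
#include <deque>
#include <mutex>

template<typename T>
class message_queue {
  std::mutex m_mutex;
  std::condition_variable m_cond;
  std::deque<T> m_items;

public:
  void push(const T& item) {
    {
      std::lock_guard<std::mutex> lock(m_mutex);
      m_items.push_back(item);
    }
    m_cond.notify_one();
  }

  T pop() {
    std::unique_lock<std::mutex> lock(m_mutex);
    // Equivalent to: while (m_items.empty()) m_cond.wait(lock);
    // so spurious wakeups are harmless.
    m_cond.wait(lock, [this] { return !m_items.empty(); });
    T item = m_items.front();
    m_items.pop_front();
    return item;
  }
};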

However, one further improvement would be nice, and would make it quite a
general solution: being able to denote which method of the object one
wants to start as a process (see the sketch below).
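A rough sketch of what that could look like with a pointer-to-member and
the proposed std::thread (the runner helper and its names are invented
here for illustration only; they are not part of any proposal):

#include <thread>  // assuming the C++0x thread proposal

// Runs a chosen member function of an existing object on its own thread
// and joins it when the runner goes out of scope.
template<typename T>
class runner {
  std::thread m_thread;

public:
  runner(T& object, void (T::*method)())
    : m_thread(method, &object) {}

  ~runner() { m_thread.join(); }
};

// Usage:
//   Consumer c(buffer);
//   runner<Consumer> r(c, &Consumer::run);  // c.run() executes in its own thread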

Best Regards,
Szabolcs
Jun 27 '08 #134

"Ian Collins" <ia******@hotmail.comwrote in message
news:69*************@mid.individual.net...
Chris Thomasson wrote:
>"kwikius" <an**@servocomm.freeserve.co.ukwrote in message
news:48**********@mk-nntp-2.news.uk.tiscali.com...
>>>
"Szabolcs Ferenczi" <sz***************@gmail.comwrote in message
news:b8**********************************@k30g20 00hse.googlegroups.com...
Right. You can be sorry about it, since if I were your teacher you
would have been properly educated in concurrent programming.

I'm really excited by everything you have said in this discussion and
I would really like to learn more.
It sounds like you are a University professor. I would love to take
one of your courses.

Where do you teach?

Well, I don't know of any University Professor who refers to David
Butenhof as a fool.

Did I just see some sarcasm flying over your head Chris?
;-)

regards
Andy Little
Jun 27 '08 #135

"Chris Thomasson" <cr*****@comcast.netwrote in message
news:kv******************************@comcast.com. ..
"Ian Collins" <ia******@hotmail.comwrote in message
news:69*************@mid.individual.net...
>Chris Thomasson wrote:
>>"kwikius" <an**@servocomm.freeserve.co.ukwrote in message
news:48**********@mk-nntp-2.news.uk.tiscali.com...

"Szabolcs Ferenczi" <sz***************@gmail.comwrote in message
news:b8**********************************@k30g2 000hse.googlegroups.com...
Right. You can be sorry about it, since if I were your teacher you
would have been properly educated in concurrent programming.

I'm really excited by everything you have said in this discussion and
I would really like to learn more.
It sounds like you are a University professor. I would love to take
one of your courses.

Where do you teach?

Well, I don't know of any University Professor who refers to David
Butenhof as a fool.

Did I just see some sarcasm flying over your head Chris?

OUCH!
Well. There was a kind of insane hope there that Szabolcs had some solid
body of work expressing his own practices in the field of concurrent
programming.

Threading is a solution to a hardware architecture problem. (It's easy for me
to say :-)) C++ is very much locked in to the traditional architecture and
was designed around sequential, "step by step" programming. Threading seems
to work OK in practice, but also seems difficult to theorise about, and that's
a big problem. (I may be wrong, but as I understand it there is no formal
means to describe threads. I think the term is a "calculus"?)

IOW threading in C++ is a very difficult problem, but there has been, it
seems, a huge amount of work done so far in trying to get "something" ready
for the next version of the standard. And boy oh boy, whatever is delivered,
you can bet it's going to be far from perfect.

Really, AFAIK the best way forward for anyone who has serious issues with the
current work on threading in C++ is to write a paper and submit it to the
standards committee explaining their concerns and, better, providing well
thought out solutions. (Won't be me.)

http://www.open-std.org/jtc1/sc22/wg21/

Bear in mind though that it is, I think, quite late in the day to have much
impact.

Anyway, for myself I find this whole hardware architecture issue
interesting. It's way O.T. but I love this talk:

http://www.youtube.com/watch?v=_ILu5SMis9E

(Apologies I already put this link in another thread, but I Love this!)

I have already started trying to figure out how to tackle writing software
on a beast such as described, and what your primitives would be. Anyway, C++
seems to be as good a language as any to write my sim, but no threads in
sight in the implementation!

regards
Andy Little

Jun 27 '08 #136
On May 22, 12:31 pm, "Alf P. Steinbach" <al...@start.no> wrote:
kwikius:
(I may be wrong but as I understand it there is no formal
means to describe threads. think the term is a "calculus"?)

I think there must be by now. E.g. Hoare introduced a lot of notation and ways
to reason formally about these things in "Communicating sequential processes",
many tens of years ago. But he didn't there address differences between threads
and heavy-weight processes (his processes appear more like threads, sharing
memory, than what we currently mean by "process" in the context of OS).
Be aware that CSP (Communicating Sequential Processes) is not a shared
memory based model. Processes in CSP communicate by messages and
messages are handled with an adapted form of Guarded Commands. One unit
in CSP is a sequential program, hence the name. Besides, there is a
software realisation of CSP called OCCAM and there is a hardware
realisation of it called the Transputer.

The model that works with shared memory as well is Distributed
Processes. A Distributed Process is a combination of a monitor and a
process, but Distributed Processes communicate with each other
via a remote procedure call-like mechanism. DP also applies Guarded
Commands. Hence one unit in DP is a combination of a shared object and a
process.

We have just taken a step towards DP with unifying objects and
processes
http://groups.google.com/group/comp....c14991f9e88482

Also note that the term process was introduced for the abstract
concept of a thread of computation (or virtual processor) irrespective
of whether it is a heavy-weight process with its own address space and
resources or whether it is a light-weight one with its shared address
space and resources.

Best Regards,
Szabolcs
Jun 27 '08 #137

"Alf P. Steinbach" <al***@start.nowrote in message
news:7b******************************@posted.comne t...
>* kwikius:
>(I may be wrong but as I understand it there is no formal means to
describe threads. think the term is a "calculus"?)

I think there must be by now. E.g. Hoare introduced a lot of notation and
ways to reason formally about these things in "Communicating sequential
processes", many tens of years ago. But he didn't there address
differences between threads and heavy-weight processes (his processes
appear more like threads, sharing memory, than what we currently mean by
"process" in the context of OS).
Well I managed to track what I think is the latest version down:

http://www.usingcsp.com/cspbook.pdf

(I avoided most of the technical stuff, but the pictures were nice :-) )

One passage was interesting.

P 209 section 7.2

"... In its full generality, multithreading is an incredibly complex and
errorprone
technique, not to be recommended in any but the smallest programs.
In excuse, we may plead that it was invented before the days of structured
programming, when even FORTRAN was still considered to be a high-level
programming
language!"

I think I am quoting somewhat out of context there, and bear in mind he's
presumably a mathematician at heart, but essentially I think there are two
schools, the message passing school and the shared memory school. As I
understand it C++0x is getting the shared memory school basically because
its performance potential is much better for C++, but OTOH it doesn't lend
itself very well to formal analysis. And presumably you can build message
passing more easily anyway as it's at a higher level.

regards
Andy Little
Jun 27 '08 #138
On May 22, 8:13 pm, "kwikius" <a...@servocomm.freeserve.co.uk> wrote:
"Alf P. Steinbach" <al...@start.no> wrote in message news:7b******************************@posted.comnet...
kwikius:
(I may be wrong but as I understand it there is no formal means to
describe threads. think the term is a "calculus"?)
I think there must be by now. E.g. Hoare introduced a lot of notation and
ways to reason formally about these things in "Communicating sequential
processes", many tens of years ago. But he didn't there address
differences between threads and heavy-weight processes (his processes
appear more like threads, sharing memory, than what we currently mean by
"process" in the context of OS).

Well I managed to track what I think is the latest version down:

http://www.usingcsp.com/cspbook.pdf
That is a bit different already from the original proposal. The
original proposal introduced seven primitives to build on.
(Communicating Sequential Processes, C.A.R. Hoare. Communications of
the ACM 21(8):666-677, August 1978)
[...]
One passage was interesting.

P 209 section 7.2

"... In its full generality, multithreading is an incredibly complex and
errorprone
technique, not to be recommended in any but the smallest programs.
In excuse, we may plead that it was invented before the days of structured
programming, when even FORTRAN was still considered to be a high-level
programming
language!"

I think I am quoting somewhat out of context there [...]
Although you opened the book at the historical overview, this is our
topic here. He talks about the Unix-like fork-join style of processes.
That is what he calls multithreading. He does not mean the multi-
threading of today. Anyway, the criticism applies to today's
threading as well: the starting of processes is unstructured.

Besides, that is what we tried to fix here in this discussion thread
in case of object-oriented programs in C++.

If you read further, you can find that Hoare considers structured
parallelism. That is what can be achieved with the unified active
objects we worked out here. See one of my first messages to this
discussion thread:

<quote>
There are programming models where there are objects which are active
entities. As I earlier mentioned one early proposal of this kind is
the Distributed Processes programming concept. ...

That means that you can use configurator blocks in your program where
you declare so-called thread objects and shared objects together.

{
Buffer b;
Producer p(b);
Consumer c(b);
}

The block realises a parallel block or fork-join block, as they call
it in a trendy terminology.
</quote>
http://groups.google.com/group/comp....0fdce805e9f407

If you read even further in that book, you will find the Conditional
Critical Region language concept too, which is what I am suggesting be
adapted into C++0x.

Best Regards,
Szabolcs
Jun 27 '08 #139
In article <48**********@mk-nntp-2.news.uk.tiscali.com>,
an**@servocomm.freeserve.co.uk says...

[ ... ]
"... In its full generality, multithreading is an incredibly complex and
errorprone technique, not to be recommended in any but the smallest programs.
I think we can stop right there. For the moment, think of multithreading
only in a cooperatively multitasked, uniprocessor environment. This is
probably the most restricted case of multithreading, but even so it's
already essentially equivalent to unrestricted flow control (i.e.
spawning a thread is virtually equivalent to a goto in disguise).

Hoare was one of the early proponents of structured programming. As
such, I suspect he saw multithreading "in its full generality" as a
possible way of re-introducing the same old problems of unstructured
programming in a new guise.

I think his argument is primarily for creating a set of 'flow-control'
constructs for multithreading that give a structured view of its
capabilities, much like more conventional flow control constructs give a
structured view of single-threaded flow control capabilities.

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jun 27 '08 #140
On 2008-05-21 05:47:37, James Kanze wrote:
>SGI statement simply contradicts you: "If multiple threads access a
single container, and at least one thread may potentially write, then
THE USER IS RESPONSIBLE FOR ENSURING MUTUAL EXCLUSION BETWEEN THE
THREADS during the container accesses." It is just safe for reading, so
what I said was correct that it is thread safe to a certain extent.

That's not the only guarantee it gives. It specifies the contract that
you have to respect. In other words, it is completely thread safe.
Using Wikipedia as one of the repositories that reflect "common use", it
seems that there is at least one use of "thread safety" that contradicts
you. From <http://en.wikipedia.org/wiki/Thread-safe>: "A piece of code is
thread-safe if it functions correctly during simultaneous execution by
multiple threads." Most STL implementations need additional implementation
elements, usually what the same article refers to as "mutual exclusion", to
achieve this goal -- which makes them not thread safe in the sense of this
article. At least that's how I read it.

(Your later comments about returned references notwithstanding...)

Gerhard
Jun 27 '08 #141
On May 24, 6:55 pm, Gerhard Fiedler <geli...@gmail.com> wrote:
On 2008-05-21 05:47:37, James Kanze wrote:
SGI statement simply contradicts you: "If multiple threads
access a single container, and at least one thread may
potentially write, then THE USER IS RESPONSIBLE FOR
ENSURING MUTUAL EXCLUSION BETWEEN THE THREADS during the
container accesses." It is just safe for reading, so what I
said was correct that it is thread safe to a certain
extent.
That's not the only guarantee it gives. It specifies the
contract that you have to respect. In other words, it is
completely thread safe.
Using Wikipedia as one of the repositories that reflect
"common use",
Common use by whom? The Wikipedia isn't exactly what I would
call a reference.
it seems that there is at least one use of "thread safety"
that contradicts you.
There are a lot of people who seem to think that "thread safety"
means that the programmer doesn't have to do anything, and that
everything will just magically work. Such thread safety doesn't
exist in the real world, however; such a definition is useless;
and I don't know of any competent person who would use it.
From <http://en.wikipedia.org/wiki/Thread-safe>: "A piece of
code is thread-safe if it functions correctly during
simultaneous execution by multiple threads."
Which is rather ambiguous. What is meant by "simultaneous
execution"? The code in the allocators of an STL vector cannot
normally be executed simultaneously by multiple threads, but
certainly you wouldn't consider STL vector to not be thread safe
because it uses some sort of protection internally to ensure that
it doesn't execute simultaneously. And if you follow the rules
laid down by SGI (which are basically those of Posix), all of
the functions in their implementation work in a multithreaded
environment. (Roughly speaking, SGI uses the same definition as
Posix. Except that I don't think that any of the functions in
their implementation of the library are not thread safe, whereas
in Posix, things like localtime, etc. are not thread safe.)
Most STL implementations need additional implementation
elements, usually what the same article refers to as "mutual
exclusion", to achieve this goal -- which makes them not
thread safe in the sense of this article.
If that's what the article actually says, then it's off base.
One can talk of different degrees of thread safety (although I
don't like the expression), with different types of guarantees,
but when you say that a function is not thread-safe, at all, you
mean something like localtime, which isn't thread safe and
cannot be used at all in a multithreaded environment. And if
you call the SGI implementation of std::vector "not
thread-safe", what do you call something like localtime?

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #142
In article <fcecda6c-1268-44a5-b648-0dac8960c810
@k30g2000hse.googlegroups.com>, ja*********@gmail.com says...

[ ... ]
If that's what the article actually says, then it's off base.
One can talk of different degrees of thread safety (although I
don't like the expression), with different types of guarantees,
but when you say that a function is not thread-safe, at all, you
mean something like localtime, which isn't thread safe and
cannot be used at all in a multithreaded environment. And if
you call the SGI implementation of std::vector "not
thread-safe", what do you call something like localtime?
Umm...."incompetent"? :-)

The Posix version of localtime is completely unsafe in a multithreaded
environment. There are other multithreaded environments (e.g. Win32) in
which one can use localtime quite safely. Of course, it's thread-safe
only in a manner roughly similar to SGI's STL -- you certainly _can_
cause problems if you abuse it sufficiently.

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jun 27 '08 #143
"kwikius" <an**@servocomm.freeserve.co.ukwrote in message
news:48**********@mk-nntp-2.news.uk.tiscali.com...
>
"Alf P. Steinbach" <al***@start.nowrote in message
news:7b******************************@posted.comne t...
>>* kwikius:
>>(I may be wrong but as I understand it there is no formal means to
describe threads. think the term is a "calculus"?)

I think there must be by now. E.g. Hoare introduced a lot of notation
and ways to reason formally about these things in "Communicating
sequential processes", many tens of years ago. But he didn't there
address differences between threads and heavy-weight processes (his
processes appear more like threads, sharing memory, than what we
currently mean by "process" in the context of OS).

Well I managed to track what I think is the latest version down:

http://www.usingcsp.com/cspbook.pdf

(I avoided most of the technical stuff , but the pictures were nice :-) )

One passage was interesting.

P 209 section 7.2

"... In its full generality, multithreading is an incredibly complex and
errorprone
technique, not to be recommended in any but the smallest programs.
Multi-threading is only as error-prone as the programmer who makes use of
it... This thread deals with the topic:

http://groups.google.com/group/comp....92c5ffe9b47926

It kind of "seems" like Hoare tends to agree with the jist of the article
referenced in that thread. Basically, the author concludes that anybody who
uses shared memory multi-threading is suffering from a serious mental
condition. This conclusion is of coarse total bullshi%. Anyway...

In excuse, we may plead that it was invented before the days of structured
programming, when even FORTRAN was still considered to be a high-level
programming language!"
I think I am quoting somewhat out of context there and bear in mind he's
presumably a mathematician at heart, but essentially I think there are 2
schools, the message passing school and the shared memory school. As I
understand it C++0x is getting the shared memory school basically because
its performance potential is much better for C++, but OTOH doesnt lend
itself very well to formal analysis. And presumably you can build message
passing more easily anyway as its at a higher level.
I don't want to get into a shared memory -vs- message passing flame war, so
I will show an example of a super-computer system that can support both:

http://groups.google.com/group/comp....dbf634f491f46b

Basically, you can use advanced shared memory techniques for intra-node
communications, and message passing for inter-node communications. Anyway,
one can easily create extremely scalable message passing right on top of
shared memory. Something like:
http://groups.google.com/group/comp....1fb95b412fd95a
(follow all links!)
As far as I am concerned, creating highly-efficient distributed
message-passing on top of shm is fairly trivial. I say highly efficient
because all you need to do is create a single-producer/consumer queue which
of course does not need any interlocked RMW instructions or expensive
#StoreLoad or #LoadStore memory barrier constraints. You also need to be
familiar with (de)multi-plexing and distributing single-producer/consumer
unbounded message queues across N threads. Here is one example:
"You have to use distributed model. For instance, a
multiple-producer/single-consumer model in vZOOM can consist of per-producer
queue which are registered with a single consumer. This model is 100%
compatible with. You can transform that into a
multiple-producer/multiple-consumer model by hashing a pointer to a producer
thread into an index of consumer threads. The producer uses this index to
know what consumer to register its spsc queue with. All of the described
models can use the ultra-low overhead vZOOM unbounded wait-free queues. This
is ideal for a distributed message passing system."
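To make the single-producer/single-consumer point concrete, here is a
minimal bounded-ring sketch using the C++0x atomics names (just an
illustration of the general technique, not the vZOOM queue described
above):

#include <atomic>
#include <cstddef>

// Minimal bounded single-producer/single-consumer ring.  Only one thread
// calls push() and only one thread calls pop(), so plain loads/stores with
// acquire/release ordering are enough; no interlocked RMW instructions and
// no #StoreLoad barrier are needed on the fast path.
template<typename T, std::size_t N>
class spsc_ring {
  T m_items[N];
  std::atomic<std::size_t> m_head;  // written only by the consumer
  std::atomic<std::size_t> m_tail;  // written only by the producer

public:
  spsc_ring() : m_head(0), m_tail(0) {}

  bool push(const T& value) {            // producer only
    std::size_t tail = m_tail.load(std::memory_order_relaxed);
    std::size_t next = (tail + 1) % N;
    if (next == m_head.load(std::memory_order_acquire))
      return false;                      // full
    m_items[tail] = value;
    m_tail.store(next, std::memory_order_release);
    return true;
  }

  bool pop(T& out) {                     // consumer only
    std::size_t head = m_head.load(std::memory_order_relaxed);
    if (head == m_tail.load(std::memory_order_acquire))
      return false;                      // empty
    out = m_items[head];
    m_head.store((head + 1) % N, std::memory_order_release);
    return true;
  }
};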
Any thoughts?

Jun 27 '08 #144

"Chris Thomasson" <cr*****@comcast.netwrote in message
news:7M******************************@comcast.com. ..
Any thoughts?
Well, as I said various times before I'm pretty green when it comes to
threads.

My only thought was that it would be cool if I had control of the thread
scheduler and all threads communicated with the scheduler. The scheduler
keeps a data structure for each thread which is basically read-only as far
as the scheduler is concerned. Each thread has a yield_to_scheduler(
my_thread_id, mystate) primitive, which means that when it has done some
significant work it suspends and its data structure is updated (the data
structure per thread would probably be as simple as "I have now completed
step 3 of my process" or whatever). The scheduler can then block progress of
other threads or start other threads depending on the current state it sees.
The scheduler acts somewhat like a synchronous clock and each thread executes
one step or clock cycle then suspends. Essentially the idea would be to try
to make a synchronous state machine rather than an asynchronous system
(see the sketch below).
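A toy sketch of that kind of synchronous, round-robin stepping (every name
here is invented for illustration; a real version with blocking yields
would need coroutine-style machinery):

#include <cstddef>
#include <vector>

// Each task does one unit of work per step() and reports whether more remains.
struct task {
  virtual ~task() {}
  virtual bool step() = 0;   // return false when finished
};

// The "scheduler" acts like a synchronous clock: each tick runs one step of
// every live task, then it can inspect their states before the next tick.
class round_robin_scheduler {
  std::vector<task*> m_tasks;

public:
  void add(task* t) { m_tasks.push_back(t); }

  void run() {
    while (!m_tasks.empty()) {
      std::vector<task*> still_running;
      for (std::size_t i = 0; i < m_tasks.size(); ++i)
        if (m_tasks[i]->step())
          still_running.push_back(m_tasks[i]);
      m_tasks.swap(still_running);
    }
  }
};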

What do you think.. Cool or naff ?

regards
Andy Little

Jun 27 '08 #145
On May 27, 10:46 am, "kwikius" <a...@servocomm.freeserve.co.uk> wrote:
"Chris Thomasson" <cris...@comcast.net> wrote in message

news:7M******************************@comcast.com...
Any thoughts?

Well, as I said various times before I'm pretty green when it comes to
threads.

My only thought was that it would be cool if I had control of the thread
scheduler and all threads communicated with the scheduler. The scheduler
keeps a data structure for each thread which is basically read only as far
as the scheduler is concerned Each thread has a yield_to_scheduler(
my_thread_id, mystate) primitive which means that when it has done some
significant work it suspends and its data structure is updated (The data
structure per thread would probably be as simple a I have now completed
step3 of my process or whatever). The scheduler then can block progress of
other threads or start other threads dependent on the current state it sees.
The scheduler acts somewhat like a synchronous clock and each thread excutes
one step or clock cycle then suspends. Essentially the idea would be to try
to make a synchronous state machine rather than asynchronous system.

What do you think.. Cool or naff ?
This is cooperative multithreading, which is orthogonal to preemptive
multithreading. The first is used to simplify event driven programs,
the second is used to take advantage of the natural parallelism of the
platform you are running on. The first one is important for simplifying
systems which have to deal with a very large number of tasks (for example
a web service), most of which are idle most of the time, while the second
is for speeding up parallel CPU bound programs.

Really, they serve two completely different purposes, and can even
be used at the same time! In fact many platforms for
years used the so-called M:N model for implementing threads, i.e. M
cooperatively scheduled user-space threads running on N preemptively
scheduled kernel threads. Most of these implementations have since
moved to pure kernel-based threads though, because
the complexity of a user-space scheduler didn't justify the ease of
spawning a thread for very fine-grained tasks.

I think that the perfect solution is having preemptive threads as the
usual threading primitive, and, for event driven programs,
cooperatively scheduled threads on top of an application-specific
scheduler.

--
Giovanni P. Deretta
Jun 27 '08 #146
On May 27, 10:46 am, "kwikius" <a...@servocomm.freeserve.co.uk> wrote:
[...]
Well, as I said various times before I'm pretty green when it comes to
threads.
You have been begging for a course before:
http://groups.google.com/group/comp....50cee90665066b

Now, here is your mini-course:
My only thought was that it would be cool if I had control of the thread
scheduler and all threads communicated with the scheduler. The scheduler
keeps a data structure for each thread which is basically read only as far
as the scheduler is concerned Each thread has a yield_to_scheduler(
my_thread_id, mystate) primitive which means that when it has done some
significant work it suspends and its data structure is updated (The data
structure per thread would probably be as simple a I have now completed
step3 of my process or whatever). The scheduler then can block progress of
other threads or start other threads dependent on the current state it sees.
The scheduler acts somewhat like a synchronous clock and each thread excutes
one step or clock cycle then suspends. Essentially the idea would be to try
to make a synchronous state machine rather than asynchronous system.

What do you think.. Cool or naff ?
I think you are re-inventing the wheel here. If you learnt about
concurrent programming, you would know that something like this is
called non-preemptive scheduling. It simplifies a lot but on the other
hand it serialises the whole system at the global level. So it works
well on a uni-processor but it just simulates a uni-processor on a
multi-processor system.

However, there were attempts to partition the global system state
while keeping non-preemptive scheduling. It is the programming model
for which I have called your attention so many times in this
discussion thread too. Brinch Hansen in the Distributed Processes
programming model just partitions the global state and says that one
instance of the Distributed Processes (DP) works this way (with pre-
emptive scheduling, called interleaving in DP) while the Distributed
Processes themselves are active simultaneously and they
communicate with each other via Remote Procedure Calls (called
external requests in DP).

http://brinch-hansen.net/papers/1978a.pdf

You were considering something like DP but, of course, at a much lower
level.

We have just made a step during this discussion to get closer to
unifying objects and threads. The resulting active object is similar
to a Distributed Process except that it does not use non-preemptive
scheduling. See the result here from Chris:

http://groups.google.com/group/comp....c14991f9e88482

It would even make sense for the C++0x committee to take it into
the draft proposal as an optional higher-level construction:
structured parallelism.

Best Regards,
Szabolcs
Jun 27 '08 #147
On May 27, 10:50 am, gpderetta <gpdere...@gmail.com> wrote:
On May 27, 10:46 am, "kwikius" <a...@servocomm.freeserve.co.uk> wrote:
"Chris Thomasson" <cris...@comcast.net> wrote in message
news:7M******************************@comcast.com...
Any thoughts?
Well, as I said various times before I'm pretty green when it comes to
threads.
My only thought was that it would be cool if I had control of the thread
scheduler and all threads communicated with the scheduler. The scheduler
keeps a data structure for each thread which is basically read only as far
as the scheduler is concerned Each thread has a yield_to_scheduler(
my_thread_id, mystate) primitive which means that when it has done some
significant work it suspends and its data structure is updated (The data
structure per thread would probably be as simple a I have now completed
step3 of my process or whatever). The scheduler then can block progress of
other threads or start other threads dependent on the current state it sees.
The scheduler acts somewhat like a synchronous clock and each thread excutes
one step or clock cycle then suspends. Essentially the idea would be to try
to make a synchronous state machine rather than asynchronous system.
What do you think.. Cool or naff ?

This is cooperative multithreading, which is orthogonal to preemptive
multithreading. The first is used to simplify event driven programs,
the second is used to take advantage of natural parallelism of the
platform you are running on. The first one is
important to simplify systems which have to deal with a very large
amount of tasks (for example a web service), most of which are idle
most of the time; while the second is to speed up parallel CPU bound
programs.
OK. So we are differentiating here between concurrency within an
application (user space) and running several unrelated applications
concurrently (kernel space).

What's different about the two? IIRC Windows 3.1 used cooperative
multitasking in kernel space. An application (essentially a list of
callback functions called by the OS when user input occurred over
their patch) was supposed to call 'yield' periodically if it
was doing a lengthy operation.

The problem is that kernel space is hostile. The kernel just allocates
memory and services requests but has no real interest in what the
application is doing. The application is competing with other
applications and has no real incentive to call yield; it only makes
the app look slower. IOW cooperative multitasking doesn't work in
kernel space; the only solution then is preemptive multitasking, which
is a blunt instrument, but pragmatic.

Within an application, however, cooperative multitasking seems to make a
lot of sense. The application adds up to a common whole and its parts
are designed to cooperate. It's a friendly environment, not a hostile
one. In this environment preemptive multitasking doesn't seem to make
much sense, although it seems to be what we have, with a messy fight by
threads to take control of some slice of shared memory at some
arbitrary time and lock out other threads who might try to grab it
at any time.

regards
Andy Little

Jun 27 '08 #148
On May 27, 3:52 pm, kwikius <a...@servocomm.freeserve.co.uk> wrote:
On May 27, 10:50 am, gpderetta <gpdere...@gmail.com> wrote:
On May 27, 10:46 am, "kwikius" <a...@servocomm.freeserve.co.uk> wrote:
"Chris Thomasson" <cris...@comcast.net> wrote in message
>news:7M******************************@comcast.com...
Any thoughts?
Well, as I said various times before I'm pretty green when it comes to
threads.
My only thought was that it would be cool if I had control of the thread
scheduler and all threads communicated with the scheduler. The scheduler
keeps a data structure for each thread which is basically read only as far
as the scheduler is concerned Each thread has a yield_to_scheduler(
my_thread_id, mystate) primitive which means that when it has done some
significant work it suspends and its data structure is updated (The data
structure per thread would probably be as simple a I have now completed
step3 of my process or whatever). The scheduler then can block progress of
other threads or start other threads dependent on the current state it sees.
The scheduler acts somewhat like a synchronous clock and each thread excutes
one step or clock cycle then suspends. Essentially the idea would be to try
to make a synchronous state machine rather than asynchronous system.
What do you think.. Cool or naff ?
This is cooperative multithreading, which is orthogonal to preemptive
multithreading. The first is used to simplify event driven programs,
the second is used to take advantage of natural parallelism of the
platform you are running on. The first one is
important to simplify systems which have to deal with a very large
amount of tasks (for example a web service), most of which are idle
most of the time; while the second is to speed up parallel CPU bound
programs.

OK. So we are differentiating here between concurrency within an
application (user space) and running several unrelated applications
concurrently (kernel space).

What's different about the 2. IIRC Windows 3.1 used cooperative
multitasking in kernel space. An application (essentially a list of
callback functions called by the OS when user input occurred over
their patch) was supposed to call 'yield' periodically if it
was doing a lengthy operation.

The problem is that kernel space is hostile. The kernel just allocates
memory and services requests but has no real interest in what the
application is doing. The application is competing with other
applications and has no real incentive to call yield, it only makes
the app look slower. IOW cooperative multitasking doesnt work in
kernel space, the only solution then is preemptive multitasking, which
is a blunt instrument, but pragmatic.
yes, more or less this is the situation.
>
Within an application however cooperative multitasking seems to make a
lot of sense. The application adds up to a common whole and its parts
are designed to cooperate. Its a friendly environment not a hostile
one. In this environment preemptive multitasking doesnt seem to make
much sense, although it seems to be what we have with a messy fight by
threads to take control of some slice of shared memory at some
arbitrary time and lock out other threads who might to try to grab it
at any time.
It still makes sense if you are in one or more of these situations:

- you need to take advantage of multiple CPUs for parallelism (no way to do
cooperative multitasking there and still get any reasonable
parallelism),

- you need bounded worst-case latencies (with a real-time scheduler),

- your OS doesn't support an asynchronous variant of the blocking
system call you need.

--
gpd
Jun 27 '08 #149
On 2008-05-27 10:52:21, kwikius wrote:
In this environment preemptive multitasking doesnt seem to make much
sense, although it seems to be what we have with a messy fight by
threads to take control of some slice of shared memory at some arbitrary
time and lock out other threads who might to try to grab it at any time.
There are situations where one application needs to serve several
asynchronous events (for example a web server). In a cooperative
multitasking scheme the application is required to periodically poll each
device that may create an event. With preemptive multitasking, threads can
just wait for an event, and leave the actual communication mechanism to the
asynchronous event source up to the OS (which may poll a device, or react
to a hardware interrupt, or whatever).

Gerhard
Jun 27 '08 #150
