a really simple C++ abstraction around pthread_t...

I use the following technique in all of my C++ projects; here is the example
code with error checking omitted for brevity:
__________________________________________________ _______________
/* Simple Thread Object
__________________________________________________ ____________*/
#include <pthread.h>
extern "C" void* thread_entry(void*);

class thread_base {
pthread_t m_tid;
friend void* thread_entry(void*);
virtual void on_active() = 0;

public:
virtual ~thread_base() = 0;

void active_run() {
pthread_create(&m_tid, NULL, thread_entry, this);
}

void active_join() {
pthread_join(m_tid, NULL);
}
};

thread_base::~thread_base() {}

void* thread_entry(void* state) {
reinterpret_cast<thread_base*>(state)->on_active();
return 0;
}

template<typename T>
struct active : public T {
active() : T() {
this->active_run();
}

~active() {
this->active_join();
}

template<typename T_p1>
active(T_p1 p1) : T(p1) {
this->active_run();
}

template<typename T_p1, typename T_p2>
active(T_p1 p1, T_p2 p2) : T(p1, p2) {
this->active_run();
}

// [and on and on for more params...]
};


/* Simple Usage Example
__________________________________________________ ____________*/
#include <string>
#include <cstdio>
class worker : public thread_base {
std::string const m_name;

void on_active() {
std::printf("(%p)->worker(%s)::on_thread_entry()\n",
(void*)this, m_name.c_str());
}

public:
worker(std::string const& name)
: m_name(name) {
std::printf("(%p)->worker(%s)::my_thread()\n",
(void*)this, m_name.c_str());
}

~worker() {
std::printf("(%p)->worker(%s)::~my_thread()\n",
(void*)this, m_name.c_str());
}
};
class another_worker : public thread_base {
unsigned const m_id;
std::string const m_name;

void on_active() {
std::printf("(%p)->my_thread(%u/%s)::on_thread_entry()\n",
(void*)this, m_id, m_name.c_str());
}

public:
another_worker(unsigned const id, std::string const& name)
: m_id(id), m_name(name) {
}
};
int main(void) {
{
active<worker> workers[] = {
"Chris",
"John",
"Jane",
"Steve",
"Richard",
"Lisa"
};

active<another_worker> other_workers[] = {
active<another_worker>(21, "Larry"),
active<another_worker>(87, "Paul"),
active<another_worker>(43, "Peter"),
active<another_worker>(12, "Shelly"),
};
}

std::puts("\n\n\n__________________\nhit <ENTER> to exit...");
std::fflush(stdout);
std::getchar();
return 0;
}
__________________________________________________ _______________


I personally like this technique better than Boost. I find it more
straightforward and perhaps more object oriented, and the RAII nature of the
`active' helper class does not hurt either. Also, I really do think it's more
"efficient" than Boost in the way it creates threads, because it does not
copy anything...

IMHO, the really nice thing about it would have to be the `active' helper
class. It allows me to run and join, from its ctor/dtor, any object that
exposes the common (T::active_run/join) interface. Also, it allows me to
pass a variable number of arguments directly through its ctor to the object
it wraps; this is fairly convenient indeed...
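
As for the error checking that was omitted, it would go into
`active_run'/`active_join'; a minimal sketch of one possible policy
(throwing from `active_run', but only asserting in `active_join', since the
latter runs from a destructor and must not throw) looks like this:

#include <cassert>
#include <stdexcept>

// inside thread_base:
void active_run() {
    int const status = pthread_create(&m_tid, NULL, thread_entry, this);
    if (status != 0) {
        // pthread_* calls return an error number rather than setting errno
        throw std::runtime_error("pthread_create failed");
    }
}

void active_join() {
    int const status = pthread_join(m_tid, NULL);
    assert(status == 0);
    (void)status;   // keep release builds quiet
}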
Any suggestions on how I can improve this construct?

Oct 30 '08 #1
On Oct 30, 10:39 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:
I use the following technique in all of my C++ projects; here is the example
code with error checking omitted for brevity:
__________________________________________________ _______________
[...]
__________________________________________________ _______________
[...]

Any suggestions on how I can improve this construct?
It is better now that you have taken the advice about the terminology
(`active'):

http://groups.google.com/group/comp....915b5211cce641

Best Regards,
Szabolcs
Oct 30 '08 #2

"Szabolcs Ferenczi" <sz***************@gmail.com> wrote in message
news:de**********************************@u29g2000 pro.googlegroups.com...
On Oct 30, 10:39 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:
[...]
It is better now that you have taken the advice about the terminology
(`active'):
http://groups.google.com/group/comp....915b5211cce641
Yeah; I think you're right.

Oct 30 '08 #3

"Chris M. Thomasson" <no@spam.invalid> wrote in message
news:dI************@newsfe01.iad...
>I use the following technique in all of my C++ projects; here is the
example code with error checking omitted for brevity:
__________________________________________________ _______________
[...]
__________________________________________________ _______________
[...]
Any suggestions on how I can improve this construct?
One addition I forgot to add would be creating an explicit `guard' helper
object within the `active' helper object so that one can create objects and
intervene between their construction and when they actually get run... Here
is the full example code showing this:
__________________________________________________ _______________
/* Simple Thread Object
__________________________________________________ ____________*/
#include <pthread.h>
extern "C" void* thread_entry(void*);

class thread_base {
pthread_t m_tid;
friend void* thread_entry(void*);
virtual void on_active() = 0;

public:
virtual ~thread_base() = 0;

void active_run() {
pthread_create(&m_tid, NULL, thread_entry, this);
}

void active_join() {
pthread_join(m_tid, NULL);
}
};

thread_base::~thread_base() {}

void* thread_entry(void* state) {
reinterpret_cast<thread_base*>(state)->on_active();
return 0;
}
template<typename T>
struct active : public T {
struct guard {
T& m_object;

guard(T& object) : m_object(object) {
m_object.active_run();
}

~guard() {
m_object.active_join();
}
};

active() : T() {
this->active_run();
}

~active() {
this->active_join();
}

template<typename T_p1>
active(T_p1 p1) : T(p1) {
this->active_run();
}

template<typename T_p1, typename T_p2>
active(T_p1 p1, T_p2 p2) : T(p1, p2) {
this->active_run();
}

// [and on and on for more params...]
};


/* Simple Usage Example
__________________________________________________ ____________*/
#include <string>
#include <cstdio>
class worker : public thread_base {
std::string const m_name;

void on_active() {
std::printf("(%p)->worker(%s)::on_active()\n",
(void*)this, m_name.c_str());
}

public:
worker(std::string const& name)
: m_name(name) {
std::printf("(%p)->worker(%s)::my_thread()\n",
(void*)this, m_name.c_str());
}

~worker() {
std::printf("(%p)->worker(%s)::~my_thread()\n",
(void*)this, m_name.c_str());
}
};
class another_worker : public thread_base {
unsigned const m_id;
std::string const m_name;

void on_active() {
std::printf("(%p)->another_worker(%u/%s)::on_active()\n",
(void*)this, m_id, m_name.c_str());
}

public:
another_worker(unsigned const id, std::string const& name)
: m_id(id), m_name(name) {
}
};
int main(void) {
{
worker w1("Amy");
worker w2("Kim");
worker w3("Chris");
another_worker aw1(123, "Kelly");
another_worker aw2(12345, "Tim");
another_worker aw3(87676, "John");

active<thread_base>::guard w12_aw12[] = {
w1, w2, w3,
aw1, aw2, aw3
};

active<worker> workers[] = {
"Jim",
"Dave",
"Regis"
};

active<another_worker> other_workers[] = {
active<another_worker>(999, "Jane"),
active<another_worker>(888, "Ben"),
active<another_worker>(777, "Larry")
};
}

std::puts("\n\n\n__________________\nhit <ENTER> to exit...");
std::fflush(stdout);
std::getchar();
return 0;
}
__________________________________________________ _______________


Take notice of the following code snippet residing within main:
worker w1("Amy");
worker w2("Kim");
worker w3("Chris");
another_worker aw1(123, "Kelly");
another_worker aw2(12345, "Tim");
another_worker aw3(87676, "John");

active<thread_base>::guard w12_aw12[] = {
w1, w2, w3,
aw1, aw2, aw3
};
This shows how one can make use of the guard object. The objects are fully
constructed, and they allow you to go ahead and do whatever you need to do
with them _before_ you actually run/join them. This can be a very convenient
ability.
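
For a single object, the same idea in a scope looks like this (a small
sketch using the classes above):

{
    worker w("Amy");              // fully constructed; no thread running yet

    // ... configure or inspect `w' here, before any concurrency starts ...

    active<worker>::guard g(w);   // the guard ctor calls w.active_run()

    // on_active() now runs concurrently with the rest of this scope

}   // ~guard() calls w.active_join() first, then ~worker() runs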

Oct 30 '08 #4
"Chris M. Thomasson" wrote
"Chris M. Thomasson" wrote
I use the following technique in all of my C++ projects; here is the
example code with error checking omitted for brevity:

[...]
Any suggestions on how I can improve this construct?

One addition I forgot to add would be creating an explicit `guard' helper
object within the `active' helper object so that one can create objects and
intervene between their construction and when they actually get run... Here
is the full example code showing this:
__________________________________________________ _______________
[...]

active() : T() {
this->active_run();
}
<snip>

Hmm. is it ok to stay within the ctor for the whole
duration of the lifetime of the object?
IMO the ctor should be used only for initializing the object,
but not for executing or calling the "main loop" of the object
because the object is fully created only after the ctor has finished,
isn't it?

Oct 31 '08 #5
"Adem" <fo***********@alicewho.com> wrote in message
news:ge**********@aioe.org...
"Chris M. Thomasson" wrote
[...]
>template<typename T>
struct active : public T {
struct guard {
T& m_object;

guard(T& object) : m_object(object) {
m_object.active_run();
}

~guard() {
m_object.active_join();
}
};

active() : T() {

// at this point, T is constructed.

> this->active_run();

// the procedure above only concerns T.
> }
<snip>

Hmm. is it ok to stay within the ctor for the whole
duration of the lifetime of the object?
In this case it is because the object `T' is fully constructed before its
`active_run' procedure is invoked.

IMO the ctor should be used only for initializing the object,
but not for executing or calling the "main loop" of the object
because the object is fully created only after the ctor has finished,
isn't it?
Normally you're correct. However, the `active' object has nothing to do with
the "main loop" of the object it wraps. See the following discussion for
further context:

http://groups.google.com/group/comp....cae1851bb5f215

The OP of that thread was trying to start the thread within the ctor of the
threading base class. That introduces a race condition in which the derived
class's `on_active' can be invoked before its ctor has completed. The
`active' helper template gets around that by automating the calls to
`active_run/join' on a completely formed object T.
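
To make the race concrete, here is a rough sketch of the problematic pattern
the `active' wrapper avoids (hypothetical code, not the code from that
thread):

#include <pthread.h>
#include <string>
#include <cstdio>

extern "C" void* racy_entry(void*);

// BROKEN pattern, for illustration only: the base class starts the
// thread from its *own* constructor.
class racy_base {
    pthread_t m_tid;
    friend void* racy_entry(void*);
    virtual void on_active() = 0;

public:
    racy_base() {
        // The new thread may reach on_active() before the derived class
        // ctor below has run: derived members are not constructed yet,
        // and the dynamic type here is still racy_base, whose on_active()
        // is pure virtual.
        pthread_create(&m_tid, NULL, racy_entry, this);
    }

    virtual ~racy_base() { pthread_join(m_tid, NULL); }
};

void* racy_entry(void* state) {
    static_cast<racy_base*>(state)->on_active();
    return 0;
}

class racy_worker : public racy_base {
    std::string m_name;   // may not exist yet when on_active() first runs

    void on_active() {
        std::printf("%s\n", m_name.c_str());   // can crash or print garbage
    }

public:
    racy_worker(std::string const& name) : m_name(name) {}
};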

Oct 31 '08 #6
On Oct 31, 3:40 pm, "Adem" <for-usenet...@alicewho.com> wrote:
"Chris M. Thomasson" wrote
"Chris M. Thomasson" wrote
>I use the following technique in all of my C++ projects; here is the
>example code with error checking omitted for brevity:
[...]
Any suggestions on how I can improve this construct?
One addition I forgot to add would be creating an explicit `guard' helper
object within the `active' helper object so that one can create objects and
intervene between their construction and when they actually get run... Here
is the full example code showing this:
__________________________________________________ _______________
[...]
active() : T() {
this->active_run();
}

<snip>
--
Hmm. is it ok to stay within the ctor for the whole
duration of the lifetime of the object?
It does not stay within the constructor since the constructor
completes after starting a thread.
IMO the ctor should be used only for initializing the object,
That is what is happening. Part of the initialisation is launching a
thread.
but not for executing or calling the "main loop" of the object
The C++ model allows any method calls from the constructor.

12.6.2.9
"Member functions (including virtual member functions, 10.3) can be
called for an object under construction."
http://www.open-std.org/jtc1/sc22/wg...2008/n2798.pdf
because the object is fully created only after the ctor has finished,
isn't it?
Yes, it is. In this case the object is fully constructed since the
thread is started in the wrapper after the object has been fully
constructed.

If you are interested in the arguments and counter arguments, you can
check the discussion thread where this construction has emerged:

"What's the connection between objects and threads?"
http://groups.google.com/group/comp....cae1851bb5f215

Best Regards,
Szabolcs
Oct 31 '08 #7
On Oct 30, 10:39 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:
Any suggestions on how I can improve this construct?
I think this construction is good enough for the moment. Now we should
turn to enhancing the communication part for shared objects in C++0x.

You know that Boost provides the scoped lock which has some advantage
over the explicit lock/unlock in Pthreads.

Furthermore, C++0x already includes a higher level wrapper construct for
making waits on condition variables more natural and user friendly. I refer
to the wait wrapper, which expects a predicate:

<quote>
template <class Predicate>
void wait(unique_lock<mutex>& lock, Predicate pred);
Effects:
As if:
while (!pred())
wait(lock);
</quote>
http://www.open-std.org/jtc1/sc22/wg...008/n2497.html

Actually, this wrapper could be combined with the scoped lock to get a
very high level construction in C++0x.
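
For instance, combining the scoped lock with that predicated wait would read
roughly like this under the draft headers (just a sketch; the names `mtx',
`cond' and `buf' are mine, and I have not run it against any
implementation):

#include <condition_variable>
#include <deque>
#include <mutex>

std::mutex mtx;
std::condition_variable cond;
std::deque<int> buf;

void push(int v) {
    {
        std::lock_guard<std::mutex> lock(mtx);   // scoped lock
        buf.push_back(v);
    }
    cond.notify_one();
}

int pop() {
    // unique_lock can be released and reacquired by wait()
    std::unique_lock<std::mutex> lock(mtx);
    cond.wait(lock, []{ return !buf.empty(); }); // the predicate form quoted above
    int v = buf.front();
    buf.pop_front();
    return v;
}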

Well, ideally, if they dared to introduce some keywords in C++0x, a bounded
buffer could be expressed something like this:

template< typename T >
monitor class BoundedBuffer {
const unsigned int m_max;
std::deque<T> b;
public:
BoundedBuffer(const int n) : m_max(n) {}
void put(const T n) {
when (b.size() < m_max) {
b.push_back(n);
}
}
T get() {
T aux;
when (!b.empty()) {
aux = b.front();
b.pop_front();
}
return aux;
}
}

However, neither `monitor' nor `when' is going to be introduced in C++0x.
The `when' would have the semantics of the Conditional Critical Region.

Actually, something similar could be achieved with higher level
wrapper constructions that would result in program fragments like this
(I am not sure about the syntax, I did a simple transformation there):

template< typename T >
class BoundedBuffer : public Monitor {
const unsigned int m_max;
std::deque<T> b;
public:
BoundedBuffer(const int n) : m_max(n) {}
void put(const T n) {
{ When guard(b.size() < m_max);
b.push_back(n);
}
}
T get() {
T aux;
{ When guard(!b.empty());
aux = b.front();
b.pop_front();
}
return aux;
}
}

Here the super class would contain the necessary harness (mutex, condvar),
and the RAII object named `guard' could be similar to the Boost scoped lock:
it would provide locking and unlocking, but in addition it would wait in the
constructor until the predicate is satisfied. Something like this:

class When {
....
public:
When(...) {
// lock
// cv.wait(&lock, pred);
}
~When() {
// cv.broadcast
// unlock
}
};

The question is how would you hack the classes `Monitor' and `When' so
that active objects could be combined with this kind of monitor data
structure to complete the canonical example of producers-consumers.
Forget about efficiency concerns for now.

Best Regards,
Szabolcs
Oct 31 '08 #8

"Szabolcs Ferenczi" <sz***************@gmail.com> wrote in message
news:83**********************************@o4g2000p ra.googlegroups.com...
On Oct 30, 10:39 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:
Any suggestions on how I can improve this construct?
I think this construction is good enough for the moment. Now we should
turn to enhancing the communication part for shared objects in C++0x.
Okay.

[...]

The question is how would you hack the classes `Monitor' and `When' so
that active objects could be combined with this kind of monitor data
structure to complete the canonical example of producers-consumers.
Forget about efficiency concerns for now.
Here is a heck of a hack, in the form of a fully working program, for you to
take a look at, Szabolcs:
__________________________________________________ _____________________
/* Simple Thread Object
__________________________________________________ ____________*/
#include <pthread.h>
extern "C" void* thread_entry(void*);

class thread_base {
pthread_t m_tid;
friend void* thread_entry(void*);
virtual void on_active() = 0;

public:
virtual ~thread_base() = 0;

void active_run() {
pthread_create(&m_tid, NULL, thread_entry, this);
}

void active_join() {
pthread_join(m_tid, NULL);
}
};

thread_base::~thread_base() {}

void* thread_entry(void* state) {
reinterpret_cast<thread_base*>(state)->on_active();
return 0;
}
template<typename T>
struct active : public T {
struct guard {
T& m_object;

guard(T& object) : m_object(object) {
m_object.active_run();
}

~guard() {
m_object.active_join();
}
};

active() : T() {
this->active_run();
}

~active() {
this->active_join();
}

template<typename T_p1>
active(T_p1 p1) : T(p1) {
this->active_run();
}

template<typename T_p1, typename T_p2>
active(T_p1 p1, T_p2 p2) : T(p1, p2) {
this->active_run();
}

// [and on and on for more params...]
};
/* Simple Moniter
__________________________________________________ ____________*/
class moniter {
pthread_mutex_t m_mutex;
pthread_cond_t m_cond;

public:
moniter() {
pthread_mutex_init(&m_mutex, NULL);
pthread_cond_init(&m_cond, NULL);
}

~moniter() {
pthread_cond_destroy(&m_cond);
pthread_mutex_destroy(&m_mutex);
}

struct lock_guard {
moniter& m_moniter;

lock_guard(moniter& moniter_) : m_moniter(moniter_) {
m_moniter.lock();
}

~lock_guard() {
m_moniter.unlock();
}
};

void lock() {
pthread_mutex_lock(&m_mutex);
}

void unlock() {
pthread_mutex_unlock(&m_mutex);
}

void wait() {
pthread_cond_wait(&m_cond, &m_mutex);
}

void signal() {
pthread_cond_signal(&m_cond);
}

void broadcast() {
pthread_cond_broadcast(&m_cond);
}
};
#define when_x(mp_pred, mp_line) \
lock_guard guard_##mp_line(*this); \
while (! (mp_pred)) this->wait();

#define when(mp_pred) when_x(mp_pred, __LINE__)




/* Simple Usage Example
__________________________________________________ ____________*/
#include <cstdio>
#include <deque>
#define PRODUCE 10000
#define BOUND 100
#define YIELD 2
template<typename T>
struct bounded_buffer : public moniter {
unsigned const m_max;
std::deque<T> m_buffer;

public:
bounded_buffer(unsigned const max_) : m_max(max_) {}

void push(T const& obj) {
when (m_buffer.size() < m_max) {
m_buffer.push_back(obj);
signal();
}
}

T pop() {
T obj;
when (! m_buffer.empty()) {
obj = m_buffer.front();
m_buffer.pop_front();
}
return obj;
}
};
class producer : public thread_base {
bounded_buffer<unsigned>& m_buffer;

void on_active() {
for (unsigned i = 0; i < PRODUCE; ++i) {
m_buffer.push(i + 1);
std::printf("produced %u\n", i + 1);
if (! (i % YIELD)) { sched_yield(); }
}
}

public:
producer(bounded_buffer<unsigned>* buffer) : m_buffer(*buffer) {}
};
struct consumer : public thread_base {
bounded_buffer<unsigned>& m_buffer;

void on_active() {
unsigned i;
do {
i = m_buffer.pop();
std::printf("consumed %u\n", i);
if (! (i % YIELD)) { sched_yield(); }
} while (i != PRODUCE);
}

public:
consumer(bounded_buffer<unsigned>* buffer) : m_buffer(*buffer) {}
};


int main(void) {
{
bounded_buffer<unsigned> b(BOUND);
active<producer> p(&b);
active<consumer> c(&b);
}

std::puts("\n\n\n__________________\nhit <ENTER> to exit...");
std::fflush(stdout);
std::getchar();
return 0;
}
__________________________________________________ _____________________



Please take notice of the following class, which compiles fine:
template<typename T>
struct bounded_buffer : public moniter {
unsigned const m_max;
std::deque<T> m_buffer;

public:
bounded_buffer(unsigned const max_) : m_max(max_) {}

void push(T const& obj) {
when (m_buffer.size() < m_max) {
m_buffer.push_back(obj);
signal();
}
}

T pop() {
T obj;
when (! m_buffer.empty()) {
obj = m_buffer.front();
m_buffer.pop_front();
}
return obj;
}
};


Well, is that kind of what you had in mind? I make this possible by hacking
the following macro together:
#define when_x(mp_pred, mp_line) \
lock_guard guard_##mp_line(*this); \
while (! (mp_pred)) this->wait();

#define when(mp_pred) when_x(mp_pred, __LINE__)


It works, but it's a heck of a hack! ;^D
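
For reference, a call site such as the one in `push' expands into roughly
the following:

// when (m_buffer.size() < m_max) { m_buffer.push_back(obj); signal(); }
// becomes, after preprocessing (the guard gets a generated name):

lock_guard some_guard(*this);                       // locks now, unlocks at the end of the enclosing scope
while (! (m_buffer.size() < m_max)) this->wait();   // the predicate is re-tested after every wakeup
{ m_buffer.push_back(obj); signal(); }              // the braces are just an ordinary compound statement

Note that the braces after `when' are not part of the macro at all, so the
lock is held until the end of the enclosing scope (the whole member function
here), not just to the end of the braced part.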

Anyway, what do you think of this approach? I add my own "keyword"... lol.

Oct 31 '08 #9
Of course, I have created a possible DEADLOCK condition! I totally forgot to
have the bounded_buffer signal/broadcast after it pops something! Here is
the FIXED version:


/* Simple Thread Object
__________________________________________________ ____________*/
#include <pthread.h>
extern "C" void* thread_entry(void*);

class thread_base {
pthread_t m_tid;
friend void* thread_entry(void*);
virtual void on_active() = 0;

public:
virtual ~thread_base() = 0;

void active_run() {
pthread_create(&m_tid, NULL, thread_entry, this);
}

void active_join() {
pthread_join(m_tid, NULL);
}
};

thread_base::~thread_base() {}

void* thread_entry(void* state) {
reinterpret_cast<thread_base*>(state)->on_active();
return 0;
}
template<typename T>
struct active : public T {
struct guard {
T& m_object;

guard(T& object) : m_object(object) {
m_object.active_run();
}

~guard() {
m_object.active_join();
}
};

active() : T() {
this->active_run();
}

~active() {
this->active_join();
}

template<typename T_p1>
active(T_p1 p1) : T(p1) {
this->active_run();
}

template<typename T_p1, typename T_p2>
active(T_p1 p1, T_p2 p2) : T(p1, p2) {
this->active_run();
}

// [and on and on for more params...]
};
/* Simple Moniter
__________________________________________________ ____________*/
class moniter {
pthread_mutex_t m_mutex;
pthread_cond_t m_cond;

public:
moniter() {
pthread_mutex_init(&m_mutex, NULL);
pthread_cond_init(&m_cond, NULL);
}

~moniter() {
pthread_cond_destroy(&m_cond);
pthread_mutex_destroy(&m_mutex);
}

struct lock_guard {
moniter& m_moniter;

lock_guard(moniter& moniter_) : m_moniter(moniter_) {
m_moniter.lock();
}

~lock_guard() {
m_moniter.unlock();
}
};

void lock() {
pthread_mutex_lock(&m_mutex);
}

void unlock() {
pthread_mutex_unlock(&m_mutex);
}

void wait() {
pthread_cond_wait(&m_cond, &m_mutex);
}

void signal() {
pthread_cond_signal(&m_cond);
}

void broadcast() {
pthread_cond_broadcast(&m_cond);
}
};
#define when_x(mp_pred, mp_line) \
lock_guard guard_##mp_line(*this); \
while (! (mp_pred)) this->wait();

#define when(mp_pred) when_x(mp_pred, __LINE__)




/* Simple Usage Example
__________________________________________________ ____________*/
#include <cstdio>
#include <deque>
#define PRODUCE 10000
#define BOUND 100
#define YIELD 2
template<typename T>
struct bounded_buffer : public moniter {
unsigned const m_max;
std::deque<T> m_buffer;

public:
bounded_buffer(unsigned const max_) : m_max(max_) {}

void push(T const& obj) {
when (m_buffer.size() < m_max) {
m_buffer.push_back(obj);
broadcast();
}
}

T pop() {
T obj;
when (! m_buffer.empty()) {
obj = m_buffer.front();
m_buffer.pop_front();
broadcast();
}
return obj;
}
};
class producer : public thread_base {
bounded_buffer<unsigned>& m_buffer;

void on_active() {
for (unsigned i = 0; i < PRODUCE; ++i) {
m_buffer.push(i + 1);
std::printf("produced %u\n", i + 1);
if (! (i % YIELD)) { sched_yield(); }
}
}

public:
producer(bounded_buffer<unsigned>* buffer) : m_buffer(*buffer) {}
};
struct consumer : public thread_base {
bounded_buffer<unsigned>& m_buffer;

void on_active() {
unsigned i;
do {
i = m_buffer.pop();
std::printf("consumed %u\n", i);
if (! (i % YIELD)) { sched_yield(); }
} while (i != PRODUCE);
}

public:
consumer(bounded_buffer<unsigned>* buffer) : m_buffer(*buffer) {}
};


int main(void) {
{
bounded_buffer<unsigned> b(BOUND);
active<producer> p(&b);
active<consumer> c(&b);
}

std::puts("\n\n\n__________________\nhit <ENTER> to exit...");
std::fflush(stdout);
std::getchar();
return 0;
}


I am VERY sorry for this boneheaded mistake!!! OUCH!!!! BTW, the reason I
broadcast is so the bounded_buffer class can be used by multiple producers
and consumers.

Oct 31 '08 #10
What exactly do you have in mind wrt integrating the monitor, the `when'
keyword and the active object? How far off is the corrected version of my
example?

http://groups.google.com/group/comp....9f7980d9323edc
BTW, sorry for posting the broken version:

http://groups.google.com/group/comp....62e9a5d41a6ae6
I jumped the gun!

Oct 31 '08 #11
Given the fixed version I posted here:

http://groups.google.com/group/comp....9f7980d9323edc

one can easily create multiple producers and consumers like this:


int main(void) {
{
bounded_buffer<unsigned> b(BOUND);
active<producer> p[] = { &b, &b, &b, &b, &b, &b };
active<consumer> c[] = { &b, &b, &b, &b, &b, &b };
}

std::puts("\n\n\n__________________\nhit <ENTER> to exit...");
std::fflush(stdout);
std::getchar();
return 0;
}


I really do like the convenience of the `active' helper template. Anyway,
one caveat wrt the way these specific consumers are coded to act on the
data: the number of producers and consumers must be equal.
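
One way around that caveat (a sketch against the fixed listing above;
`sentinel_consumer' and `SENTINEL' are hypothetical additions, not part of
the posted code) is to push one sentinel value per consumer once all of the
producers have been joined:

static unsigned const SENTINEL = 0u;       // the producers above never push 0

struct sentinel_consumer : public thread_base {
    bounded_buffer<unsigned>& m_buffer;

    void on_active() {
        for (;;) {
            unsigned const v = m_buffer.pop();
            if (v == SENTINEL) { break; }  // each sentinel stops exactly one consumer
            std::printf("consumed %u\n", v);
        }
    }

public:
    sentinel_consumer(bounded_buffer<unsigned>* buffer) : m_buffer(*buffer) {}
};

int main(void) {
    bounded_buffer<unsigned> b(BOUND);
    producer p1(&b), p2(&b), p3(&b);       // three producers...
    sentinel_consumer c1(&b), c2(&b);      // ...but only two consumers

    {
        active<thread_base>::guard cg1(c1), cg2(c2);
        {
            active<thread_base>::guard pg1(p1), pg2(p2), pg3(p3);
        }                                  // all producers joined here
        b.push(SENTINEL);                  // then one sentinel per consumer
        b.push(SENTINEL);
    }                                      // consumers joined here
    return 0;
}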

Oct 31 '08 #12
On Oct 31, 10:14 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:
What exactly do you have in mind wrt integrating monitor, when keyword and
active object?
I did not mean integrating all three, but it is an interesting idea too.

I meant that if we have high level wrappers for easy coding of the monitor
construction, it is possible to put together applications where there are
active objects and passive ones for shared data communication. One such
example is the producers-consumers example.
How far off is the corrected version of my example?
It is promising. Better than I expected; however, the `broadcast();' should
also be included in the RAII object (the `when' object).

I did not think of using the preprocessor for the task but rather some pure
C++ construction. That is why I thought that the RAII object could be not
just a simple `lock_guard' but something like a `when_guard'. Then one does
not have to place the `broadcast();' explicitly, and the
`while (! (mp_pred)) this->wait();' can be part of the constructor of the
`when_guard'.
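
A rough sketch of what I mean (the predicate passed in as a functor;
`when_guard' is a hypothetical name, and it assumes a monitor class exposing
lock/unlock/wait/broadcast like the one posted earlier):

template<typename Pred>
class when_guard {
    monitor& m_monitor;

public:
    when_guard(monitor& mon, Pred pred) : m_monitor(mon) {
        m_monitor.lock();
        while (! pred()) m_monitor.wait();   // the wait loop moves into the ctor
    }

    ~when_guard() {
        m_monitor.broadcast();               // the broadcast moves into the dtor
        m_monitor.unlock();
    }
};

// usage, with `not_full' being a small hand-written functor over the buffer:
//   { when_guard<not_full> g(*this, not_full(*this)); m_buffer.push_back(obj); }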

On the other hand, the preprocessor allows one to keep the more natural
syntax.

Best Regards,
Szabolcs
Oct 31 '08 #13
[added: comp.lang.c++

here is a link to Ulrich Eckhardt's full post because I snipped some of it:

http://groups.google.com/group/comp....190e3b9ac81a69

]

"Ulrich Eckhardt" <do******@knuut.de> wrote in message
news:6n************@mid.uni-berlin.de...
Chris M. Thomasson wrote:
[C++ thread baseclass with virtual run() function]

Just one technical thing about the code: you use reinterpret_cast in a
place that actually calls for a static_cast. A static_cast is the right
tool to undo the implicit conversion from T* to void*.
>I personally like this technique better than Boost. I find it more
straight forward and perhaps more object oriented, the RAII nature of the
`active' helper class does not hurt either. Also, I really do think its
more "efficient" than Boost in the way it creates threads because it does
not copy anything...

There are two things that strike me here:
1. You mention "object oriented" as if that was a goal, but it isn't.
Rather, it is a means to achieve something, and the question is always
valid whether its use is justified. Java's decision to force an OO design
on you and then inviting other paradigms back in through the backdoor is
the prime example for misunderstood OO. Wrapping a thread into a class the
way you do it is another, IMHO; I'll explain below.
2. What exactly is the problem with the copying? I mean you're starting a
thread, which isn't actually a cheap operation either. Further, if you
want, you can optimise that by using transfer of ownership (auto_ptr) or
shared ownership (shared_ptr) in case you need. Boost doesn't make it
necessary to copy anything either (except under the hood it does some
dynamic allocation), but also allows you to choose if you want. However,
copying and thus avoiding shared data is the safer default, because any
shared data access requires care.
[...]

I find it unfortunate to be forced to use smart pointers and dynamically
created objects just to be able to pass common shared data to a thread. I am
still not convinced that treating a thread as an object is a bad thing...
For instance...
How would I be able to create the following fully compilable program (please
refer to the section of code under the "Simple Example" heading; the rest is
simple impl detail for the pthread abstraction) using Boost threads? The
easy way to find the example program is to go to the end of the entire
program and start moving up until you hit the "Simple Example" comment...
Here it is:
__________________________________________________ ___________________

/* Simple Thread Object
__________________________________________________ ____________*/
#include <pthread.h>
extern "C" void* thread_entry(void*);

class thread_base {
pthread_t m_tid;
friend void* thread_entry(void*);
virtual void on_active() = 0;

public:
virtual ~thread_base() = 0;

void active_run() {
pthread_create(&m_tid, NULL, thread_entry, this);
}

void active_join() {
pthread_join(m_tid, NULL);
}
};

thread_base::~thread_base() {}

void* thread_entry(void* state) {
reinterpret_cast<thread_base*>(state)->on_active();
return 0;
}
template<typename T>
struct active : public T {
struct guard {
T& m_object;

guard(T& object) : m_object(object) {
m_object.active_run();
}

~guard() {
m_object.active_join();
}
};

active() : T() {
this->active_run();
}

~active() {
this->active_join();
}

template<typename T_p1>
active(T_p1 p1) : T(p1) {
this->active_run();
}

template<typename T_p1, typename T_p2>
active(T_p1 p1, T_p2 p2) : T(p1, p2) {
this->active_run();
}

template<typename T_p1, typename T_p2, typename T_p3>
active(T_p1 p1, T_p2 p2, T_p3 p3) : T(p1, p2, p3) {
this->active_run();
}

// [and on and on for more params...]
};


/* Simple Monitor
__________________________________________________ ____________*/
class monitor {
pthread_mutex_t m_mutex;
pthread_cond_t m_cond;

public:
monitor() {
pthread_mutex_init(&m_mutex, NULL);
pthread_cond_init(&m_cond, NULL);
}

~monitor() {
pthread_cond_destroy(&m_cond);
pthread_mutex_destroy(&m_mutex);
}

struct lock_guard {
monitor& m_monitor;

lock_guard(monitor& monitor_) : m_monitor(monitor_) {
m_monitor.lock();
}

~lock_guard() {
m_monitor.unlock();
}
};

struct signal_guard {
monitor& m_monitor;
bool const m_broadcast;

signal_guard(monitor& monitor_, bool broadcast = true)
: m_monitor(monitor_), m_broadcast(broadcast) {

}

~signal_guard() {
if (m_broadcast) {
m_monitor.broadcast();
} else {
m_monitor.signal();
}
}
};

void lock() {
pthread_mutex_lock(&m_mutex);
}

void unlock() {
pthread_mutex_unlock(&m_mutex);
}

void wait() {
pthread_cond_wait(&m_cond, &m_mutex);
}

void signal() {
pthread_cond_signal(&m_cond);
}

void broadcast() {
pthread_cond_broadcast(&m_cond);
}
};
#define when_xx(mp_pred, mp_line) \
monitor::lock_guard lock_guard_##mp_line(*this); \
monitor::signal_guard signal_guard_##mp_line(*this); \
while (! (mp_pred)) this->wait();

#define when_x(mp_pred, mp_line) when_xx(mp_pred, mp_line)
#define when(mp_pred) when_x(mp_pred, __LINE__)



/* Simple Example
__________________________________________________ ____________*/
#include <string>
#include <deque>
#include <cstdio>
template<typename T>
struct bounded_buffer : monitor {
unsigned const m_max;
std::deque<T> m_buffer;

public:
bounded_buffer(unsigned const max_) : m_max(max_) {}

void push(T const& obj) {
when (m_buffer.size() < m_max) {
m_buffer.push_back(obj);
}
}

T pop() {
when (! m_buffer.empty()) {
T obj = m_buffer.front();
m_buffer.pop_front();
return obj;
}
}
};
struct person : thread_base {
typedef bounded_buffer<std::string> queue;
std::string const m_name;
queue& m_response;

public:
queue m_request;

void on_active() {
m_response.push(m_name + " is ready to receive some questions!");
for (unsigned i = 0 ;; ++i) {
std::string msg(m_request.pop());
if (msg == "QUIT") { break; }
std::printf("(Q)->%s: %s\n", m_name.c_str(), msg.c_str());
switch (i) {
case 0:
msg = "(A)->" + m_name + ": Well, I am okay";
break;

case 1:
msg = "(A)->" + m_name + ": I already told you!";
break;

default:
msg = "(A)->" + m_name + ": I am PISSED OFF NOW!";
}
m_response.push(msg);
}
std::printf("%s was asked to quit...\n", m_name.c_str());
m_response.push(m_name + " is FINISHED");
}

person(std::string const& name, queue* q, unsigned const bound)
: m_name(name), m_response(*q), m_request(bound) {}
};
#define BOUND 10
int main(void) {
{
person::queue response(BOUND);

active<person> chris("Chris", &response, BOUND);
active<person> amy("Amy", &response, BOUND);

std::printf("%s\n", response.pop().c_str());
std::printf("%s\n\n", response.pop().c_str());

chris.m_request.push("How are you doing?");
amy.m_request.push("How are you feeling?");
std::printf("%s\n", response.pop().c_str());
std::printf("%s\n\n", response.pop().c_str());

chris.m_request.push("Do you really feel that way?");
amy.m_request.push("Are you sure?");
std::printf("%s\n", response.pop().c_str());
std::printf("%s\n\n", response.pop().c_str());

chris.m_request.push("Why do you feel that way?");
amy.m_request.push("Can you share more of you feelings?");
std::printf("%s\n", response.pop().c_str());
std::printf("%s\n\n", response.pop().c_str());

chris.m_request.push("QUIT");
amy.m_request.push("QUIT");

std::printf("%s\n", response.pop().c_str());
std::printf("%s\n", response.pop().c_str());
}

std::puts("\n\n\n__________________\nhit <ENTER> to exit...");
std::fflush(stdout);
std::getchar();
return 0;
}

__________________________________________________ ___________________


Please correct me if I am wrong, but Boost would force me to dynamically
create the `person::queue request' object in main, right? AFAICT, this
example shows why it can be a good idea to treat a thread as an object. In
this case, a person object is a thread. Anyway, as of now, I am not entirely
convinced that Boost has a far superior method of creating threads...


Anyway, I really do need to think about the rest of your post; you raise
several interesting issues indeed.

Nov 2 '08 #14

"Chris M. Thomasson" <no@spam.invalid> wrote in message
news:XR*****************@newsfe01.iad...
[...]

I should have made the characters `Chris' and `Amy' completely separate
objects deriving from a `person' base class. That way, their personalities
and therefore their responses would not be identical. Treating threads as
objects works very well for me personally...

Nov 2 '08 #15
Responding late, it was a rough week...

Chris M. Thomasson wrote:
"Ulrich Eckhardt" <do******@knuut.de> wrote in message
news:6n************@mid.uni-berlin.de...
>Chris M. Thomasson wrote:
[C++ thread baseclass with virtual run() function]
>>I personally like this technique better than Boost. I find it more
straight forward and perhaps more object oriented, the RAII nature of
the `active' helper class does not hurt either. Also, I really do think
its more "efficient" than Boost in the way it creates threads because it
does not copy anything...

[...]

I find it unfortunate to be forced to use smart pointers and dynamically
created objects just to be able to pass common shared data to a thread. I am
still not convinced that treating a thread as an object is a bad thing...
For instance...
How would I be able to create the following fully compilable program (please
refer to the section of code under the "Simple Example" heading; the rest is
simple impl detail for the pthread abstraction) using Boost threads? The
easy way to find the example program is to go to the end of the entire
program and start moving up until you hit the "Simple Example" comment...
If I get your code right, this program is creating two threads, each of
which models a person's behaviour. The main thread then asks them questions
which are stored in a queue to be handled asynchronously and prints the
answers which are received from another queue. Is that right?

Now, how would I rather design this? Firstly, I would define the behaviour
of the persons in a class. This class would be completely ignorant of any
threading going on in the background and only model the behaviour.

Then, I would create a type that combines a queue with a condition and a
mutex. This could then be used to communicate between threads in a safe
way. This would be pretty much the same as your version; both
pthread_cond_t and pthread_mutex_t translate easily to boost::mutex and
boost::condition.
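
Such a queue type might look roughly like this (only a sketch; the actual
`queue' used below is not spelled out in this post):

#include <deque>
#include <string>
#include <boost/thread/condition.hpp>
#include <boost/thread/mutex.hpp>

class queue {
    std::deque<std::string> m_items;
    boost::mutex m_mutex;
    boost::condition m_cond;

public:
    void push(std::string const& s) {
        {
            boost::mutex::scoped_lock lock(m_mutex);
            m_items.push_back(s);
        }
        m_cond.notify_one();
    }

    std::string pop() {
        boost::mutex::scoped_lock lock(m_mutex);
        while (m_items.empty()) {
            m_cond.wait(lock);   // releases the mutex while waiting
        }
        std::string s = m_items.front();
        m_items.pop_front();
        return s;
    }
};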
Now, things get a bit more complicated, because first some questions need to
be answered. The first one is what the code should do in case of failures.
What happens if one person object fails to answer a question? What if
answering takes too long? What if the thread where the answers are
generated is terminated due to some error? What if the invoking thread
fails to queue a request, e.g. due to running out of memory?

The second question to answer is what kind of communication you are actually
modelling here. In the example, it seems as if you were making a request
and then receiving the response to that request, but that isn't actually
completely true. Rather, the code is sending a message and receiving a
message, but there is no correlation between the sent and received message.
If this correlation is required, I would actually return a cookie when I
make a request and retrieve the answer via that cookie.
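
That cookie idea could be sketched like so (purely hypothetical; nothing
like this appears later in the thread):

#include <map>
#include <string>
#include <boost/thread/condition.hpp>
#include <boost/thread/mutex.hpp>

class answer_board {
    std::map<unsigned, std::string> m_answers;   // cookie -> answer
    unsigned m_next_cookie;
    boost::mutex m_mutex;
    boost::condition m_cond;

public:
    answer_board() : m_next_cookie(0) {}

    // the asking side reserves a cookie for each question it sends
    unsigned new_cookie() {
        boost::mutex::scoped_lock lock(m_mutex);
        return m_next_cookie++;
    }

    // the answering thread files its answer under the question's cookie
    void post(unsigned cookie, std::string const& answer) {
        {
            boost::mutex::scoped_lock lock(m_mutex);
            m_answers[cookie] = answer;
        }
        m_cond.notify_all();
    }

    // the asking side blocks until *that particular* answer has arrived
    std::string wait_for(unsigned cookie) {
        boost::mutex::scoped_lock lock(m_mutex);
        std::map<unsigned, std::string>::iterator it;
        while ((it = m_answers.find(cookie)) == m_answers.end()) {
            m_cond.wait(lock);
        }
        std::string answer = it->second;
        m_answers.erase(it);
        return answer;
    }
};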

In any case, you can write an equivalent program using Boost.Thread. If you
want, you can wrap stuff into a class, e.g. like this:

struct active_person
{
explicit active_person(queue& out_):
out(out_),
th( bind( &active_person::handle_requests, this))
{}
~active_person()
{ th.join(); }
void push_question( std::string const& str)
{ in.push(str); }
std::string pop_answer()
{ return out.pop(); }
private:
void handle_requests()
{
while(true)
{
std::string question = in.pop();
if(question=="QUIT")
return;
out.push(p.ask(question));
}
}
queue in;
queue& out;
person p;
// note: this one must come last, because its initialisation
// starts a thread using this object.
boost::thread th;
};

You could also write a simple function:

void async_quiz( person& p, queue& questions, queue& answers)
{
while(true)
{
std::string question = questions.pop();
if(question=="QUIT")
return;
answers.push(p.ask(question));
}
}

queue questions, answers;
person amy("amy");
thread th(bind( &async_quiz, ref(amy),
ref(questions), ref(answers)));
.... // ask questions
th.join();

Note the use of ref() to avoid the default copying of the argument. Using
pointers would work, too, but I find it ugly.

In any case, this is probably not something I would write that way. The
point is that I cannot force a thread to shut down or handle a request. I
can ask it to, and maybe wait for it (possibly infinitely), but I cannot
force it. So, if there is any chance of failure, either in the request
handling or in the request handling mechanism in general, this approach
becomes unusable. Therefore, I would rather prepare for the case that a
thread becomes unresponsive by allowing it to run detached from the local
stack.

Writing about that, there is one thing that came somehow as a revelation to
me, and that was the Erlang programming language. It has concurrency built
in and its threads (called processes) communicate using messages. Using a
similar approach in C++ allows writing very clean programs that don't
suffer lock contention. In any case, I suggest taking a look at Erlang just
for the paradigms; I found some really good ideas to steal from there. ;)

#define when_xx(mp_pred, mp_line) \
monitor::lock_guard lock_guard_##mp_line(*this); \
monitor::signal_guard signal_guard_##mp_line(*this); \
while (! (mp_pred)) this->wait();

#define when_x(mp_pred, mp_line) when_xx(mp_pred, mp_line)
#define when(mp_pred) when_x(mp_pred, __LINE__)
Just one note on this one: When I read your example, I stumbled over the use
of a 'when' keyword where I would expect an 'if'. I find this here really
bad C++ for several reasons:
1. Macros should be UPPERCASE_ONLY. That way, people see that it's a macro
and they know that it may or may not behave like a function. It simply
avoids surprises.
2. It is used in a way that breaks its integration into the control flow
syntax. Just imagine that I would use it in the context of an 'if'
expression:

if(something)
when(some_condition)
do_something_else();

I'd rather write it like this:

if(something) {
LOCK_AND_WAIT_CONDITION(some_condition);
do_something_else();
}

Firstly, you see that these are separate statements and this then
automatically leads to you adding the necessary curly braces.
Please correct me if I am wrong, but Boost would force me to dynamically
create the `person::queue request' object in main right?
No. The argument when starting a thread is something that can be called,
like a function or functor, that's all. This thing is then copied before it
is used by the newly started thread. Typically, this thing is a function
bound to its context arguments using Boost.Bind or Lambda. If you want, you
can make this a simple structure containing a few pointers or references,
i.e. something dead easy to copy.
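
For instance, something as small as this would do (a sketch reusing the
`person', `queue' and `async_quiz' pieces from above):

struct quiz_task {
    person* p;          // a few raw pointers: dead easy to copy
    queue*  questions;
    queue*  answers;

    void operator()() const {
        async_quiz(*p, *questions, *answers);
    }
};

// starting the thread copies only the three pointers held by the functor:
queue questions, answers;
person amy("amy");
quiz_task task = { &amy, &questions, &answers };
boost::thread th(task);
// ... ask questions ...
th.join();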

cheers

Uli

Nov 6 '08 #16
On Nov 6, 11:24 pm, Ulrich Eckhardt <dooms...@knuut.de> wrote:
Responding late, it was a rough week...

Chris M. Thomasson wrote:
"Ulrich Eckhardt" <dooms...@knuut.de> wrote in message
news:6n************@mid.uni-berlin.de...
Chris M. Thomasson wrote:
[...]
#define when_xx(mp_pred, mp_line) \
monitor::lock_guard lock_guard_##mp_line(*this); \
monitor::signal_guard signal_guard_##mp_line(*this); \
while (! (mp_pred)) this->wait();
#define when_x(mp_pred, mp_line) when_xx(mp_pred, mp_line)
#define when(mp_pred) when_x(mp_pred, __LINE__)

Just one note on this one: When I read your example, I stumbled over the use
of a 'when' keyword where I would expect an 'if'.
The `when' is basically different from the `if'.

Since I came up with this construction, I might give you an answer. The
`when' is not a genuine keyword: it is still C++ code, and `when' is not a
C++ keyword. However, here we are experimenting with whether a programming
style could be applied in a C++ program that mimics the original Conditional
Critical Region programming proposal, where `when' is a keyword.

The `if' is a sequential control statement whereas the `when' is not. The
semantics of `if' and `when' differ in that the `if' fails when the
condition does not hold at the time it is evaluated, and the statement takes
the `else' branch if one is given. The `when' keyword, on the other hand,
delays the thread for as long as the condition does not hold. In other
words, the `when' specifies a guard.
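
A tiny example of the difference, using the bounded buffer from earlier in
this thread:

void push(T const& obj) {
    // `if': the guard is tested once; if the buffer happens to be full
    // right now, the push is simply skipped (or you write a retry loop)
    if (m_buffer.size() < m_max) { m_buffer.push_back(obj); }
}

void push(T const& obj) {
    // `when': the calling thread is delayed (releasing the monitor lock
    // while waiting) until the guard holds, and only then does the body run
    when (m_buffer.size() < m_max) { m_buffer.push_back(obj); }
}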
I find this here really
bad C++ for several reasons:
1. Macros should be UPPERCASE_ONLY. That way, people see that it's a macro
and they know that it may or may not behave like a function. It simply
avoids surprises.
It is not necessarily meant to be a macro; the macro is just one possible
implementation of the construction.
2. It is used in a way that breaks its integration into the control flow
syntax. Just imagine that I would use it in the context of an 'if'
expression:

if(something)
  when(some_condition)
    do_something_else();

I'd rather write it like this:

if(something) {
  LOCK_AND_WAIT_CONDITION(some_condition);
  do_something_else();
}

Firstly, you see that these are separate statements and this then
automatically leads to you adding the necessary curly braces.
Please check out the post where the `when' construction is introduced
into this discussion thread:

http://groups.google.com/group/comp....476c1c7d91c008

If you have an idea how to solve the original problem I described,
your solution is welcome.

Best Regards,
Szabolcs
Nov 7 '08 #17

"Ulrich Eckhardt" <do******@knuut.de> wrote in message
news:6n************@mid.uni-berlin.de...
Responding late, it was a rough week...
[...]
In any case, you can write an equivalent program using Boost.Thread. If you
want, you can wrap stuff into a class, e.g. like this:

[...]
I would create the active template using Boost like:

// quick sketch - may have typo
__________________________________________________ _____________________
#include <boost/thread.hpp>
#include <boost/bind.hpp>

template<typename T>
class active : public T {
boost::thread m_active_handle;

public:
active() : T(), m_active_handle(boost::bind(&T::on_active, (T*)this)) {}

~active() { m_active_handle.join(); }

template<typename P1>
active(P1 p1) : T(p1),
m_active_handle(boost::bind(&T::on_active, (T*)this)) {}

template<typename P1, typename P2>
active(P1 p1, P2 p2) : T(p1, p2),
m_active_handle(boost::bind(&T::on_active, (T*)this)) {}

// [on and on for more params...]
};
__________________________________________________ _____________________


Then I could use it like:
struct person {
void on_active() {
// [...]
}
};
int main() {
active<person> p[10];
return 0;
}

I personally like this construction; I think it's "cleaner" than using the
Boost interface "directly".

Nov 9 '08 #18
