Request for comments about synchronized queue using boost

I am currently designing a synchronized queue used to communicate
between threads. Is the code given below a good solution? Am I
using mutex lock/unlock more than needed?

Are there any resources out there on the Internet on how to design
*thread-safe* *efficient* data-
structures?

/Nordlöw

The file synched_queue.hpp follows:

#ifndef PNW__SYNCHED_QUEUE_HPP
#define PNW__SYNCHED_QUEUE_HPP

/*!
 * @file synched_queue.hpp
 * @brief Synchronized (Thread Safe) Container Wrapper on std::queue
 *        using Boost.Thread.
 */

#include <queue>
#include <iostream>

#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>

// =============================================================================

template <typename T>
class synched_queue
{
    std::queue<T> q;            ///< Queue.
    boost::mutex m;             ///< Mutex.
public:
    /*!
     * Push @p value.
     */
    void push(const T & value) {
        boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
        q.push(value);
    }

    /*!
     * Try and pop into @p value, returning directly in any case.
     * @return true if pop was successful, false otherwise.
     */
    bool try_pop(T & value) {
        boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
        if (q.size()) {
            value = q.front();
            q.pop();
            return true;
        }
        return false;
    }

    /// Pop and return value, possibly waiting forever.
    T wait_pop() {
        boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
        // wait until the queue has at least one element
        c.wait(sl, boost::bind(&std::queue<T>::size, q));
        T value = q.front();
        q.pop();
        return value;
    }

    size_type size() const {
        boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
        return q.size();
    }

    bool empty() const {
        boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
        return q.empty();
    }

};

// =============================================================================

#endif
Oct 15 '08 #1
Nordlöw wrote:
I am currently designing a synchronized queue used to communicate
between threads. Is the code given below a good solution? Am I
using mutex lock/unlock more than needed?

Are there any resources out there on the Internet on how to design
*thread-safe* *efficient* data-
structures?
comp.programming.threads?
/Nordlöw

[...]

/// Pop and return value, possibly waiting forever.
T wait_pop() {
    boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
    // wait until the queue has at least one element
    c.wait(sl, boost::bind(&std::queue<T>::size, q));
    T value = q.front();
    q.pop();
    return value;
}
I haven't done any threading in a decade or so, but I wonder how
in the above code anything could be put into the locked queue.
What am I missing?
Oh, and I wonder what 'c' is.
[...]
Schobi
Oct 15 '08 #2
On Oct 15, 2:36 pm, Nordlöw <per.nord...@gmail.com> wrote:
I am currently designing a synchronized queue used to communicate
between threads. Is the code given below a good solution? Am I
using mutex lock/unlock more than needed?
/Nordlöw

The file synched_queue.hpp follows:

#ifndef PNW__SYNCHED_QUEUE_HPP
#define PNW__SYNCHED_QUEUE_HPP

/*!
 * @file synched_queue.hpp
 * @brief Synchronized (Thread Safe) Container Wrapper on std::queue
 *        using Boost.Thread.
 */

#include <queue>
#include <iostream>

#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>

// =============================================================================

template <typename T>
class synched_queue
{
    std::queue<T> q;            ///< Queue.
    boost::mutex m;             ///< Mutex.
A member variable is missing here:

    boost::condition c;
public:
    /*!
     * Push @p value.
     */
    void push(const T & value) {
        boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
        q.push(value);
You need to notify other threads waiting on the queue:

        c.notify_one();
    }

    /*!
     * Try and pop into @p value, returning directly in any case.
     * @return true if pop was successful, false otherwise.
     */
    bool try_pop(T & value) {
        boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
        if (q.size()) {
            value = q.front();
            q.pop();
            return true;
        }
        return false;
    }

    /// Pop and return value, possibly waiting forever.
    T wait_pop() {
        boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
        // wait until the queue has at least one element
The following line:

        c.wait(sl, boost::bind(&std::queue<T>::size, q));

boost::bind(&std::queue<T>::size, q) stores a copy of the queue in the
object created by boost::bind, so that the wait never finishes if the
queue is empty (and if the condition variable is not notified (see
above)).

It should be as simple as:

        while(q.empty())
            c.wait(sl);
        T value = q.front();
        q.pop();
        return value;
    }

    size_type size() const {
        boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
        return q.size();
    }

    bool empty() const {
        boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
        return q.empty();
    }

};

// =============================================================================

#endif

The other thing is that the queue does not support destruction: the
destructor does not unblock any threads blocked in wait.

Apart from that, the mutex is held for too long. You don't really need
to hold the lock when allocating memory for elements and when invoking
the copy constructor of the elements.

Here is an improved version (although a bit simplified):

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>
#include <boost/function.hpp>
#include <boost/noncopyable.hpp>
#include <stdexcept>
#include <list>

template<class T>
class atomic_queue : private boost::noncopyable
{
private:
    boost::mutex mtx_;
    boost::condition cnd_;
    bool cancel_;
    unsigned waiting_;

    // use list as a queue because it allows for splicing:
    // moving elements between lists without any memory allocation and copying
    typedef std::list<T> queue_type;
    queue_type q_;

public:
    struct cancelled : std::logic_error
    {
        cancelled() : std::logic_error("cancelled") {}
    };

    atomic_queue()
        : cancel_()
        , waiting_()
    {}

    ~atomic_queue()
    {
        // cancel all waiting threads
        this->cancel();
    }

    void cancel()
    {
        // cancel all waiting threads
        boost::mutex::scoped_lock l(mtx_);
        cancel_ = true;
        cnd_.notify_all();
        // and wait till they are done
        while(waiting_)
            cnd_.wait(l);
    }

    void push(T const& t)
    {
        // this avoids an allocation inside the critical section below
        queue_type tmp(&t, &t + 1);
        {
            boost::mutex::scoped_lock l(mtx_);
            q_.splice(q_.end(), tmp);
        }
        cnd_.notify_one();
    }

    // this function provides only basic exception safety if T's copy ctor can
    // throw, or strong exception safety if T's copy ctor is nothrow
    T pop()
    {
        // this avoids copying T inside the critical section below
        queue_type tmp;
        {
            boost::mutex::scoped_lock l(mtx_);
            ++waiting_;
            while(!cancel_ && q_.empty())
                cnd_.wait(l);
            --waiting_;
            if(cancel_)
            {
                cnd_.notify_all();
                throw cancelled();
            }
            tmp.splice(tmp.end(), q_, q_.begin());
        }
        return tmp.front();
    }
};

typedef boost::function<void()> unit_of_work;
typedef atomic_queue<unit_of_work> work_queue;

void typical_thread_pool_working_thread(work_queue* q)
try
{
    for(;;)
        q->pop()();
}
catch(work_queue::cancelled&)
{
    // time to terminate the thread
}
Are there any resources out there on the Internet on how to design
*thread-safe* *efficient* data-structures?
I would recommend the "Programming with POSIX Threads" book by David R.
Butenhof.

--
Max
Oct 15 '08 #3
Hendrik Schober wrote:
Nordlöw wrote:
>I am currently designing a synchronized queue used to communicate
between threads. Is the code given below a good solution? Am I
using mutex lock/unlock more than needed?

Are there any resources out there on the Internet on how to design
*thread-safe* *efficient* data-
structures?

comp.programming.threads?
>/Nordlöw

[...]

/// Pop and return value, possibly waiting forever.
T wait_pop() {
    boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
    // wait until the queue has at least one element
    c.wait(sl, boost::bind(&std::queue<T>::size, q));
    T value = q.front();
    q.pop();
    return value;
}

I haven't done any threading in a decade or so, but I wonder how
in the above code anything could be put into the locked queue.
What am I missing?
Oh, and I wonder what 'c' is.
c is a condition variable:
http://www.boost.org/doc/libs/1_36_0...on.condvar_ref

You lock the mutex, then wait for a condition, which (automatically)
unlocks the mutex, and locks it again if the condition occurs.
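
In other words, the canonical pattern looks roughly like this (a minimal sketch
against the Boost.Thread API of that era; the function names are only illustrative):

#include <queue>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>

boost::mutex m;
boost::condition c;
std::queue<int> q;

int waiting_consumer()
{
    boost::mutex::scoped_lock sl(m);  // lock the mutex
    while (q.empty())                 // loop guards against spurious wake-ups
        c.wait(sl);                   // atomically unlocks m, sleeps, re-locks on wake-up
    int value = q.front();
    q.pop();
    return value;
}

void producer(int value)
{
    {
        boost::mutex::scoped_lock sl(m);
        q.push(value);
    }                                 // release the lock before notifying
    c.notify_one();                   // wake one waiting consumer
}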

--
Thomas
Oct 15 '08 #4
Thomas J. Gritzan wrote:
[...]
> I haven't done any threading in a decade or so, but I wonder how
in the above code anything could be put into the locked queue.
What am I missing?
Oh, and I wonder what 'c' is.

c is a condition variable:
http://www.boost.org/doc/libs/1_36_0...on.condvar_ref

You lock the mutex, then wait for a condition, which (automatically)
unlocks the mutex, and locks it again if the condition occurs.
Ah, thanks. I haven't looked at boost's threads yet.

Schobi
Oct 15 '08 #5
On 15 Okt, 18:02, Maxim Yegorushkin <maxim.yegorush...@gmail.com> wrote:
[...]

Doesn't the push-argument "T const & t" instead of my version "const T
& t" mean that we don't copy at all here? I believe &t evaluates to
the memory pointer of t:

    void push(T const& t)
    {
        // this avoids an allocation inside the critical section below
        queue_type tmp(&t, &t + 1);
        {
            boost::mutex::scoped_lock l(mtx_);
            q_.splice(q_.end(), tmp);
        }
        cnd_.notify_one();
    }

/Nordlöw
Oct 16 '08 #6
On 15 Okt, 20:16, Hendrik Schober <spamt...@gmx.de> wrote:
Thomas J. Gritzan wrote:
[...]
I haven't done any threading in a decade or so, but I wonder how
in the above code anything could be put into the locked queue.
What am I missing?
Oh, and I wonder what 'c' is.
c is a condition variable:
http://www.boost.org/doc/libs/1_36_0...ynchronization....
You lock the mutex, then wait for a condition, which (automatically)
unlocks the mutex, and locks it again if the condition occurs.

Ah, thanks. I haven't looked at boost's threads yet.

Schobi
How can I use your queue structure in the following code example?
#include "../synched_queue.hpp"
#include "../threadpool/include/threadpool.hpp"
#include <iostream>

using namespace boost::threadpool;

template <typename T>
void produce(synched_queue<T>& q, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        T x = i;
        q.push(x);
        std::cout << "i:" << i << " produced: " << x << std::endl;
    }
}

template <typename T>
void consume(synched_queue<T>& q, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        T x = q.wait_pop();
        std::cout << "i:" << i << " consumed: " << x << std::endl;
    }
}

int main()
{
    typedef float Elm;
    synched_queue<float> q;
    // boost::thread pt(boost::bind(produce<Elm>, q, 10));
    // boost::thread ct(boost::bind(consume<Elm>, q, 10));
    // pt.join();
    // ct.join();
    return 0;
}
Thanks in advance,
/Nordlöw
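
A hypothetical wiring sketch (not from the thread): because the queue holds a
boost::mutex it is not copyable, so it has to be handed to boost::bind through
boost::ref so that both threads share the same instance; this also assumes
synched_queue has gained the boost::condition member and the notify_one() call
suggested earlier in the thread:

#include <boost/thread/thread.hpp>
#include <boost/bind.hpp>
#include <boost/ref.hpp>

int main()
{
    typedef float Elm;
    synched_queue<Elm> q;

    // boost::ref keeps boost::bind from trying to copy the (noncopyable) queue
    boost::thread pt(boost::bind(produce<Elm>, boost::ref(q), 10));
    boost::thread ct(boost::bind(consume<Elm>, boost::ref(q), 10));

    pt.join();
    ct.join();
    return 0;
}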
Oct 16 '08 #7
On Oct 16, 3:44 pm, Nordlöw <per.nord...@gmail.com> wrote:
[...]

Doesn't the push-argument "T const & t" instead of my version "const T
& t" mean that we don't copy at all here?
No, T const& and const T& is the same thing: a reference to a constant
T.
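
For instance (a tiny illustration, not from the original post), these two
declarations name exactly the same type:

void f(const int& x);  // "const T&" style
void f(int const& x);  // "T const&" style -- same type, so this just redeclares f

Either way the argument is passed by reference, so no copy happens at the call
site; the copy happens later, when the element is inserted into the list tmp below.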
I believe &t evaluates to
the memory pointer of t:

    void push(T const& t)
    {
        // this avoids an allocation inside the critical section below
        queue_type tmp(&t, &t + 1);
        {
            boost::mutex::scoped_lock l(mtx_);
            q_.splice(q_.end(), tmp);
        }
        cnd_.notify_one();
    }
The trick here is that element t is first inserted into a temporary list
tmp on the stack:

    queue_type tmp(&t, &t + 1); // create a list with a copy of t

This involves allocating memory and copying t, and it is done here
without holding the lock, because allocating memory may be expensive
(it might cause the system to do swapping), and while you hold the lock
none of the worker threads can pop elements from the queue. Next, the
lock is acquired and the element is moved from list tmp into q_:

    q_.splice(q_.end(), tmp);

This operation does not involve any memory allocation or copying of
elements (splicing simply relinks the nodes of the doubly-linked lists),
which makes the critical section execute very quickly without stalling
the worker threads for long.
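
A small standalone illustration (not from the original post) of why the splice is
cheap: the spliced node keeps its identity, so the element is neither copied nor
reallocated:

#include <list>
#include <cassert>

int main()
{
    std::list<int> src(1, 42);        // a list holding a single node with value 42
    const int* before = &src.front(); // address of the element stored in that node

    std::list<int> dst;
    dst.splice(dst.end(), src);       // relink the node from src into dst

    assert(src.empty());
    assert(&dst.front() == before);   // same object: the node was relinked, not copied
}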

--
Max

Oct 16 '08 #8
On Oct 15, 3:36 pm, Nordlöw <per.nord...@gmail.com> wrote:
I am currently designing a synchronized queue used to communicate
between threads. Is the code given below a good solution?
Not really.
[...]
Are there any resources out there on the Internet on how to design
*thread-safe* *efficient* data-
structures?
Sure.
http://www.google.nl/search?q=boost+thread+safe+queue=

Best Regards,
Szabolcs
Oct 16 '08 #9
On Oct 15, 2:36 pm, Nordlöw <per.nord...@gmail.com> wrote:
I am currently designing a synchronized queue used to communicate
between threads. Is the code given below a good solution? Am I
using mutex lock/unlock more than needed?

Are there any resources out there on the Internet on how to design
*thread-safe* *efficient* data-
structures?
You can also try concurrent_queue from
http://www.threadingbuildingblocks.o...ncurrent_queue

Scout around that link for more documentation.

--
Max
Oct 16 '08 #10
