
std::deque Thread Safety Situation

I've read a bit online seeing that two writes are not safe, which I
understand, but would 1 thread push()'ing and 1 thread pop()'ing be
thread-safe? Basically my situation is as follows:

--Thread 1--
1. Reads TCPIP Buffer
2. Adds Buffer to Queue (q.size() to check queue isn't full, and
q.push_back(...))
3. Signals Reading Thread Event & Goes Back to Wait for More Messages
on TCPIP

--Thread 2--
1. Waits for Signal
2. while(q.size() > 0) myStruct = q.front();
2a. processes myStruct (doesn't modify it but does copy some
information for another queue).
2b. delete[] myStruct.Buffer
2c. q.pop_front();
3. Goes back to waiting for signal

I've run the app for a few hours (basically saturating the TCP/IP
traffic) and there were no apparent problems. The "processes
myStruct" step takes a while, and I can't have the push_back(...)
thread blocked while processing is running.

I can add Critical Section locks around the ".size()", ".front()",
and/or ".pop_front()/.push_back()", but my first inclination is that
this is already thread-safe?

My worry is that, say, a .push_back() starts on a deque with 1 node
in it. It sees the 1st node and modifies something in it to point to
the new node about to be added, and then the .pop_front() occurs while
that's happening. I have no idea how the queue is implemented
internally, so I'm unsure if this is a valid worry. Performance is very
important and I would rather have absolutely no blocking unless it's
needed, which is why I ask here :)

If Critical Sections are needed, would it just be for the
".pop_front()/.push_back()"? Or all the member functions I'm using
(.size()/.front())?

Thanks. Any information would be greatly appreciated. These are the
only two threads/functions accessing the queue. I currently
implemented it with locking on all just to be safe, but would like to
remove the locking if it is not needed, or fine-tune it.
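
Something like the sketch below is what I mean by "locking on all" (a
minimal illustration with made-up names, assuming Win32
CRITICAL_SECTION; the lock covers only the copy in and out, never the
processing):

#include <windows.h>
#include <cstddef>
#include <deque>

struct Message { char* Buffer; int Length; };

static std::deque<Message> q;
static CRITICAL_SECTION qLock;  // InitializeCriticalSection(&qLock) once at startup

// Thread 1: called after reading the TCP/IP buffer.
bool Enqueue(const Message& m, std::size_t maxSize)
{
    EnterCriticalSection(&qLock);
    bool ok = q.size() < maxSize;  // size check under the same lock
    if (ok)
        q.push_back(m);            // may reallocate deque internals, so it needs the lock
    LeaveCriticalSection(&qLock);
    return ok;                     // on success, signal the reading-thread event
}

// Thread 2: called after the event is signaled; loops while it returns true.
bool Dequeue(Message& out)
{
    EnterCriticalSection(&qLock);
    bool ok = !q.empty();
    if (ok) {
        out = q.front();           // copy out under the lock...
        q.pop_front();
    }
    LeaveCriticalSection(&qLock);
    return ok;                     // ...then process and delete[] out.Buffer unlocked
}

Popping before processing (rather than after, as in the numbered steps
above) keeps the long processing step outside the critical section, so
the pushing thread is never blocked for more than a few instructions.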
Aug 29 '08
On 31 Aug., 02:45, NvrBst <nvr...@gmail.com> wrote:
Thank you all for your information :) I think I have what I need now.
This is not correct, and your C# code will not work unless there is
more action involved when reading the Int32 size. The reason is that
even though the size gets written after the data, it might not be seen
in that order by another thread. Memory writes undergo a lot of steps
involving caches of different kinds, and the cache line with the size
might be written to memory before the cache line containing the data.

Only if there were, say, two threads popping. In my case I only have 1
thread popping, and 1 thread pushing. Even if the popping thread reads
0 elements (because, say, the size cache wasn't updated in time, but
there is really 1 element), that is safe; it'll simply wait until size
says 1 element. If it reads 1+ elements then there is definitely an
element to remove (since no other thread is removing items). The same
logic works for reading the size when there is a max you can't go over.

Thanks again for all the information, and enjoy the weekend ;)
No - you are not correct. Imagine a situation where the pushing thread
pushes the object to the deque and then increases the counter.
However, the size gets written through the cache whereas the object
stays (partly) in the cache. Now the reader will see that there is an
object in the deque, but in reality it was never written to a
location that allows the other thread to see it (it is on another
core), so it reads rubbish where the object should be.
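
To make that concrete: expressed with C++11 atomics (which postdate
this thread), the publish step needs release/acquire ordering, roughly
as in this sketch:

#include <atomic>

int payload = 0;                // stands in for the queued object
std::atomic<int> count(0);      // stands in for q.size()

void push_thread()
{
    payload = 42;                                // write the object first
    count.store(1, std::memory_order_release);   // publish: makes the write above visible
}

void pop_thread()
{
    if (count.load(std::memory_order_acquire) == 1) {
        int x = payload;   // guaranteed to be 42 only because of the acquire/release
        (void)x;           // pairing; with plain loads and stores the caches may
    }                      // reorder, exactly as described above
}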

/Peter
Aug 31 '08 #21
On Aug 31, 8:56 am, Paavo Helde <nob...@ebi.ee> wrote:
"Thomas J. Gritzan" <phygon_antis...@gmx.de> wrote:
Sorry, the code is correct. condition::wait will unlock the
mutex while waiting on the condition variable.
Yes, that's one thing I like about the boost::thread library:
it makes some kinds of errors impossible (like forgetting to
unlock the mutex before the wait, or forgetting to relock it
after the wait).
Except that that's not a property of the boost::thread library,
but the way conditions work in general. All Boost provides here
is a very low level (but portable) wrapper for the Posix
interface, plus RAII for the locks held at the application
level (which is, in itself, already a good thing).
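
The idiom in question, in miniature (a generic sketch with
hypothetical names, not code from this thread):

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>

boost::mutex mtx ;
boost::condition_variable cond ;
bool ready = false ;

void wait_for_ready()
{
    boost::unique_lock< boost::mutex > lock( mtx ) ;
    while ( ! ready ) {
        cond.wait( lock ) ;   // atomically releases mtx while blocked,
    }                         // and re-acquires it before wait() returns
}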

--
James Kanze (GABI Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Aug 31 '08 #22
On Aug 30, 7:06 pm, Jerry Coffin <jcof...@taeus.com> wrote:
In article <0bbce4f3-4ab7-47fd-8be1-7d1b39e7f...@n38g2000prl.googlegroups.com>,
nvr...@gmail.com says...
I've read a bit online seeing that two writes are not safe, which I
understand, but would 1 thread push()'ing and 1 thread pop()'ing be
thread-safe? Basically my situation is as follows:
Generally speaking, no, it's not safe.
My advice would be to avoid std::deque in such a situation --
in a multi-threaded situation, it places an undue burden on
the client code.
Avoid it, or wrap it? I use it regularly for communicating
between threads; my Queue class is based on it.
This is a case where it's quite reasonable to incorporate the
locking into the data structure itself to simplify the client
code (a lot).
For one example, I've used code like this under Windows for
quite a while:
template<class T, unsigned max = 256>
class queue {
    HANDLE space_avail;      // signaled => at least one slot empty
    HANDLE data_avail;       // signaled => at least one slot full
    CRITICAL_SECTION mutex;  // protects buffer, in_pos, out_pos

    T buffer[max];
    long in_pos, out_pos;
And if you replace buffer, in_pos and out_pos with
std::deque<T>, where's the problem?

[...]
Exception safety depends on assignment of T being nothrow, but
(IIRC) not much else is needed. This uses value semantics, so
if you're dealing with something where copying is expensive,
you're expected to use some sort of smart pointer to avoid
copying. Of course, a reference counted smart pointer will
usually have some locking of its own on incrementing and
decrementing the reference count, but that's a separate issue.
My own queues use std::auto_ptr at the interface. This imposes
the requirement that all of the contained objects be dynamically
allocated, and adds some clean-up code in the queue itself
(since you can't put the auto_ptr in the deque), but ensures
that once the message has been posted, the originating thread
won't continue to access it.

--
James Kanze (GABI Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Aug 31 '08 #23
No - you are not correct. Imagine a situation where the pushing thread
pushes the object to the deque and then increases the counter.
However, the size gets written through the cache whereas the object
stays (partly) in the cache. Now the reader will see that there is an
object in the deque, but in reality it was never written to a
location that allows the other thread to see it (it is on another
core), so it reads rubbish where the object should be.
I don't quite understand what you say (the "on another core" part), but
I'm pretty sure it is safe. The popping thread is only taking the
first element, and the pushing thread is appending to the end.

The size isn't incremented until after the object is written, so when
the popping thread reads 1 element, that element definitely exists (it
can't be in cache and not already written to the internal array).
Furthermore, there is a "version" number incremented internally
whenever a change is made; this forces the cache to re-validate an
element. If the size reads 0 then there can't be any problem, and if
the size reads 2+ then there can't be any problem with the first
element, so the size reading "1" should be the only possible error
case; and, as stated, the item is available before the size is 1, and
the version is incremented as well.

There's no way the popping thread can attempt to access the 1st element
when it actually isn't available, because it'd read the size as "0". I
think what you were talking about is an old cached _array[0] that had
already been popped; the "push thread" then added a new element and
incremented the size, and the pop thread took the stale cached
"_array[0]". Again, this situation can't happen, because the version is
incremented and any request for _array[0] would be re-validated when it
saw the version doesn't match.

There might be a possible problem if the capacity is reached and it
attempts to grow the internal array automatically; however, as long as
your push thread checks the capacity before adding, there shouldn't be
a problem.
Sep 1 '08 #24
In article <638bd1ab-a5e5-4567-aa53-90**********@k37g2000hsf.googlegroups.com>,
ja*********@gmail.com says...

[ ... ]
My advice would be to avoid std::deque in such a situation --
in a multi-threaded situation, it places an undue burden on
the client code.

Avoid it, or wrap it? I use it regularly for communicating
between threads; my Queue class is based on it.
I'd generally avoid it. Wrapping it works, but presents trade offs I
rarely find useful.

The problem with std::deque is that it can and will resize itself when
you add an item and there's not currently room to store that item. Doing
so, however, normally involves allocating a block of memory from the
free store. As such, this operation can be fairly slow. That translates
to a fairly substantial amount of time that's being spent just in
storing the data rather than processing the data. It only really makes
sense when your work items are sufficiently coarse-grained that the
dynamic allocation will be substantially faster than processing a single
work item.

I generally prefer to keep the processing sufficiently fine grained that
when/if the queue is full, it's likely to be faster to wait for an
existing item to be processed than to allocate more memory for the
queue. This devotes nearly all CPU time to processing the data instead
of allocating memory to store it.

Now, my own work with multiple threads has mostly been fairly easy to
break up into relatively small "chunks", and has been mostly quite
processor intensive. Those mean that 1) giving up processor time means
the consumer side can run faster, and 2) the time it would take to
allocate memory for the queue would exceed the time taken to remove (at
least) one item from the queue.

OTOH, I can certainly imagine situations where those weren't true. An
obvious example would be something like a network router. In this case,
the "consumer" side is basically sending out network packets. More CPU
time won't speed up network transmissions, and transmitting a single
network packet undoubtedly takes longer than allocating a block of
memory.
And if you replace buffer, in_pos and out_pos with
std::deque<T>, where's the problem?
There isn't necessarily a problem -- but I think the choice between the
two is one of some real trade offs rather than just a matter of whether
you manipulate the pointers into the queue yourself, or use some pre-
written code to do the job.

Of course, you can/could also use reserve() and size() on the deque to
manage it as a fixed-size container, but I don't think at that point
you're really gaining much by using a deque at all. OTOH, it might still
make sense when/if you had both fixed- and variable-sized queues, and
this allowed you to share most code between the two.

--
Later,
Jerry.

The universe is a figment of its own imagination.
Sep 1 '08 #25
On Sep 1, 2:34 am, NvrBst <nvr...@gmail.com> wrote:
No - you are not correct. Imagine a situation where the pushing thread
pushes the object to the deque and then increases the counter.
However, the size gets written through the cache whereas the object
stays (partly) in the cache. Now the reader will see that there is an
object in the deque, but in reality it was never written to a
location that allows the other thread to see it (it is on another
core), so it reads rubbish where the object should be.

I don't quite understand what you say (the "on another core" part), but
I'm pretty sure it is safe. The popping thread is only taking the
first element, and the pushing thread is appending to the end.
You're assuming that std::deque has its own internal mutex (or other
compiler trickery reacting to volatile) so that each individual member
function call has its own lock-unlock against that mutex. This is not
required by the C++ standard, but can be remedied by a template
wrapper class, at some noticeable cost in efficiency.
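
Such a wrapper might look like the sketch below (hypothetical code
using boost::mutex, in the spirit of C#'s Queue.Synchronized; none of
this is from the original posts):

#include <cstddef>
#include <deque>
#include <boost/thread/mutex.hpp>

template<typename T>
class locked_deque {
    std::deque<T> q;
    mutable boost::mutex m;
public:
    void push_back(const T& v) { boost::mutex::scoped_lock l(m); q.push_back(v); }
    void pop_front()           { boost::mutex::scoped_lock l(m); q.pop_front(); }
    T front() const            { boost::mutex::scoped_lock l(m); return q.front(); }
    std::size_t size() const   { boost::mutex::scoped_lock l(m); return q.size(); }
};

Note that front() must return a copy; handing out a reference would
escape the lock. Per-method locking still doesn't make a
size()-then-front() sequence atomic in general, but with exactly one
popping thread an observed size() >= 1 can only grow, which is the
point made at the end of this post.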

A quick check on Google suggests this is not true (MSDN Developer
Network search hit) even for a C# Queue; in C# you would have to use
the Queue.Synchronized method to obtain a wrapper that was moderately
thread-safe.

Without that hidden mutex, the problem sequence without
synchronization in C++ looks like this (with a size field as an
implementation detail that is incremented before the push actually
completes; translate the calls to C# as required):

1. push-thread cache: q.empty() | main RAM: q.empty() | pop-thread
cache: processing former last record
2. push-thread cache: 1==q.size(), main queue inconsistent | main RAM:
q.empty() | pop-thread cache: processing former last record
3. push-thread cache: 1==q.size(), main queue inconsistent | main RAM:
1==q.size(), main queue inconsistent | pop-thread cache: processing
former last record
4. push-thread cache: 1==q.size(), main queue inconsistent | main RAM:
1==q.size(), main queue inconsistent | pop-thread cache: wants to test
!q.empty(), chooses to refresh cache from main RAM
5. push-thread cache: 1==q.size(), main queue inconsistent | main RAM:
1==q.size(), main queue inconsistent | pop-thread cache: 1==q.size(),
main queue inconsistent; !q.empty() so it invokes q.front() --
UNDEFINED BEHAVIOR

Worst-case scenario leaves the queue in a permanently inconsistent
state from the push and pop both trying to execute at once. All of
the above should translate into C# fairly directly for an
unsynchronized Queue.

Without an internal-detail size field, 0 < q.size() will have
undefined behavior in the pop-thread (from the inconsistent state).
!q.empty() might be well-defined even then.

With a hidden mutex providing per-method synchronization, the sole
pop-thread is forced to wait until the push-thread has completed, and
testing q.empty() / 0 < q.size() works exactly as long as there is
exactly one pop-thread; no critical sections at all are needed.
Sep 1 '08 #26
On Sep 1, 11:45 am, zaim...@zaimoni.com wrote:
On Sep 1, 2:34 am, NvrBst <nvr...@gmail.com> wrote:
No - you are not correct. Imagine a situation where the
pushing thread pushes the object to the deque and then
increases the counter. However, the size gets written
through the cache whereas the object stays (partly) in the
cache. Now the reader will see that there is an object in
the deque, but in reality it was never written to a
location that allows the other thread to see it (it is on
another core), so it reads rubbish where the object should
be.
I don't quite understand what you say (the "on another core"
part), but I'm pretty sure it is safe. The popping thread
is only taking the first element, and the pushing thread is
appending to the end.
It's absolutely safe only if the std::deque has been specially
implemented so that each member function has "enough"
synchronization.
I don't think that that's possible, given the interface.

--
James Kanze (GABI Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Sep 1 '08 #27
On Sep 1, 6:49 pm, Jerry Coffin <jcof...@taeus.com> wrote:
In article <638bd1ab-a5e5-4567-aa53-9032bd487...@k37g2000hsf.googlegroups.com>,
james.ka...@gmail.com says...
[ ... ]
My advice would be to avoid std::deque in such a situation
-- in a multi-threaded situation, it places an undue
burden on the client code.
Avoid it, or wrap it? I use it regularly for communicating
between threads; my Queue class is based on it.
I'd generally avoid it. Wrapping it works, but presents trade
offs I rarely find useful.
I find the trade off of using tried and tested code, rather than
having to write it myself, useful. (Of course, if you already
had your basic queue working before STL came along...)
The problem with std::deque is that it can and will resize
itself when you add an item and there's not currently room to
store that item. Doing so, however, normally involves
allocating a block of memory from the free store. As such,
this operation can be fairly slow. That translates to a fairly
substantial amount of time that's being spent just in storing
the data rather than processing the data. It only really makes
sense when your work items are sufficiently coarse-grained
that the dynamic allocation will be substantially faster than
processing a single work item.
I suppose that that could be an issue in some cases. For the
most part, my queues never contain very many entries at one
time, although there is a lot of pushing and popping, and the
applications run for very long times (sometimes years without
stopping). Which means that the queue effectively reaches its
largest size very quickly, and then there is no more dynamic
allocation from it. (On the other hand, my messages tend to be
polymorphic, and dynamically allocated.)
I generally prefer to keep the processing sufficiently fine
grained that when/if the queue is full, it's likely to be
faster to wait for an existing item to be processed than to
allocate more memory for the queue. This devotes nearly all
CPU time to processing the data instead of allocating memory
to store it.
I'll admit that I've never had any performance problems with
deque in this regard, although I can easily imagine applications
where it might be the case.
Now, my own work with multiple threads has mostly been fairly
easy to break up into relatively small "chunks", and has been
mostly quite processor intensive. Those mean that 1) giving up
processor time means the consumer side can run faster, and 2)
the time it would take to allocate memory for queue would
exceed the time taken to remove (at least) one item from the
queue.
OTOH, I can certainly imagine situations where those weren't
true. An obvious example would be something like a network
router. In this case, the "consumer" side is basically sending
out network packets. More CPU time won't speed up network
transmissions, and transmitting a single network packet
undoubtedly takes longer than allocating a block of memory.
Yes. One of the advantages of forwarding to a queue, and
letting another process handle it, is that you can get back to
listening at the socket that much faster. And since the system
buffers tend to have a fairly small, fixed size...
And if you replace buffer, in_pos and out_pos with
std::deque<T>, where's the problem?
There isn't necessarily a problem -- but I think the choice
between the two is one of some real trade offs rather than
just a matter of whether you manipulate the pointers into the
queue yourself, or use some pre-written code to do the job.
Of course, you can/could also use reserve() and size() on the
deque to manage it as a fixed-size container, but I don't
think at that point you're really gaining much by using a
deque at all. OTOH, it might still make sense when/if you had
both fixed- and variable-sized queues, and this allowed you to
share most code between the two.
In the end, the main gain was that I didn't have to write any
code to manage the queue. And that other people, maintaining
the software, didn't have to understand code that I'd written.

--
James Kanze (GABI Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Sep 1 '08 #28
In article <e78577f0-da15-42c9-96ad-63e4bc2c7638@w7g2000hsa.googlegroups.com>,
ja*********@gmail.com says...

[ ... ]
I'd generally avoid it. Wrapping it works, but presents trade
offs I rarely find useful.

I find the trade off of using tried and tested code, rather than
having to write it myself, useful. (Of course, if you already
had your basic queue working before STL came along...)
The problem is that the part that's tried and tested is downright
trivial compared to what's not -- and at least from the looks of things
in this thread, using that tried and tested code makes it even more
difficult to get the hard part right, so I suspect it's a net loss.

...and yes, if memory serves, my queue code was originally written
under OS/2 1.2 or thereabouts (originally in C, which still shows to
some degree).

[ ... ]
I suppose that that could be an issue in some cases. For the
most part, my queues never contain very many entries at one
time, although there is a lot of pushing and popping, and the
applications run for very long times (sometimes years without
stopping). Which means that the queue effectively reaches its
largest size very quickly, and then there is no more dynamic
allocation from it.
In fairness, though I hadn't really thought about it previously, it's
likely that when I've profiled it, I've tended to use relatively short
runs, which will tend to place an undue emphasis on the initialization,
so it probably made less difference overall than it appeared to. In
any case, I'd agree that it's rarely likely to be a huge factor.

[ ... ]
In the end, the main gain was that I didn't have to write any
code to manage the queue. And that other people, maintaining
the software, didn't have to understand code that I'd written.
I suppose it might make a difference, but I have a hard time believing
that anybody who could understand the semaphore part would have to pause
for even a moment to understand the queue part. Duplicating it without
any fence-post errors might take a little longer, but even that's still
not exactly rocket science.

--
Later,
Jerry.

The universe is a figment of its own imagination.
Sep 3 '08 #29
On Sep 3, 3:00 am, Jerry Coffin <jcof...@taeus.com> wrote:
In article <e78577f0-da15-42c9-96ad-63e4bc2c7638@w7g2000hsa.googlegroups.com>,
james.ka...@gmail.com says...
[ ... ]
I'd generally avoid it. Wrapping it works, but presents
trade offs I rarely find useful.
I find the trade off of using tried and tested code, rather
than having to write it myself, useful. (Of course, if you
already had your basic queue working before STL came
along...)
The problem is that the part that's tried and tested is
downright trivial compared to what's not -- and at least from
the looks of things in this thread, using that tried and
tested code makes it even more difficult to get the hard part
right, so I suspect it's a net loss.
I don't see where the actual implementation of the queue would
affect the rest, unless you're doing some tricky optimizations
to make it lock-free. (std::deque is obviously not lock-free; you
need locks around the accesses to it.) And your code didn't
seem to be trying to implement a lock-free queue.

On the other hand, since I work under Unix, my queues used a
condition rather than semaphores, so the issues are somewhat
different, but basically, all I did was copy the code for the
wait and send from Butenhof, and stick an std::deque in as the
queue itself. You can't get much simpler.
...and yes, if memory serves, my queue code was originally
written under OS/2 1.2 or thereabouts (originally in C, which
still shows to some degree).
Which explains why you find the implementation of the queue so
simple :-). Most things are simple when you've been maintaining
them for so many years. My experience is that unless you
implement the queue trivially, using something like std::list,
getting the border conditions right isn't always that obvious
(but once you've got them right, the code is trivial).
[ ... ]
In the end, the main gain was that I didn't have to write any
code to manage the queue. And that other people, maintaining
the software, didn't have to understand code that I'd written.
I suppose it might make a difference, but I have a hard time
believing that anybody who could understand the semaphore part
would have to pause for even a moment to understand the queue
part. Duplicating it without any fence-post errors might take
a little longer, but even that's still not exactly rocket
science.
Well, my own implementation uses condition variables, rather
than semaphores. But it's really, really simple. FWIW
(modified to use Boost, rather than my home-built classes, and
with support for handling timeouts removed):

#include <cassert>
#include <memory>
#include <deque>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>

class QueueBase
{
public:
    QueueBase() ;
    ~QueueBase() ;

    void send( void* message ) ;
    void* receive() ;

    bool isEmpty() const ;

private:
    mutable boost::mutex mutex ;   // mutable so isEmpty() can lock it
    boost::condition_variable cond ;
    std::deque< void* > queue ;
} ;

template< typename T >
class Queue : public QueueBase
{
public:
    ~Queue()
    {
        while ( ! isEmpty() ) {
            receive() ;            // drains and deletes any leftovers
        }
    }

    void send( std::auto_ptr< T > message )
    {
        QueueBase::send( message.get() ) ;
        message.release() ;        // the queue now owns the object
    }

    std::auto_ptr< T > receive()
    {
        return std::auto_ptr< T >(
            static_cast< T* >( QueueBase::receive() ) ) ;
    }
} ;

QueueBase::QueueBase()
{
}

QueueBase::~QueueBase()
{
    assert( queue.empty() ) ;
}

void
QueueBase::send(
    void* message )
{
    boost::unique_lock< boost::mutex > lock( mutex ) ;
    queue.push_back( message ) ;
    cond.notify_one() ;
}

void*
QueueBase::receive()
{
    boost::unique_lock< boost::mutex > lock( mutex ) ;
    while ( queue.empty() ) {
        cond.wait( lock ) ;
    }
    void* result = queue.front() ;
    queue.pop_front() ;
    return result ;
}

bool
QueueBase::isEmpty() const
{
    boost::unique_lock< boost::mutex > lock( mutex ) ;
    return queue.empty() ;
}
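
Usage then looks something like this (a hypothetical producer/consumer
pair; MyMessage is a made-up type, not from the original post):

Queue< MyMessage > q ;

// producer thread: ownership passes to the queue on send
q.send( std::auto_ptr< MyMessage >( new MyMessage ) ) ;

// consumer thread: blocks on the condition until a message arrives,
// then owns (and eventually deletes) the message
std::auto_ptr< MyMessage > msg = q.receive() ;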

--
James Kanze (GABI Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Sep 3 '08 #30

