Note that I have added comp.programming.threads to this post.
Dave Stallard wrote:
Pardon if this is the wrong newsgroup for this question, and/or if this
question is naive.
I have a multi-threaded Windows application in which certain
variables/object fields are shared: one thread may write the variable,
and the other thread read it. The variables in question may have int or
int* types. Question: Is it safe to do this? Or is it possible a read
that happens at the same time as a write may retrieve a scrambled value,
in which, say, two of the bytes are from the old value and two of the
bytes from the new value?
comp.programming.threads is probably the right place to ask such questions.
Sharing variables like this depends on "visibility". Modern CPUs
depend heavily on caches and out-of-order execution for performance.
Normally these effects are undetectable in a single-threaded program,
but they become critical when you're dealing with multi-threaded code.
So, once upon a time, if you wrote:
char ring_buffer[1024];
int wloc;
int rloc;
....writer...
ring_buffer[wloc] = ch; // S1
++wloc; // S2
....reader...
if ( wloc != rloc )
{
val = ring_buffer[rloc]; //S3
++rloc; //S4
}
you could be guaranteed that the effects of S1 would be visible to the
other thread before S2. This is not the case now. S2 MAY become visible
before S1, which means that S3 may not see the value in "ch".
There are two reasons this may happen: the hardware (CPU) may reorder
memory operations, and the optimizing compiler may reorder them as well.
The volatile keyword deals with the compiler side; the hardware side
needs a memory fence, which is totally hardware dependent - no standard
exists yet.
First thing to do is tell the compiler to not mess with the order:
volatile char ring_buffer[1024];
volatile int wloc;
volatile int rloc;
....writer...
ring_buffer[wloc] = ch; // S1
FENCE(); // make sure that S1 is visible
++wloc; // S2
....reader...
FENCE(); // make sure we see everything first
if ( wloc != rloc )
{
val = ring_buffer[rloc]; //S3
++rloc; //S4
}
An excellent paper on the issues involved in standardizing a solution:
http://www.hpl.hp.com/techreports/20...-2004-209.html
Wikipedia gives an excellent high level perspective.
http://en.wikipedia.org/wiki/Memory_barrier
So, most code uses mutexes, and these generally guarantee a memory
barrier; if you use mutexes, you don't run into sequencing issues.
However, they can be a significant performance hit. Having said that,
many and quite probably most applications will never need to worry about
the performance hit of mutexes.
In Java I know that such atomicity is guaranteed for ints, but not for
longs or doubles. So, perhaps I shouldn't assume it in a less
well-standardized environment.
The application, btw, is a circular buffer with a read pointer and a
write pointer, which are moved forward by their respective threads. The
read thread needs to check to make sure that the read pointer is not on
the write pointer before reading. Using mutexes right now, but I worry
about their overhead...
Dave