When is "volatile" used instead of "lock" ?


When is it appropriate to use "volatile" keyword? The docs simply
state:

"
The volatile modifier is usually used for a field that is accessed by
multiple threads without using the lock Statement (C# Reference)
statement to serialize access.
"

But when is it better to use "volatile" instead of "lock" ?

Thanks,

Sam

------------------------------------------------------------
We're hiring! B-Line Medical is seeking .NET
Developers for exciting positions in medical product
development in MD/DC. Work with a variety of technologies
in a relaxed team environment. See ads on Dice.com.
May 21 '07 #1
On May 21, 3:35 pm, Samuel R. Neff <samueln...@nomail.com> wrote:
When is it appropriate to use "volatile" keyword? The docs simply
state:
<snip>
You can also use the System.Threading.Interlocked class, which maintains
volatile semantics.

See also: http://www.albahari.com/threading/part4.html
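
For instance, a minimal sketch (hypothetical counter field) where every
thread goes through Interlocked for both the write and the read:

using System.Threading;

class Counter
{
    // hypothetical shared field; all threads must access it via Interlocked
    private static int hits;

    public static void Record()
    {
        Interlocked.Increment(ref hits);   // atomic read-modify-write
    }

    public static int Read()
    {
        // reading through CompareExchange keeps the read side Interlocked too
        return Interlocked.CompareExchange(ref hits, 0, 0);
    }
}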

May 21 '07 #2
"Samuel R. Neff" <sa********@nomail.comschrieb im Newsbeitrag
news:ec********************************@4ax.com...
>
When is it appropriate to use "volatile" keyword? The docs simply
state:

"
The volatile modifier is usually used for a field that is accessed by
multiple threads without using the lock Statement (C# Reference)
statement to serialize access.
"
For a volatile field, the reordering of memory accesses by the optimizer is
restricted.
A write to a volatile field is always done after all other memory accesses
which precede it in the instruction sequence.
A read from a volatile field is always done before all other memory accesses
which occur after it in the instruction sequence.

A volatile field is a simple way to flag that memory manipulations are over.

Following is an example from the specs:

using System;
using System.Threading;

class Test
{
    public static int result;
    public static volatile bool finished;

    static void Thread2()
    {
        result = 143;
        finished = true;
    }

    static void Main()
    {
        finished = false;
        // Run Thread2() in a new thread
        new Thread(new ThreadStart(Thread2)).Start();
        // Wait for Thread2 to signal that it has a result by setting
        // finished to true.
        for (;;)
        {
            if (finished)
            {
                Console.WriteLine("result = {0}", result);
                return;
            }
        }
    }
}

Since finished is volatile, in method Thread2 the write to result will
always occur before the write to finished, and in method Main the read from
finished will always occur before the read from result, so the read from
result in Main can't occur before the write in Thread2.

HTH

Christof
May 21 '07 #3
On May 21, 10:35 am, Samuel R. Neff <samueln...@nomail.com> wrote:
When is it appropriate to use "volatile" keyword? The docs simply
state:
Often, if just one thread is writing to the object (and other
threads are just reading it), you can get away with using just volatile.

Generally, the shared object would need to be an atomic value, so the
reader may see it suddenly change from state A to state B, but would
never see it half-way between A and B.
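
A sketch of that single-writer pattern (hypothetical names): one thread
publishes an immutable snapshot through a volatile reference, so readers see
either the old or the new object, never a torn mixture:

class Config
{
    public readonly string Host;
    public readonly int Port;
    public Config(string host, int port) { Host = host; Port = port; }
}

class ConfigHolder
{
    // the single writer swaps this reference; reference writes are atomic
    private static volatile Config current = new Config("localhost", 80);

    public static void Reload(string host, int port)
    {
        current = new Config(host, port);   // publish a fresh snapshot
    }

    public static Config Current
    {
        get { return current; }   // volatile read: old or new, never half of each
    }
}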

May 21 '07 #4
be************@gmail.com <be************@gmail.com> wrote:
You can also use the System.Threading.Interlocked class, which maintains
volatile semantics.

See also: http://www.albahari.com/threading/part4.html
But only if you use it for both the writing *and* the reading, which
isn't terribly obvious from the docs.
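
A small illustration of that caveat (hypothetical field): for a 64-bit
counter on a 32-bit platform a plain read can be torn, so the read side
needs Interlocked as well as the write side:

using System.Threading;

class Ticks
{
    private static long total;   // hypothetical 64-bit shared counter

    public static void Add(long n)
    {
        Interlocked.Add(ref total, n);   // atomic write side
    }

    public static long Snapshot()
    {
        // a plain read of total could be torn on a 32-bit CPU;
        // Interlocked.Read performs an atomic read
        return Interlocked.Read(ref total);
    }
}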

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
May 21 '07 #5
On May 21, 10:58 am, "Christof Nordiek" <c...@nospam.de> wrote:
For a volatile field, the reordering of memory accesses by the optimizer is
restricted.
<snip>
Since finished is volatile, in method Thread2 the write to result will
always occur before the write to finished, and in method Main the read from
finished will always occur before the read from result, so the read from
result in Main can't occur before the write in Thread2.
One other important behavior that your example demonstrates is that writes
to finished are guaranteed to become visible to other threads. That
prevents an infinite loop in Main().
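
For contrast, a minimal sketch of the failure mode (hypothetical, with the
volatile modifier removed): the JIT is free to hoist the read out of the
loop, so the waiting thread may never observe the update:

using System.Threading;

class Spin
{
    static bool finished;   // note: NOT volatile

    static void Main()
    {
        new Thread(delegate() { finished = true; }).Start();
        while (!finished) { }   // read may be hoisted; can loop forever
    }
}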

Brian

May 21 '07 #6
"Samuel R. Neff" <sa********@nomail.comwrote:
When is it appropriate to use "volatile" keyword? The docs simply
state:
"The volatile modifier is usually used for a field that is accessed by
multiple threads without using the lock Statement (C# Reference)
statement to serialize access. "

But when is it better to use "volatile" instead of "lock" ?
I would recommend using locks and properties rather than volatile variables
or Interlocked methods.

Locking is easier and more straightforward, and has fewer subtle issues,
than the other two approaches.
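
A minimal sketch of that style (hypothetical class), with every access to
the shared field funneled through one lock:

class Account
{
    private readonly object sync = new object();
    private decimal balance;   // hypothetical shared state

    public decimal Balance
    {
        get { lock (sync) { return balance; } }
    }

    public void Deposit(decimal amount)
    {
        lock (sync)
        {
            balance += amount;   // read-modify-write is atomic under the lock
        }
    }
}

Note the lock also covers the decimal field itself, whose reads and writes
are not atomic on their own.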

--
Chris Mullins, MCSD.NET, MCPD:Enterprise, Microsoft C# MVP
http://www.coversant.com/blogs/cmullins
May 21 '07 #7

<be************@gmail.com> wrote in message
news:11*********************@z28g2000prd.googlegroups.com...
On May 21, 3:35 pm, Samuel R. Neff <samueln...@nomail.com> wrote:
<snip>

You can also use the System.Threading.Interlocked class, which maintains
volatile semantics.
You should use volatile and Interlocked together, neither fully replaces the
other.

May 22 '07 #8
"Ben Voigt" <rb*@nospam.nospamwrote in message
news:ul**************@TK2MSFTNGP06.phx.gbl...
>
<be************@gmail.comwrote in message
news:11*********************@z28g2000prd.googlegro ups.com...
>On May 21, 3:35 pm, Samuel R. Neff <samueln...@nomail.comwrote:
>>When is it appropriate to use "volatile" keyword? The docs simply
state:

"
The volatile modifier is usually used for a field that is accessed by
multiple threads without using the lock Statement (C# Reference)
statement to serialize access.
"

But when is it better to use "volatile" instead of "lock" ?

Thanks,

Sam

------------------------------------------------------------
We're hiring! B-Line Medical is seeking .NET
Developers for exciting positions in medical product
development in MD/DC. Work with a variety of technologies
in a relaxed team environment. See ads on Dice.com.

You can also the System.Threading.Interlocked class which maintains
volatile semantics.

You should use volatile and Interlocked together, neither fully replaces
the other.
Not necessarily; there is no need for volatile as long as you use Interlocked
consistently across all threads in the process. This means that once you
access a shared variable using Interlocked, all threads should use
Interlocked.

Willy.

May 23 '07 #9

"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:1B**********************************@microsof t.com...
"Ben Voigt" <rb*@nospam.nospamwrote in message
news:ul**************@TK2MSFTNGP06.phx.gbl...
>>
<be************@gmail.comwrote in message
news:11*********************@z28g2000prd.googlegr oups.com...
>>On May 21, 3:35 pm, Samuel R. Neff <samueln...@nomail.comwrote:
When is it appropriate to use "volatile" keyword? The docs simply
state:

"
The volatile modifier is usually used for a field that is accessed by
multiple threads without using the lock Statement (C# Reference)
statement to serialize access.
"

But when is it better to use "volatile" instead of "lock" ?

Thanks,

Sam

------------------------------------------------------------
We're hiring! B-Line Medical is seeking .NET
Developers for exciting positions in medical product
development in MD/DC. Work with a variety of technologies
in a relaxed team environment. See ads on Dice.com.

You can also the System.Threading.Interlocked class which maintains
volatile semantics.

You should use volatile and Interlocked together, neither fully replaces
the other.

Not necessarily, there is no need for volatile, as long you Interlock
consistently across all threads in the process. This means that once you
access a shared variable using Interlock, all threads should use
Interlock.
I don't think so, actually. Without volatile semantics, the compiler is
free to cache the value of any parameter, including in/out parameters. Say
you are calling an Interlocked method in a loop. If the variable is not
volatile, the compiler can actually call Interlocked on a local copy, and
then write the value to the real variable once, at the end of the loop (and
worse, it can do so in a non-atomic way). Anything that maintains correct
operation from the perspective of the calling thread is permissible for
non-volatile variable access. Why would a compiler do this? For optimal
use of cache. By using a local copy of a variable passed byref, locality of
reference is improved, and additionally, a thread's stack (almost) never
incurs cache coherency costs.

Note that this is not a problem for pass-by-pointer, which must use the true
address of the referenced variable in order to enable pointer arithmetic.
But pointer arithmetic isn't allowed for tracking handles, a handle is an
opaque value anyway.

For lockless data structures, always use volatile. And then stick that
volatile variable close in memory to what it is protecting, because CPU
cache has to load and flush an entire cache line at once, and volatile write
semantics require flushing all pending writes.

May 23 '07 #10
Ben Voigt <rb*@nospam.nospam> wrote:
Not necessarily; there is no need for volatile as long as you use Interlocked
consistently across all threads in the process. This means that once you
access a shared variable using Interlocked, all threads should use
Interlocked.

I don't think so, actually. Without volatile semantics, the compiler is
free to cache the value of any parameter, including in/out parameters. Say
you are calling an Interlocked method in a loop. If the variable is not
volatile, the compiler can actually call Interlocked on a local copy, and
then write the value to the real variable once, at the end of the loop (and
worse, it can do so in a non-atomic way).
No - the CLI spec *particularly* mentions Interlocked operations, and
that they perform implicit acquire/release operations. In other words,
the JIT can't move stuff around in this particular case. Interlocked
would be pretty pointless without this.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
May 23 '07 #11
"Ben Voigt" <rb*@nospam.nospamwrote in message
news:uX**************@TK2MSFTNGP03.phx.gbl...
>
"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:1B**********************************@microsof t.com...
>"Ben Voigt" <rb*@nospam.nospamwrote in message
news:ul**************@TK2MSFTNGP06.phx.gbl...
>>>
<be************@gmail.comwrote in message
news:11*********************@z28g2000prd.googleg roups.com...
On May 21, 3:35 pm, Samuel R. Neff <samueln...@nomail.comwrote:
When is it appropriate to use "volatile" keyword? The docs simply
state:
>
"
The volatile modifier is usually used for a field that is accessed by
multiple threads without using the lock Statement (C# Reference)
statement to serialize access.
"
>
But when is it better to use "volatile" instead of "lock" ?
>
Thanks,
>
Sam
>
------------------------------------------------------------
We're hiring! B-Line Medical is seeking .NET
Developers for exciting positions in medical product
development in MD/DC. Work with a variety of technologies
in a relaxed team environment. See ads on Dice.com.

You can also the System.Threading.Interlocked class which maintains
volatile semantics.

You should use volatile and Interlocked together, neither fully replaces
the other.

Not necessarily, there is no need for volatile, as long you Interlock
consistently across all threads in the process. This means that once you
access a shared variable using Interlock, all threads should use
Interlock.

I don't think so, actually. Without volatile semantics, the compiler is
free to cache the value of any parameter, including in/out parameters.
Say you are calling an Interlocked method in a loop. If the variable is
not volatile, the compiler can actually call Interlocked on a local copy,
and then write the value to the real variable once, at the end of the loop
(and worse, it can do so in a non-atomic way). Anything that maintains
correct operation from the perspective of the calling thread is
permissible for non-volatile variable access. Why would a compiler do
this? For optimal use of cache. By using a local copy of a variable
passed byref, locality of reference is improved, and additionally, a
thread's stack (almost) never incurs cache coherency costs.

Note that this is not a problem for pass-by-pointer, which must use the
true address of the referenced variable in order to enable pointer
arithmetic. But pointer arithmetic isn't allowed for tracking handles, a
handle is an opaque value anyway.

For lockless data structures, always use volatile. And then stick that
volatile variable close in memory to what it is protecting, because CPU
cache has to load and flush an entire cache line at once, and volatile
write semantics require flushing all pending writes.
>>
Willy.



No, not at all. Interlocked operations imply a full fence; that is, reads
have acquire and writes have release semantics. That means that the JIT may
not register these variables nor store them locally, and cannot move stuff
around them.
Think of this: what would be the use of Interlocked operations in languages
that don't support volatile (like VB.NET) or good old C/C++ (except VC7 and
up)?
I also don't agree with your statement that you should *always* use volatile
in lock-free or low-lock scenarios. IMO, you should almost never use
volatile, unless you perfectly understand the semantics of the memory model
of the CLR/CLI (ECMA differs from V1.X differs from V2, for instance) and the
memory model of the CPU (IA32 vs. IA64). Over the last year I was involved in
the resolution of a number of nasty bugs, all of them the result of people
trying to out-smart the system by applying lock-free or low-lock techniques
using volatile; since then, whenever I see volatile I get very suspicious,
really...

Willy.
May 23 '07 #12
Willy Denoyette [MVP] wrote:
I also don't agree with your statement that you should *always* use volatile
in lock-free or low-lock scenarios.
As far as I can see from the rest of your post, I think you've made a
mis-statement here. I think what you mean to say is that you shouldn't
use lock-free or low-locking unless there's no alternative, not that
volatile shouldn't be used - because volatile is usually very necessary
in order to get memory barriers right in those circumstances.
>IMO, you should almost never use volatile, unless you perfectly understand
the semantics of the memory model of the CLR/CLI and the memory model of the
CPU. <snip> ...since then, whenever I see volatile I get very suspicious,
really...
I agree with you about seeing 'volatile' and it raising red flags, but
the cure is to use proper locking if possible, and careful reasoning
(rather than shotgun 'volatile' and guesswork), rather than simply
omitting 'volatile'.

-- Barry

--
http://barrkel.blogspot.com/
May 23 '07 #13
"Barry Kelly" <ba***********@gmail.comwrote in message
news:be********************************@4ax.com...
Willy Denoyette [MVP] wrote:
>I also don't agree with your statement that you should *always* use
volatile
in lock free or low lock scenario's.

As far as I can see from the rest of your post, I think you've made a
mis-statement here. I think what you mean to say is that you shouldn't
use lock-free or low-locking unless there's no alternative, not that
volatile shouldn't be used - because volatile is usually very necessary
in order to get memory barriers right in those circumstances.
>IMO, you should almost never use
volatile, unless you perfectly understand the semantics of the memory
model
of the CLR/CLI (ECMA differs from V1.X differs from V2 for instance) and
the
memory model of the CPU (IA32 vs. IA64). The last year I was involved in
the
resolution of a number of nasty bugs , all of them where the result of
people trying to out-smart the system by applying lock free or low lock
techniques using volatile, since then whenever I see volatile I'm getting
very suspicious, really.......

I agree with you about seeing 'volatile' and it raising red flags, but
the cure is to use proper locking if possible, and careful reasoning
(rather than shotgun 'volatile' and guesswork), rather than simply
omitting 'volatile'.
Well, I wasn't suggesting omitting 'volatile'; sorry if I gave that
impression. What I meant was that you should be very careful when looking for
lock-free or low-lock alternatives, and if you do, that you should not
"always" use volatile.
Note that there are alternatives to volatile fields: Thread.MemoryBarrier,
Thread.VolatileRead, Thread.VolatileWrite and the Interlocked APIs. These
alternatives have IMO the (slight) advantage that they "force" developers to
reason about their usage, something which is less the case (from what I've
learned when talking with other devs across several teams) with volatile.
But here also, you need to be very careful (the red flag should be raised
whenever you see any of these too). You need to reason about their usage, and
that's the major problem when writing threaded code: even experienced
developers have a hard time reasoning about multithreading using locks.
Programming models that require reasoning about how and when to use explicit
fences or barriers are IMO too difficult, even for experts, to use reliably
in mainstream computing, and that is what .NET is all about, isn't it?

Willy.
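
As an illustration of those alternatives, a minimal sketch (hypothetical
flag field) using Thread.VolatileWrite/Thread.VolatileRead on an ordinary
int instead of the volatile modifier:

using System.Threading;

class Worker
{
    private static int done;   // hypothetical flag: 0 = running, 1 = finished

    static void Producer()
    {
        // ... produce results ...
        Thread.VolatileWrite(ref done, 1);   // publish with release semantics
    }

    static void Consumer()
    {
        while (Thread.VolatileRead(ref done) == 0)   // acquire: re-read each pass
        {
            Thread.Sleep(1);
        }
    }
}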

May 23 '07 #14

"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:BF**********************************@microsof t.com...
"Ben Voigt" <rb*@nospam.nospamwrote in message
news:uX**************@TK2MSFTNGP03.phx.gbl...
>>
"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:1B**********************************@microso ft.com...
>>"Ben Voigt" <rb*@nospam.nospamwrote in message
news:ul**************@TK2MSFTNGP06.phx.gbl...

<be************@gmail.comwrote in message
news:11*********************@z28g2000prd.google groups.com...
On May 21, 3:35 pm, Samuel R. Neff <samueln...@nomail.comwrote:
>When is it appropriate to use "volatile" keyword? The docs simply
>state:
>>
>"
>The volatile modifier is usually used for a field that is accessed by
>multiple threads without using the lock Statement (C# Reference)
>statement to serialize access.
>"
>>
>But when is it better to use "volatile" instead of "lock" ?
>>
>Thanks,
>>
>Sam
>>
>------------------------------------------------------------
>We're hiring! B-Line Medical is seeking .NET
>Developers for exciting positions in medical product
>development in MD/DC. Work with a variety of technologies
>in a relaxed team environment. See ads on Dice.com.
>
You can also the System.Threading.Interlocked class which maintains
volatile semantics.

You should use volatile and Interlocked together, neither fully
replaces the other.
Not necessarily, there is no need for volatile, as long you Interlock
consistently across all threads in the process. This means that once you
access a shared variable using Interlock, all threads should use
Interlock.

I don't think so, actually. Without volatile semantics, the compiler is
free to cache the value of any parameter, including in/out parameters.
Say you are calling an Interlocked method in a loop. If the variable is
not volatile, the compiler can actually call Interlocked on a local copy,
and then write the value to the real variable once, at the end of the
loop (and worse, it can do so in a non-atomic way). Anything that
maintains correct operation from the perspective of the calling thread is
permissible for non-volatile variable access. Why would a compiler do
this? For optimal use of cache. By using a local copy of a variable
passed byref, locality of reference is improved, and additionally, a
thread's stack (almost) never incurs cache coherency costs.

Note that this is not a problem for pass-by-pointer, which must use the
true address of the referenced variable in order to enable pointer
arithmetic. But pointer arithmetic isn't allowed for tracking handles, a
handle is an opaque value anyway.

For lockless data structures, always use volatile. And then stick that
volatile variable close in memory to what it is protecting, because CPU
cache has to load and flush an entire cache line at once, and volatile
write semantics require flushing all pending writes.
>>>
Willy.


No, not at all. Interlocked operations imply a full fence, that is, reads
have acquire and writes have release semantics. That means that the JIT
may not register these variables nor store them locally and cannot move
stuff around them.
Let's look at the Win32 declaration for an Interlocked function:

LONG InterlockedExchange(
    LONG volatile* Target,
    LONG Value
);

Clearly, Target is intended to be the address of a volatile variable.
Sure, you can pass a non-volatile pointer, and there is an implicit
conversion, but if you do, *the variable will be treated as volatile only
inside InterlockedExchange*. The compiler can still do anything outside
InterlockedExchange, because it is dealing with a non-volatile variable.
And it can't possibly change behavior when InterlockedExchange is called,
because the call could be made from a different library, potentially not yet
loaded.

Consider this:

/* compilation unit one */
void DoIt(LONG *target)
{
    LONG value = /* some long calculation here */;
    if (value != InterlockedExchange(target, value))
    {
        /* some complex operation here */
    }
}

/* compilation unit two */

extern void DoIt(LONG * target);
extern LONG shared;

void outer(void)
{
    for( int i = 0; i < 1000; i++ )
    {
        DoIt(&shared);
    }
}

Now, clearly, the compiler has no way of telling that DoIt uses Interlocked
access, since DoIt didn't declare volatile semantics on the pointer passed
in. So the compiler can, if desired, transform outer thusly:

void outer(void)
{
    LONG goodLocalityOfReference = shared;
    for( int i = 0; i < 1000; i++ )
    {
        DoIt(&goodLocalityOfReference);
    }
    shared = goodLocalityOfReference;
}

Except for one thing. In native code, pointers have values that can be
compared, subtracted, etc. So the compiler has to honestly pass the address
of shared. In managed code, with tracking handles, the compiler doesn't
have to preserve the address of the variable (that would, after all, defeat
compacting garbage collection). Oh, sure, the JIT has a lot more
information about what is being called than a native compiler does; it
almost gets rid of separate compilation units... but not quite. With
dynamically loaded assemblies and reflection in the mix, it is just as
helpless as a "compile-time" compiler.

I'm fairly sure that the current .NET runtime doesn't actually do any such
optimization as I've described. But I wouldn't bet against such things
being added in the future, when NUMA architectures become so widespread that
the compiler has to optimize for them.

Be safe, use volatile on every variable you want to act volatile, which
includes every variable passed to Interlocked.
Think of this: what would be the use of Interlocked operations when used in
languages that don't support volatile (like VB.NET) or good old C/C++
(except VC7 and up)?
VC++, all versions, and all other PC compilers that I'm aware of (as in, not
embedded), support volatile to the extent needed to invoke an interlocked
operation. That is, the real variable is always accessed at the time
specified by the compiler. The memory fences are provided by the
implementation of Interlocked*, independent of the compiler version.
I also don't agree with your statement that you should *always* use
volatile in lock-free or low-lock scenarios. IMO, you should almost never
use volatile, unless you perfectly understand the semantics of the memory
model of the CLR/CLI and the memory model of the CPU. <snip> ...since then,
whenever I see volatile I get very suspicious, really...
You are claiming that you should almost never use lock-free techniques, and
thus volatile should be rare. This hardly contradicts my statement that
volatile should always be used in lock-free programming.
May 24 '07 #15
"Ben Voigt" <rb*@nospam.nospamwrote in message
news:%2****************@TK2MSFTNGP06.phx.gbl...
>
"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:BF**********************************@microsof t.com...
>"Ben Voigt" <rb*@nospam.nospamwrote in message
news:uX**************@TK2MSFTNGP03.phx.gbl...
>>>
"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:1B**********************************@micros oft.com...
"Ben Voigt" <rb*@nospam.nospamwrote in message
news:ul**************@TK2MSFTNGP06.phx.gbl...
>
<be************@gmail.comwrote in message
news:11*********************@z28g2000prd.googl egroups.com...
>On May 21, 3:35 pm, Samuel R. Neff <samueln...@nomail.comwrote:
>>When is it appropriate to use "volatile" keyword? The docs simply
>>state:
>>>
>>"
>>The volatile modifier is usually used for a field that is accessed
>>by
>>multiple threads without using the lock Statement (C# Reference)
>>statement to serialize access.
>>"
>>>
>>But when is it better to use "volatile" instead of "lock" ?
>>>
>>Thanks,
>>>
>>Sam
>>>
>>------------------------------------------------------------
>>We're hiring! B-Line Medical is seeking .NET
>>Developers for exciting positions in medical product
>>development in MD/DC. Work with a variety of technologies
>>in a relaxed team environment. See ads on Dice.com.
>>
>You can also the System.Threading.Interlocked class which maintains
>volatile semantics.
>
You should use volatile and Interlocked together, neither fully
replaces the other.
>

Not necessarily, there is no need for volatile, as long you Interlock
consistently across all threads in the process. This means that once
you access a shared variable using Interlock, all threads should use
Interlock.

I don't think so, actually. Without volatile semantics, the compiler is
free to cache the value of any parameter, including in/out parameters.
Say you are calling an Interlocked method in a loop. If the variable is
not volatile, the compiler can actually call Interlocked on a local
copy, and then write the value to the real variable once, at the end of
the loop (and worse, it can do so in a non-atomic way). Anything that
maintains correct operation from the perspective of the calling thread
is permissible for non-volatile variable access. Why would a compiler
do this? For optimal use of cache. By using a local copy of a variable
passed byref, locality of reference is improved, and additionally, a
thread's stack (almost) never incurs cache coherency costs.

Note that this is not a problem for pass-by-pointer, which must use the
true address of the referenced variable in order to enable pointer
arithmetic. But pointer arithmetic isn't allowed for tracking handles, a
handle is an opaque value anyway.

For lockless data structures, always use volatile. And then stick that
volatile variable close in memory to what it is protecting, because CPU
cache has to load and flush an entire cache line at once, and volatile
write semantics require flushing all pending writes.
Willy.

No, not at all. Interlocked operations imply a full fence, that is, reads
have acquire and writes have release semantics. That means that the JIT
may not register these variables nor store them locally and cannot move
stuff around them.

Let's look at the Win32 declaration for an Interlocked function:

LONG InterlockedExchange(
    LONG volatile* Target,
    LONG Value
);

Clearly, Target is intended to be the address of a volatile variable.
Sure, you can pass a non-volatile pointer, and there is an implicit
conversion, but if you do, *the variable will be treated as volatile only
inside InterlockedExchange*. The compiler can still do anything outside
InterlockedExchange, because it is dealing with a non-volatile variable.
Sure, but this was not my point. The point is that Interlocked operations
imply barriers, full or not, and "volatile" implies barriers too, so they
both imply barriers, but they serve different purposes. One does not exclude
the other, but that doesn't mean they should always be used in tandem; it all
depends on what you want to achieve in your code and what guarantees you
want. Anyway, the docs do not impose it: the C# docs on Interlocked don't
even mention volatile, and the Win32 docs (Interlocked APIs) don't spend a
word on the volatile argument. (Note that the volatile was added to the
signature after NT4 SP1.)

And, it can't possibly change behavior when InterlockedExchange is called,
because the call could be made from a different library, potentially not
yet loaded.

Sorry, but you are mixing native code and managed code semantics. What I
mean is that the semantics of the C (native) volatile are not the same as
the semantics of C# 'volatile'. So when I referred to C++ supporting
"volatile" I was referring to the managed dialects (VC7.x and VC8), whose
volatile semantics are obviously the same as all other languages'.
I don't want to discuss the semantics of volatile in standard C/C++ here;
they are so imprecise that IMO it would lead to an endless discussion, not
relevant to C#.
I also don't want to discuss the semantics of Win32 Interlocked: the Win32
Interlocked APIs do accept pointers to volatile items, while .NET does
accept "volatile pointers" (in unsafe context) as arguments of a method
call, but treats the item as non-volatile. Also, C# will issue a warning
when passing a volatile field by ref (as the Interlocked operations
require); that means the field itself is treated as volatile, but the
by-ref access to it is not.

Consider this:
<snip>
Be safe, use volatile on every variable you want to act volatile, which
includes every variable passed to Interlocked.
>Think of this: what would be the use of Interlocked operations in languages
that don't support volatile (like VB.NET) or good old C/C++ (except VC7 and
up)?

VC++, all versions, and all other PC compilers that I'm aware of (as in,
not embedded), support volatile to the extent needed to invoke an
interlocked operation. That is, the real variable is always accessed at
the time specified by the compiler. The memory fences are provided by the
implementation of Interlocked*, independent of the compiler version.
Where in the docs (MSDN Platform SDK, etc.) do they state that Interlocked
should always be used on volatile items?

>I also don't agree with your statement that you should *always* use
volatile in lock-free or low-lock scenarios. <snip> ...since then, whenever
I see volatile I get very suspicious, really...

You are claiming that you should almost never use lock-free techniques,
and thus volatile should be rare. This hardly contradicts my statement
that volatile should always be used in lock-free programming.
Kind of. I'm claiming that you should rarely use lock-free techniques when
using C# in mainstream applications. I've seen too many people trying to
implement lock-free code, and if you ask "why", the answer is mostly
"performance"; and if you ask whether they measured their "locked"
implementation, the answer is mostly "well, I have no 'locked'
implementation". This is what I call "premature optimization" without any
guarantees, other than probably producing unreliable code, and reliability
is (IMO) more important than performance.
IMO the use of volatile should be rare in the sense that you'd better use
locks and only use volatile for the most simple cases (which doesn't imply
'rare'), for instance when you need to guarantee that all possible observers
of a field (of a type accepted by volatile) see the same value when that
value has been written to by another observer.
Remember, "volatile" is something taken care of by the JIT; all it does is
eliminate some of the possible optimizations, like (but not restricted to):
- volatile items cannot be registered...
- multiple stores cannot be suppressed...
- re-ordering is restricted.
- ...
But keep in mind that 'volatile' suppresses optimizations for all possible
accesses, even when not subject to multiple observers (threads), and that
volatile field accesses can still move; some people think they can't...

Willy.


May 24 '07 #16
Sorry, coming in late, but there are some poor implications with respect to
"volatile" and "lock" in this thread (other statements, like "..there is no
need for volatile [when] you Interlock consistently across all threads in the
process.", are valid).

"lock" and "volatile" are two different things. You may not always need
"lock" with a type that can be declared volatile, but you should always use
volatile with a member that is accessed by multiple threads (an optimization
would be that you wouldn't need "volatile" if Interlocked were always used
with the member in question, if applicable, as has been noted). For example,
why would anyone assume that the line commented with "// *" was thread-safe
simply because "i" was declared with "volatile":

volatile int i;
static Random random = new Random();

static int Transmogrify(int value)
{
    return value * random.Next();
}

void Method()
{
    i = Transmogrify(i); // *
}

"volatile" doesn't make a member thread-safe, the above operation still
requires at least two instructions (likely four), which are entirely likely
to be separated by preemption to another thread that modifies i.
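
As an illustration only, one lock-free way to make that line safe is an
Interlocked compare-and-swap retry loop (a sketch, assuming a using
System.Threading directive and that i is accessed through Interlocked
everywhere; note C# warns with CS0420 when a volatile field is passed by
ref):

int original, transmogrified;
do
{
    original = i;                               // snapshot the current value
    transmogrified = Transmogrify(original);
}
while (Interlocked.CompareExchange(ref i, transmogrified, original)
       != original);                            // retry if i changed meanwhile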

By the same token, the lock statement surrounding access to a member doesn't
stop the compiler from having optimized use of a member by caching it to a
register, especially if that member is declared in a different assembly that
was compiled before this code was written:

lock(lockObject)
{
    i = i + 1;
}

...yes, the compiler *could* assume that all members within the lock
statement block are likely accessible by multiple threads (implicit
volatile); but that's not its intention and it's certainly not documented as
doing that (and it would be pointless: other code knows nothing about this
block and could have optimized use of i by changing its order of access or
caching it to a register).

volatile and lock should be used in conjunction; one is not a replacement
for the other.

--
Browse http://connect.microsoft.com/VisualStudio/feedback/ and vote.
http://www.peterRitchie.com/blog/
Microsoft MVP, Visual Developer - Visual C#

Jun 14 '07 #17
Peter Ritchie [C# MVP] <PR****@newsgroups.nospam> wrote:

<snip>
By the same token, the lock statement surrounding access to a member doesn't
stop the compiler from having optimized use of a member by caching it to a
register, especially if that member is declared in a different assembly that
was compiled before this code was written:

lock(lockObject)
{
    i = i + 1;
}
Acquiring a lock has acquire semantics, and releasing a lock has
release semantics. You don't need any volatility if all access to any
particular item of shared data is always made having acquired a certain
lock.

If different locks are used, you could be in trouble, but if you always
lock on the same reference (when accessing the same shared data) you're
guaranteed to be okay.
...yes, the compiler *could* assume that all members within the lock
statement block are likely accessible by multiple threads (implicit
volatile); but that's not its intention and it's certainly not documented as
doing that (and it would be pointless: other code knows nothing about this
block and could have optimized use of i by changing its order of access or
caching it to a register).
It certainly *is* documented. ECMA 335, section 12.6.5:

<quote>
Acquiring a lock (System.Threading.Monitor.Enter or entering a
synchronized method) shall implicitly
perform a volatile read operation, and releasing a lock
(System.Threading.Monitor.Exit or leaving a
synchronized method) shall implicitly perform a volatile write
operation.
</quote>
volatile and lock should be used in conjunction; one is not a replacement
for the other.
If you lock appropriately, you never need to use volatile.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jun 14 '07 #18
"Jon Skeet [C# MVP]" <sk***@pobox.comwrote in message
news:MP*********************@msnews.microsoft.com. ..
Peter Ritchie [C# MVP] <PR****@newsgroups.nospamwrote:

<snip>
>By the same token, the lock statement surrounding access to a member
doesn't
stop the compiler from having optimized use of a member by caching it to
a
register especially if that member is declared in a different assembly
that
was compiled for this code was written:

lock(lockObject)
{
i = i + 1;
}

Acquiring a lock has acquire semantics, and releasing a lock has
release semantics. You don't need any volatility if all access to any
particular item of shared data is always made having acquired a certain
lock.

If different locks are used, you could be in trouble, but if you always
lock on the same reference (when accessing the same shared data) you're
guaranteed to be okay.
>...yes, the compiler *could* assume that all members within the lock
statement block are likely accessible by multiple threads (implicit
volatile); but that's not its intention and it's certainly not documented
as
doing that (and it would be pointless, other code knows nothing about
this
block and could have optimized use of i by changing its order of access
or
caching to a registry).

It certainly *is* documented. ECMA 335, section 12.6.5:

<quote>
Acquiring a lock (System.Threading.Monitor.Enter or entering a
synchronized method) shall implicitly
perform a volatile read operation, and releasing a lock
(System.Threading.Monitor.Exit or leaving a
synchronized method) shall implicitly perform a volatile write
operation.
</quote>
>volatile and lock should be used in conjunction, one is not a replacement
for the other.

If you lock appropriately, you never need to use volatile.

True; when using locks, make sure you do it consistently. And that's exactly
why I said that I'm getting suspicious when I see a "volatile" field. Most
of the time this modifier is used because the author doesn't understand the
semantics of "volatile", or he's not sure about his own locking policy, or he
has no locking policy at all. Also, some may think that volatile implies a
fence, which is not the case: it only tells the JIT to turn off some of the
optimizations like register allocation and load/store reordering, but it
doesn't prevent possible re-ordering and write buffering done by the CPU.
Note that this is a non-issue on X86 and X64-like CPUs, given the memory
model enforced by the CLR, but it is an issue on IA64.

Willy.

Jun 14 '07 #19
Acquiring a lock has acquire semantics, and releasing a lock has
release semantics. You don't need any volatility if all access to any
particular item of shared data is always made having acquired a certain
lock.
...which only applies to reference types. Most of this discussion has been
revolving around value types (by virtue of Interlocked.Increment), for which
"lock" cannot apply; e.g. you can't switch from using lock on a member to
using Interlocked.Increment on that member, as one works with references and
the other with value types (specifically Int32 and Int64). This is what
raised my concern.
It certainly *is* documented. ECMA 335, section 12.6.5:

<quote>
Acquiring a lock (System.Threading.Monitor.Enter or entering a
synchronized method) shall implicitly
perform a volatile read operation, and releasing a lock
(System.Threading.Monitor.Exit or leaving a
synchronized method) shall implicitly perform a volatile write
operation.
</quote>
...still doesn't document anything about the members/variables within the
locked block (please read my example). That quote applies only to the
reference used as the parameter for the lock.

There can be no lock acquire semantics for value members. Suggesting
"locking appropriately" cannot apply here and can be misconstrued by some
people as creating something like "lock(myLocker){intMember = SomeMethod();}"
which does not do the same thing as making intMember volatile, increases
overhead needlessly, and still leaves a potential bug.
>
volatile and lock should be used in conjunction, one is not a replacement
for the other.

If you lock appropriately, you never need to use volatile.
Even if the discussion hasn't been about value types, that's a dangerous
statement, because it could only apply to reference types (i.e. if myObject
is wrapped with lock(myObject) in every thread, yes, I don't need to declare
it with volatile--but that's probably not why I'm using lock). In the
context of reference types, volatile only applies to the pointer (reference),
not anything within the object it references. Reference assignment is
atomic; there's no need to use lock to guard that sort of thing. You use
lock to guard a non-atomic invariant; volatile has nothing to do with
that--it has to do with the optimization (ordering, caching) of pointer/value
reads and writes.

Calling Monitor.Enter/Monitor.Exit is a pretty heavyweight means of
ensuring acquire semantics; at least 5 times slower if volatile is all you
need.
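
A rough sketch of how one might measure that claim (illustrative only;
numbers vary by machine and runtime):

using System;
using System.Diagnostics;

class Bench
{
    static volatile int v;
    static int plain;
    static readonly object sync = new object();

    static void Main()
    {
        const int N = 100000000;
        int sum = 0;

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) { sum += v; }   // one volatile read per pass
        Console.WriteLine("volatile: {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) { lock (sync) { sum += plain; } }   // Monitor pair per pass
        Console.WriteLine("lock:     {0} ms", sw.ElapsedMilliseconds);

        Console.WriteLine(sum);   // keep sum live so the loops aren't optimized away
    }
}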

-- Peter
Jun 14 '07 #20
Do you think the following is suspicious?

volatile int intMember;

...assumes you didn't read my last post, I suppose :-)

-- Peter

--
Browse http://connect.microsoft.com/VisualStudio/feedback/ and vote.
http://www.peterRitchie.com/blog/
Microsoft MVP, Visual Developer - Visual C#
"Willy Denoyette [MVP]" wrote:
"Jon Skeet [C# MVP]" <sk***@pobox.comwrote in message
news:MP*********************@msnews.microsoft.com. ..
Peter Ritchie [C# MVP] <PR****@newsgroups.nospamwrote:

<snip>
By the same token, the lock statement surrounding access to a member
doesn't
stop the compiler from having optimized use of a member by caching it to
a
register especially if that member is declared in a different assembly
that
was compiled for this code was written:

lock(lockObject)
{
i = i + 1;
}
Acquiring a lock has acquire semantics, and releasing a lock has
release semantics. You don't need any volatility if all access to any
particular item of shared data is always made having acquired a certain
lock.

If different locks are used, you could be in trouble, but if you always
lock on the same reference (when accessing the same shared data) you're
guaranteed to be okay.
...yes, the compiler *could* assume that all members within the lock
statement block are likely accessible by multiple threads (implicit
volatile); but that's not its intention and it's certainly not documented
as
doing that (and it would be pointless, other code knows nothing about
this
block and could have optimized use of i by changing its order of access
or
caching to a registry).
It certainly *is* documented. ECMA 335, section 12.6.5:

<quote>
Acquiring a lock (System.Threading.Monitor.Enter or entering a
synchronized method) shall implicitly
perform a volatile read operation, and releasing a lock
(System.Threading.Monitor.Exit or leaving a
synchronized method) shall implicitly perform a volatile write
operation.
</quote>
volatile and lock should be used in conjunction, one is not a replacement
for the other.
If you lock appropriately, you never need to use volatile.


True, when using locks, make sure you do it consistently. And that's exactly
why I said that I'm getting suspicious when I see a "volatile" field. Most
of the time this modifier is used because the author doesn't understand the
semantics of "volatile", or he's not sure about his own locking policy or he
has no locking policy at all. Also some may think that volatile implies a
fence, which is not the case, it only tells the JIT to turn off some of the
optimizations like register allocation and load/store reordering, but it
doesn't prevent possible re-ordering and write buffering done by the CPU,
note, that this is a non issue on X86 and X64 like CPU's , given the memory
model enforced by the CLR, but it is an issue on IA64.

Willy.

Jun 14 '07 #21
Peter Ritchie [C# MVP] <PR****@newsgroups.nospam> wrote:
Acquiring a lock has acquire semantics, and releasing a lock has
release semantics. You don't need any volatility if all access to any
particular item of shared data is always made having acquired a certain
lock.

...which only applies to reference types. Most of this discussion has been
revolving around value types (by virtue of Interlocked.Increment), for which
"lock" cannot apply; e.g. you can't switch from using lock on a member to
using Interlocked.Increment on that member, as one works with references and
the other with value types (specifically Int32 and Int64). This is what
raised my concern.
It's not a case of using a lock on a particular value - taking the lock
out creates a memory barrier beyond which *no* reads can pass, not just
reads on the locked expression.
It certainly *is* documented. ECMA 335, section 12.6.5:

<quote>
Acquiring a lock (System.Threading.Monitor.Enter or entering a
synchronized method) shall implicitly
perform a volatile read operation, and releasing a lock
(System.Threading.Monitor.Exit or leaving a
synchronized method) shall implicitly perform a volatile write
operation.
</quote>

...still doesn't document anything about the members/variables within the
locked block (please read my example). That quote applies only to the
reference used as the parameter for the lock.

There can be no lock acquire semantics for value members. Suggesting
"locking appropriately" cannot apply here and can be misconstrued by some
people as creating something like "lock(myLocker){intMember = SomeMethod();}"
which does not do the same thing as making intMember volatile, increases
overhead needlessly, and still leaves a potential bug.
No, it *doesn't* leave a bug - you've misunderstood the effect of lock
having acquire semantics.
volatile and lock should be used in conjunction, one is not a replacement
for the other.
If you lock appropriately, you never need to use volatile.

Even if the discussion hasn't been about value types, that's a dangerous
statement, because it could only apply to reference types (i.e. if myObject
is wrapped with lock(myObject) in every thread, yes, I don't need to declare
it with volatile--but that's probably not why I'm using lock). In the context
of reference types, volatile only applies to the pointer (reference), not
anything within the object it references. Reference assignment is atomic;
there's no need to use lock to guard that sort of thing. You use lock to
guard a non-atomic invariant; volatile has nothing to do with that--it has to
do with the optimization (ordering, caching) of pointer/value reads and
writes.
Atomicity and volatility are very different things, and shouldn't be
confused.

Locks do more than just guarding non-atomic invariants though - they
have the acquire/release semantics which make volatility unnecessary.

To be absolutely clear on this, if I have:

int someValue;
object myLock;

....

lock (myLock)
{
    int x = someValue;
    someValue = x+1;
}

then the read of someValue *cannot* be from a cache - it *must* occur
after the lock has been taken out. Likewise, before the lock is
released, the write back to someValue *must* effectively have been
flushed (it can't occur later than the release in the logical memory
model).

Here's how that's guaranteed by the spec:

"Acquiring a lock (System.Threading.Monitor.Enter or entering a
synchronized method) shall implicitly perform a volatile read
operation"

and

"A volatile read has =3Facquire semantics=3F meaning that the read is
guaranteed to occur prior to any references to memory that occur after
the read instruction in the CIL instruction sequence."

That means that the volatile read due to the lock is guaranteed to
occur prior to the "reference to memory" (reading someValue) which
occurs later in the CIL instruction sequence.

The same thing happens the other way round for releasing the lock.
Calling Monitor.Enter/Monitor.Exit is a pretty heavyweight means of
ensuring acquire semantics; at least 5 times slower if volatile is all you
need.
But still fast enough for almost everything I've ever needed to do, and
I find it a lot easier to reason about a single way of doing things
than having multiple ways for multiple situations. Just a personal
preference - but it definitely *is* safe, without ever needing to
declare anything volatile.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jun 14 '07 #22
"Peter Ritchie [C# MVP]" <PR****@newsgroups.nospamwrote in message
news:53**********************************@microsof t.com...
Do you think the following is suspicious?

volatile int intMember;

...assumes you didn't read my last post, I suppose :-)

-- Peter
Yes, I do; maybe it's a sign that someone is trying to write lock-free
code...

But I get even more suspicious when I see this:

...
volatile int intMember;
...
void Foo()
{
    lock(myLock)
    {
        // use intMember here and protect its shared state by preventing
        // other threads from touching intMember for the duration of the
        // critical section
    }
    ...
}

In the above case, when you apply a consistent locking policy to protect
your invariants, there is no need for a volatile intMember. Otherwise, it
can be an indication that someone is trying to play smart by not taking a
lock to access intMember.

Willy.
Jun 14 '07 #23
"Jon Skeet [C# MVP]" <sk***@pobox.comwrote in message
news:MP*********************@msnews.microsoft.com. ..
Peter Ritchie [C# MVP] <PR****@newsgroups.nospam> wrote:
<snip>
Atomicity and volatility are very different things, and shouldn't be
confused.

Locks do more than just guarding non-atomic invariants though - they
have the acquire/release semantics which make volatility unnecessary.

To be absolutely clear on this, if I have:

int someValue;
object myLock;

...

lock (myLock)
{
int x = someValue;
someValue = x+1;
}

then the read of someValue *cannot* be from a cache - it *must* occur
after the lock has been taken out. Likewise before the lock is
released, the write back to someValue *must* have been effectively
flushed (it can't occur later than the release in the logical memory
model).
Actually, on modern processors (others aren't supported anyway, unless you
are running W98 on an 80386), the reads and writes will come/go from/to the
cache (L1, L2, ...); the cache coherency protocol will guarantee consistency
across the cache lines holding the variable that has changed. That way, the
"software" has a uniform view of what is called the "memory" irrespective of
the number of HW threads (not talking about NUMA here!).

Here's how that's guaranteed by the spec:

"Acquiring a lock (System.Threading.Monitor.Enter or entering a
synchronized method) shall implicitly perform a volatile read
operation"

and

"A volatile read has =3Facquire semantics=3F meaning that the read is
guaranteed to occur prior to any references to memory that occur after
the read instruction in the CIL instruction sequence."

That means that the volatile read due to the lock is guaranteed to
occur prior to the "reference to memory" (reading someValue) which
occurs later in the CIL instruction sequence.

The same thing happens the other way round for releasing the lock.
>Calling Monitor.Enter/Monitor.Exit is a pretty heavy-weight means of
ensuring acquire semantics; at least 5 times slower if volatile is all
you
need.

But still fast enough for almost everything I've ever needed to do, and
I find it a lot easier to reason about a single way of doing things
than having multiple ways for multiple situations. Just a personal
preference - but it definitely *is* safe, without ever needing to
declare anything volatile.
Probably one of the reasons why I've never seen a volatile modifier on a
field in the FCL.
And to repeat myself, volatile is not a guarantee against re-ordering and
write buffering by CPUs implementing a weak memory model, like the IA64.
Volatile serves only one thing: it prevents optimizations like enregistering
and re-ordering such as would be done by the JIT compiler.

Willy.

Jun 14 '07 #24
Willy Denoyette [MVP] <wi*************@telenet.bewrote:

<snip>
then the read of someValue *cannot* be from a cache - it *must* occur
after the lock has been taken out. Likewise before the lock is
released, the write back to someValue *must* have been made effectively
flushed (it can't occur later than the release in the logical memory
model).

Actually, on modern processors (others aren't supported anyway, unless you
are running W98 on an 80386), the reads and writes will come/go from/to the
cache (L1, L2, ...); the cache coherency protocol will guarantee consistency
across the cache lines holding the variable that has changed. That way, the
"software" has a uniform view of what is called the "memory" irrespective of
the number of HW threads (not talking about NUMA here!).
Yes - I've been using "cache" here somewhat naughtily (because it's the
terminology Peter was using). The sensible way to talk about it is in
terms of the .NET memory model, which is
But still fast enough for almost everything I've ever needed to do, and
I find it a lot easier to reason about a single way of doing things
than having multiple ways for multiple situations. Just a personal
preference - but it definitely *is* safe, without ever needing to
declare anything volatile.

Probably one of the reasons why I've never seen a volatile modifier on a
field in the FCL.
And to repeat myself, volatile is not a guarantee against re-ordering and
write buffering by CPUs implementing a weak memory model, like the IA64.
Volatile serves only one thing: it prevents optimizations like enregistering
and re-ordering such as would be done by the JIT compiler.
No, I disagree with that. Volatile *does* prevent (some) reordering and
write buffering as far as the visible effect to the code is concerned,
whether the effect comes from the JIT or the CPU. Suppose variables a
and b are volatile, then:

int c = a;
int d = b;

will guarantee that the visible effect is the value of "a" being read
before the value of "b" (which wouldn't be the case if they weren't
volatile). In particular, if the variables both start out at 0, then we
do:

b = 1;
a = 1;

in parallel with the previous code, then you might get c=d=1, or c=d=0,
or c=0, d=1, but you're guaranteed *not* to get c=1, d=0.

Whether that involves the JIT doing extra work to get round a weak CPU
memory model is unimportant - if it doesn't prevent that last
situation, it's failed to meet the spec.
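
As a rough illustration only (a sketch - a single run proves nothing about
a memory model, and the class name is made up), the two fragments above
could be placed side by side like this:

class OrderingDemo {
    static volatile int a, b;   // both start at 0
    static int c, d;

    static void Writer() {
        b = 1;
        a = 1;   // release: the write to b above cannot be delayed past this write
    }

    static void Reader() {
        c = a;   // acquire: the read of b below cannot be hoisted above this read
        d = b;
    }
}

With Writer and Reader running on two threads, the claim is exactly that
c=1, d=0 is impossible, whatever the hardware.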

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jun 14 '07 #25
"Jon Skeet [C# MVP]" <sk***@pobox.comwrote in message
news:MP*********************@msnews.microsoft.com. ..
Willy Denoyette [MVP] <wi*************@telenet.bewrote:

<snip>
then the read of someValue *cannot* be from a cache - it *must* occur
after the lock has been taken out. Likewise before the lock is
released, the write back to someValue *must* have been made effectively
flushed (it can't occur later than the release in the logical memory
model).

Actually on modern processors (others aren't supported anyway, unless you
are running W98 on a 80386) , the read and writes will come/go from/to
the
cache (L1, L2 ..), the cache coherency protocol will guarantee
consistency
across the cache lines holding the variable has changed. That way, the
"software" has a uniform view of what is called the "memory" irrespective
the number of HW threads (not talking about NUMA here!).

Yes - I've been using "cache" here somewhat naughtily (because it's the
terminology Peter was using). The sensible way to talk about it is in
terms of the .NET memory model, which is
But still fast enough for almost everything I've ever needed to do, and
I find it a lot easier to reason about a single way of doing things
than having multiple ways for multiple situations. Just a personal
preference - but it definitely *is* safe, without ever needing to
declare anything volatile.

Probably one of the reasons why I've never seen a volatile modifier on a
field in the FCL.
And to repeat myself, volatile is not a guarantee against re-ordering and
write buffering by CPU's implementing a weak memory model, like the IA64.
Volatile serves only one thing, that is, prevent optimizations like
re-registering and re-ordering as there would be done by the JIT
compiler.

No, I disagree with that. Volatile *does* prevent (some) reordering and
write buffering as far as the visible effect to the code is concerned,
whether the effect comes from the JIT or the CPU. Suppose variables a
and b are volatile, then:

int c = a;
int d = b;

will guarantee that the visible effect is the value of "a" being read
before the value of "b" (which wouldn't be the case if they weren't
volatile). In particular, if the variables both start out at 0, then we
do:

b = 1;
a = 1;

in parallel with the previous code, then you might get c=d=1, or c=d=0,
or c=0, d=1, but you're guaranteed *not* to get c=1, d=0.

Whether that involves the JIT doing extra work to get round a weak CPU
memory model is unimportant - if it doesn't prevent that last
situation, it's failed to meet the spec.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too

Jun 14 '07 #26
For the record, I've been talking about the compiler re-organizing the code
during optimization. And I thought I was pretty clear about the compiler
"caching" values to a register, not the CPUs caches.
It's not a case of using a lock on a particular value - taking the lock
out creates a memory barrier beyond which *no* reads can pass, not just
reads on the locked expression.
I don't see how you get that from:
"Acquiring a lock (System.Threading.Monitor.Enter or entering a
synchronized method) shall implicitly perform a volatile read
operation"

and

"A volatile read has "acquire semantics" meaning that the read is
guaranteed to occur prior to any references to memory that occur after
the read instruction in the CIL instruction sequence."
I would agree that a volatile read/write is performed on the parameter for
Monitor.Enter and Monitor.Exit.
To be absolutely clear on this, if I have:

int someValue;
object myLock;

...

lock (myLock)
{
int x = someValue;
someValue = x+1;
}

then the read of someValue *cannot* be from a cache - it *must* occur
after the lock has been taken out. Likewise before the lock is
released, the write back to someValue *must* have been made effectively
flushed (it can't occur later than the release in the logical memory
model).
You're talking about CPU re-orderings and CPU caching; I've been
talking about compiler optimizations.

None of the quotes affect code already optimized by the compiler. If the
compiler decides to emit code that doesn't write a temporary value directly
back to the member/variable - because that's faster and it doesn't know the
member is volatile - nothing you've quoted will have a bearing on that.

Monitor.Enter may create a memory barrier for the current thread - it's
unclear from 335; but it could not have affected code that accesses members
outside of a lock block.

335 says nothing about what the compiler does with code within a locked block.

Jun 15 '07 #27
"Jon Skeet [C# MVP]" <sk***@pobox.comwrote in message
news:MP*********************@msnews.microsoft.com. ..
Willy Denoyette [MVP] <wi*************@telenet.bewrote:

<snip>
then the read of someValue *cannot* be from a cache - it *must* occur
after the lock has been taken out. Likewise before the lock is
released, the write back to someValue *must* have been made effectively
flushed (it can't occur later than the release in the logical memory
model).

Actually, on modern processors (others aren't supported anyway, unless you
are running W98 on an 80386), the reads and writes will come/go from/to the
cache (L1, L2, ...); the cache coherency protocol will guarantee consistency
across the cache lines holding the variable that has changed. That way, the
"software" has a uniform view of what is called the "memory" irrespective of
the number of HW threads (not talking about NUMA here!).

Yes - I've been using "cache" here somewhat naughtily (because it's the
terminology Peter was using). The sensible way to talk about it is in
terms of the .NET memory model, which is
But still fast enough for almost everything I've ever needed to do, and
I find it a lot easier to reason about a single way of doing things
than having multiple ways for multiple situations. Just a personal
preference - but it definitely *is* safe, without ever needing to
declare anything volatile.

Probably one of the reasons why I've never seen a volatile modifier on a
field in the FCL.
And to repeat myself, volatile is not a guarantee against re-ordering and
write buffering by CPUs implementing a weak memory model, like the IA64.
Volatile serves only one thing: it prevents optimizations like enregistering
and re-ordering such as would be done by the JIT compiler.

No, I disagree with that. Volatile *does* prevent (some) reordering and
write buffering as far as the visible effect to the code is concerned,
whether the effect comes from the JIT or the CPU. Suppose variables a
and b are volatile, then:

int c = a;
int d = b;

will guarantee that the visible effect is the value of "a" being read
before the value of "b" (which wouldn't be the case if they weren't
volatile). In particular, if the variables both start out at 0, then we
do:

b = 1;
a = 1;

in parallel with the previous code, then you might get c=d=1, or c=d=0,
or c=0, d=1, but you're guaranteed *not* to get c=1, d=0.

Whether that involves the JIT doing extra work to get round a weak CPU
memory model is unimportant - if it doesn't prevent that last
situation, it's failed to meet the spec.

Agreed, reads (volatile or not) cannot move before a volatile read, and
writes cannot move after a volatile write.

But this is not my point; what I'm referring to is the following (assuming a
and b are volatile):

a = 5;
int d = b;

Here it's allowed for the write to move after the read: they refer
to different locations and they have no (visible) dependencies.
Willy.

Jun 15 '07 #28
Peter Ritchie [C# MVP] <PR****@newsgroups.nospamwrote:
For the record, I've been talking about the compiler re-organizing the code
during optimization. And I thought I was pretty clear about the compiler
"caching" values to a register, not the CPUs caches.
That's all irrelevant - the important thing is the visible effect.

<snip>
then the read of someValue *cannot* be from a cache - it *must* occur
after the lock has been taken out. Likewise before the lock is
released, the write back to someValue *must* have been made effectively
flushed (it can't occur later than the release in the logical memory
model).

You're talking about CPU re-organizations and CPU cachings, I've been
talking about compiler optimizations.
As I said to Willy, I shouldn't have used the word "cache". Quite what
could make things appear to be out of order is irrelevant - they're all
forbidden by the spec in this case.
None of the quotes affect code already optimized by the compiler. If the
compiler decides to emit code that doesn't write a temporary value directly
back to the member/variable - because that's faster and it doesn't know the
member is volatile - nothing you've quoted will have a bearing on that.
So here are you talking about the C# compiler rather than the JIT
compiler?

If so, I agree there appears to be a hole in the C# spec. I don't
believe the C# compiler *will* move any reads/writes around, however.
For the rest of the post, however, I'll assume you were actually still
talking about the JIT.
Monitor.Enter may create memory barrier for the current thread, it's unclear
from 335; but it could not have affected code that accesses members outside
of a lock block.
Agreed, but irrelevant.
335 says nothing about what the compiler does with code within a locked block.
Agreed, but irrelevant.

The situation I've been talking about is where a particular variable is
only referenced *inside* lock blocks, and where all the lock blocks
which refer to that variable are all locking against the same
reference.

At that point, there is an absolute ordering in terms of the execution
of those lock blocks - only one can execute at a time, because that's
the main point of locking.

Furthermore, while the ordering *within* the lock can be moved, none of
the reads which are inside the lock can be moved to before the lock is
acquired (in terms of the memory model, however that is achieved) and
none of the writes which are inside the lock can be moved to after the
lock is released.

Therefore any change to the variable is seen by each thread, with no
"stale" values being involved.

Now I totally agree that *if* you start accessing the variable from
outside a lock block, all bets are off - but so long as you keep
everything within locked sections of code, all locked with the same
lock, you're fine.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jun 15 '07 #29
Willy Denoyette [MVP] <wi*************@telenet.bewrote:

<snip>
Agreed, reads (all or not volatile) cannot move before a volatile read, and
writes cannot move after a volatile write.

But this is not my point, what I'm referring to is the following (assuming a
and b are volatile):

a = 5;
int d = b;

here it's allowed for the write to move after the read, they are referring
to different locations and they have no (visible) dependencies.
Assuming they're not volatile, you're absolutely right - but I thought
you were talking about what could happen with *volatile* variables,
given that you said:

<quote>
And to repeat myself, volatile is not a guarantee against re-ordering
and write buffering by CPU's implementing a weak memory model, like the
IA64.
</quote>

I believe volatile *is* a guarantee against the reordering of volatile
operations. Volatile isn't a guarantee against the reordering of two
non-volatile operations with no volatile operation between them, but
that's the case for the JIT as well as the CPU.

I don't believe it's necessary to talk about the JIT separately from
the CPU when thinking on a purely spec-based level. If we were looking
at generated code we'd need to consider the platform etc, but at a
higher level than that we can just talk about the memory model that the
CLR provides, however it provides it.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jun 15 '07 #30
"Jon Skeet [C# MVP]" <sk***@pobox.comwrote in message
news:MP********************@msnews.microsoft.com.. .
Willy Denoyette [MVP] <wi*************@telenet.bewrote:

<snip>
>Agreed, reads (all or not volatile) cannot move before a volatile read,
and
writes cannot move after a volatile write.

But this is not my point, what I'm referring to is the following
(assuming a
and b are volatile):

a = 5;
int d = b;

here it's allowed for the write to move after the read, they are
referring
to different locations and they have no (visible) dependencies).

Assuming they're not volatile, you're absolutely right - but I thought
you were talking about what could happen with *volatile* variables,
given that you said:

<quote>
And to repeat myself, volatile is not a guarantee against re-ordering
and write buffering by CPU's implementing a weak memory model, like the
IA64.
</quote>

I believe volatile *is* a guarantee against the reordering of volatile
operations. Volatile isn't a guarantee against the reordering of two
non-volatile operations with no volatile operation between them, but
that's the case for the JIT as well as the CPU.

I don't believe it's necessary to talk about the JIT separately from
the CPU when thinking on a purely spec-based level. If we were looking
at generated code we'd need to consider the platform etc, but at a
higher level than that we can just talk about the memory model that the
CLR provides, however it provides it.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too

Jun 15 '07 #31
"Jon Skeet [C# MVP]" <sk***@pobox.comwrote in message
news:MP********************@msnews.microsoft.com.. .
Willy Denoyette [MVP] <wi*************@telenet.bewrote:

<snip>
>Agreed, reads (all or not volatile) cannot move before a volatile read,
and
writes cannot move after a volatile write.

But this is not my point, what I'm referring to is the following
(assuming a
and b are volatile):

a = 5;
int d = b;

here it's allowed for the write to move after the read, they are
referring
to different locations and they have no (visible) dependencies.

Assuming they're not volatile, you're absolutely right - but I thought
you were talking about what could happen with *volatile* variables,
given that you said:
Not really, I'm talking about the volatile field b.
The (ECMA) rules for volatile state that:
- reads and writes cannot move before a *volatile* read
- reads and writes cannot move after a *volatile* write.
As I see it, this means that ordinary writes can move after a volatile read.
So, in the above, the write to 'a' can move after the volatile read from
'b', agree?
However, the above rules are not clear on the case where 'a' and 'b' are
volatile: do the rules prohibit a volatile write from moving after a volatile
read? IMO they don't.
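
The case in question is the classic store-load pattern; as a sketch (the
field names x and y are made up):

static volatile int x, y;   // both start at 0
static int r1, r2;

static void Thread1() { x = 1; r1 = y; }   // volatile write, then volatile read
static void Thread2() { y = 1; r2 = x; }   // volatile write, then volatile read

If a volatile write may move after a subsequent volatile read, both reads
can execute first, and r1 == r2 == 0 becomes a legal outcome - exactly the
reordering that the rules above, read literally, do not forbid.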

<quote>
And to repeat myself, volatile is not a guarantee against re-ordering
and write buffering by CPU's implementing a weak memory model, like the
IA64.
</quote>

I believe volatile *is* a guarantee against the reordering of volatile
operations. Volatile isn't a guarantee against the reordering of two
non-volatile operations with no volatile operation between them, but
that's the case for the JIT as well as the CPU.

I don't believe it's necessary to talk about the JIT separately from
the CPU when thinking on a purely spec-based level. If we were looking
at generated code we'd need to consider the platform etc, but at a
higher level than that we can just talk about the memory model that the
CLR provides, however it provides it.

--
Jun 15 '07 #32
Willy Denoyette [MVP] <wi*************@telenet.bewrote:
But this is not my point, what I'm referring to is the following
(assuming a
and b are volatile):

a = 5;
int d = b;

here it's allowed for the write to move after the read, they are
referring
to different locations and they have no (visible) dependencies.
Assuming they're not volatile, you're absolutely right - but I thought
you were talking about what could happen with *volatile* variables,
given that you said:

Not really, I'm talking about the volatile field b.
Sorry - I stupidly misread "are volatile" as "are not volatile". Doh!
The (ECMA) rules for volatile state that:
- reads and writes cannot move before a *volatile* read
- reads and writes cannot move after a *volatile* write.
As I see it, this means that ordinary writes can move after a volatile read.
So, in the above, the write to 'a' can move after the volatile read from
'b', agree?
However, the above rules are not clear on the case where 'a' and 'b' are
volatile: do the rules prohibit a volatile write from moving after a volatile
read? IMO they don't.
Yup, I think you're right.

I basically think of volatile as pretty much *solely* a way to make
sure you always see the latest value of the variable in any thread.
When it comes to interactions like that, while they're interesting to
reason about, I'd rather use a lock in situations where I really care
:)

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jun 15 '07 #33
Sorry, but the previous message went out before being finished.

"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:en****************@TK2MSFTNGP05.phx.gbl...
"Jon Skeet [C# MVP]" <sk***@pobox.comwrote in message
news:MP********************@msnews.microsoft.com.. .
>Willy Denoyette [MVP] <wi*************@telenet.bewrote:

<snip>
>>Agreed, reads (all or not volatile) cannot move before a volatile read,
and
writes cannot move after a volatile write.

But this is not my point, what I'm referring to is the following
(assuming a
and b are volatile):

a = 5;
int d = b;

here it's allowed for the write to move after the read, they are
referring
to different locations and they have no (visible) dependencies.

Assuming they're not volatile, you're absolutely right - but I thought
you were talking about what could happen with *volatile* variables,
given that you said:

Not really, I'm talking about the volatile field b.
The (ECMA) rules for volatile state that:
- reads and writes cannot move before a *volatile* read
- reads and writes cannot move after a *volatile* write.
As I see it, this means that ordinary writes can move after a volatile
read.
So, in the above, the write to 'a' can move after the volatile read from
'b', agree?
However, the above rules are not clear on the case where 'a' and 'b' are
volatile: do the rules prohibit a volatile write from moving after a volatile
read? IMO they don't.
However, the memory model as implemented by V2 of the CLR also defines an
explicit rule that states:
- All shared writes shall have release semantics.
which could be restated as: "writes cannot be reordered, period". That means
that on the current platforms, emitting every write with release semantics
is sufficient to:
1) perform each processor's stores in order, and
2) make them visible to other processors in that order.
That makes the execution environment Processor Consistent (PC) - great: that
would mean that the above optimization (moving the write after the volatile
read) is excluded. The problem, however, is that notably the JIT64 on IA64
does not enforce that rule consistently; it appears to enable such
optimizations in violation of the "managed memory model". MSFT is aware of
this, but as of today I have no idea whether they are addressing this or
consider it acceptable on the IA64 platform.
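
If you cannot rely on the stronger V2 rule being enforced everywhere, one
defensive option is an explicit full fence between the write and the read -
a sketch using Thread.MemoryBarrier():

a = 5;                    // volatile write (release)
Thread.MemoryBarrier();   // full fence: the store cannot be reordered past the following load
int d = b;                // volatile read (acquire)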

Willy.


Jun 15 '07 #34
Willy Denoyette [MVP] <wi*************@telenet.bewrote:

<snip>
However, the memory model as implemented by V2 of the CLR also defines an
explicit rule that states:
- All shared writes shall have release semantics.
which could be restated as: "writes cannot be reordered, period". That means
that on the current platforms, emitting every write with release semantics
is sufficient to:
1) perform each processor's stores in order, and
2) make them visible to other processors in that order.
That makes the execution environment Processor Consistent (PC) - great: that
would mean that the above optimization (moving the write after the volatile
read) is excluded. The problem, however, is that notably the JIT64 on IA64
does not enforce that rule consistently; it appears to enable such
optimizations in violation of the "managed memory model". MSFT is aware of
this, but as of today I have no idea whether they are addressing this or
consider it acceptable on the IA64 platform.
I *thought* (though I could well be wrong) that before release, the
IA64 JIT was indeed very lax, but that it had been tightened up close
to release. I wouldn't like to try to find any evidence of that though
;)

Just another reason to stick to "simple" thread safety via locks, IMO.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jun 15 '07 #35
"Jon Skeet [C# MVP]" wrote:
So here are you talking about the C# compiler rather than the JIT
compiler?

If so, I agree there appears to be a hole in the C# spec. I don't
believe the C# compiler *will* move any reads/writes around, however.
For the rest of the post, however, I'll assume you were actually still
talking about the JIT.
Could be either, I suppose. I don't think the spec is clear at all in this
respect. With regard to compiler-level optimizations, 12.6.4 details: "...are
visible in the order specified in the CIL." Which suggests to me that the
C#-to-IL compiler doesn't optimize other than potential reorganizations. The
detail before that seems concerning: "guarantees, within a single thread of
execution, that side-effects ... are visible in the order specified by the
CIL". Sounds like memory barriers are set up within Monitor.Enter and
Monitor.Exit to ensure CPU-level re-ordering is limited; but unless there's
a modreq(volatile) on a member, the JIT can't know not to introduce
cross-thread visible side-effects unless it looks for calls to Monitor.Enter
and Monitor.Exit.

>
Monitor.Enter may create memory barrier for the current thread, it's unclear
from 335; but it could not have affected code that accesses members outside
of a lock block.

Agreed, but irrelevant.
335 says nothing about what the compiler does with code within a locked block.

Agreed, but irrelevant.

The situation I've been talking about is where a particular variable is
only referenced *inside* lock blocks, and where all the lock blocks
which refer to that variable are all locking against the same
reference.

At that point, there is an absolute ordering in terms of the execution
of those lock blocks - only one can execute at a time, because that's
the main point of locking.

Furthermore, while the ordering *within* the lock can be moved, none of
the reads which are inside the lock can be moved to before the lock is
acquired (in terms of the memory model, however that is achieved) and
none of the writes which are inside the lock can be moved to after the
lock is released.

Therefore any change to the variable is seen by each thread, with no
"stale" values being involved.

Now I totally agree that *if* you start accessing the variable from
outside a lock block, all bets are off - but so long as you keep
everything within locked sections of code, all locked with the same
lock, you're fine.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jun 17 '07 #36
Peter Ritchie [C# MVP] <PR****@newsgroups.nospamwrote:
"Jon Skeet [C# MVP]" wrote:
So here are you talking about the C# compiler rather than the JIT
compiler?

If so, I agree there appears to be a hole in the C# spec. I don't
believe the C# compiler *will* move any reads/writes around, however.
For the rest of the post, however, I'll assume you were actually still
talking about the JIT.

Could be either, I suppose. I don't think the spec is clear at all in this
respect. With regard to compiler-level optimizations, 12.6.4 details: "...are
visible in the order specified in the CIL." Which suggests to me that the
C#-to-IL compiler doesn't optimize other than potential reorganizations.
The CIL spec can't determine what the C# compiler is allowed to do. I
haven't seen anything in the C# spec which says it won't reorder things
- although I hope and believe that it won't.
The detail before that seems concerning: "guarantees, within a single thread of
execution, that side-effects ... are visible in the order specified by the
CIL". Sounds like memory barriers are set up within Monitor.Enter and
Monitor.Exit to ensure CPU-level re-ordering is limited; but, unless there's
a modreq(volatile) on a member the JIT can't know not to introduce
cross-thread visible side-effects unless it looks for calls to Monitor.Enter
and Monitor.Exit.
I believe it *actually* avoids any reordering around *any* method
calls. I don't think 12.6.7 leaves much wiggle-room though - acquiring
the lock counts as a volatile read, and releasing the lock counts as a
volatile write, and the reordering prohibitions therefore apply - and
apply to *all* reads and writes, not just reads and writes of that
variable.

Note that 12.6.4 says: "(Note that while only volatile operations
constitute visible side-effects, volatile operations also affect the
visibility of non-volatile references.)" It's that effect on non-volatile
visibility which I'm talking about.

Would it be worth me coming up with a small sample problem which shares
data without using any volatile variables? I claim that (assuming I
write it correctly) there won't be a bug - if you can suggest a failure
mode, we could try to reason about a concrete case rather than talking
in the abstract.
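
In the meantime, here's a sketch of the kind of program I mean (nothing
volatile, all shared state touched only under a single lock; the names are
made up):

class Shared {
    private readonly object mutex = new object();
    private int value;   // deliberately not volatile
    private bool done;   // deliberately not volatile

    public void Producer() {
        lock (mutex) {
            value = 42;
            done = true;
        }
    }

    public int Consumer() {
        while (true) {
            lock (mutex) {
                if (done) { return value; }   // can never see done==true with a stale value
            }
        }
    }
}

My claim is that Consumer can only ever return 42: the reads can't move
before Monitor.Enter, and the writes can't move after Monitor.Exit.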

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jun 18 '07 #37
"Jon Skeet [C# MVP]" wrote:
I believe it *actually* avoids any reordering around *any* method
calls. I don't think 12.6.7 leaves much wiggle-room though - acquiring
the lock counts as a volatile read, and releasing the lock counts as a
volatile write, and the reordering prohibitions therefore apply - and
apply to *all* reads and writes, not just reads and writes of that
variable.
Yes, acquiring the lock is a run-time operation. Just as MemoryBarrier,
volatile read, and volatile write are run-time operations that only ensure
the CPU has flushed any new values to RAM and won't reorder side-effects
between the acquire and release semantics.

I'm talking about compiler optimizations (specifically JIT--okay, I should
have been calling it the IL compiler--because the C# to IL compiler doesn't
really have the concept of registers, as I've made reference to).
>
Note that 12.6.4 says: "(Note that while only volatile operations
constitute visible side-effects, volatile operations also affect the
visibility of non-volatile references.)" It's that effect on non-volatile
visibility which I'm talking about.
Again, run-time operations.
>
Would it be worth me coming up with a small sample problem which shares
data without using any volatile variables? I claim that (assuming I
write it correctly) there won't be a bug - if you can suggest a failure
mode, we could try to reason about a concrete case rather than talking
in the abstract.
The issue I'm talking about will only occur if the JIT optimizes in a
certain way. Let's take an academic example:
internal class Tester {
    private Object locker = new Object();
    private int number;
    private Random random = new Random();

    private void UpdateNumber() {
        int count = random.Next();
        for (int i = 0; i < count; ++i) {
            number++;
            Trace.WriteLine(number);
        }
    }

    public void DoSomething() {
        lock (locker) {
            Trace.WriteLine(number);
        }
    }
}

*if* the JIT optimized the incrementation of number as follows (example x86,
it's been a while; I may have screwed up the offsets...):
for (int i = 0; i < count; ++i)
00000020 xor ebx,ebx
00000022 test ebp,ebp
00000024 jle 00000033
{
number++;
00000026 add edi,1
00000029 add ebx,1
0000002C cmp ebx,ebp
0000002E jl 00000026
00000030 mov dword ptr [esi+0Ch],edi
00000033 pop ebp

...where it has optimized the calculations on number to use a register (edi)
during the loop and assigned the result to number at the end of the loop.
Within a single thread of execution that's perfectly valid (because we
haven't told it otherwise with "volatile"); in the native world this has
been done for decades.

Clearly another thread accessing Tester.number isn't going to see any of
those incremental changes.

Even if you wrap that with a lock statement, create a MemoryBarrier, etc.,
those are all still run-time operations; they do not give any information to
the JIT about anything within the lock block (which is what I was referring
to by my original comment about this certainly not being documented...). By
the time the code is loaded into memory (let alone when Monitor.Enter is
called) the compiler has already done its optimizations.

The only thing that could tell the compiler anything about volatility with
respect to compile-time optimizations is something declarative, like
volatile. Yes, writes to fields declared as volatile also get volatile
reads/writes and acquire/release semantics just like Monitor.Enter and
Monitor.Exit; but that's the run-time aspect of it (for the too-smart
processors).
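
(For completeness: the run-time half can also be expressed imperatively
rather than declaratively - a sketch using Thread.VolatileRead/VolatileWrite
on a made-up field:

private int number;   // not declared volatile

void Writer() { Thread.VolatileWrite(ref this.number, 1); }
int Reader() { return Thread.VolatileRead(ref this.number); }

...but that only fences those particular call sites; it attaches no
declaration to the field itself the way the modreq does, which is exactly
the compile-time information I'm arguing the JIT needs.)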

Jun 19 '07 #38
Peter Ritchie [C# MVP] <PR****@newsgroups.nospamwrote:
"Jon Skeet [C# MVP]" wrote:
I believe it *actually* avoids any reordering around *any* method
calls. I don't think 12.6.7 leaves much wiggle-room though - acquiring
the lock counts as a volatile read, and releasing the lock counts as a
volatile write, and the reordering prohibitions therefore apply - and
apply to *all* reads and writes, not just reads and writes of that
variable.
Yes, acquiring the lock is a run-time operation. Just as MemoryBarrier,
volatile read, and volatile write are run-time operations that only ensure
the CPU has flushed any new values to RAM and won't reorder side-effects
between the acquire and release semantics.

I'm talking about compiler optimizations (specifically JIT--okay, I should
have been calling it the IL compiler--because the C# to IL compiler doesn't
really have the concept of registers, as I've made reference to).
That's irrelevant though - the spec just says what will happen, not
which bit is responsible for making sure it happens.
Note that 12.6.4 says: "(Note that while only volatile operations
constitute visible side-effects, volatile operations also affect the
visibility of non-volatile references.)" It's that effect on non-volatile
visibility which I'm talking about.
Again, run-time operations.
But affected by compilation decisions.
Would it be worth me coming up with a small sample problem which shares
data without using any volatile variables? I claim that (assuming I
write it correctly) there won't be a bug - if you can suggest a failure
mode, we could try to reason about a concrete case rather than talking
in the abstract.

The issue I'm talking about will only occur if the JIT optimizes in a
certain way. Let's take an academic example:
internal class Tester {
private Object locker = new Object();
private int number;
private Random random = new Random();

private void UpdateNumber ( ) {
int count = random.Next();
for (int i = 0; i < count; ++i) {
number++;
Trace.WriteLine(number);
}
}
public void DoSomething() {
lock(locker) {
Trace.WriteLine(number);
}
}
}
That is indeed buggy code - you're accessing number without locking.
That's not the situation I've been describing. If you change your code
to:

int count = random.Next();
for (int i = 0; i < count; ++i) {
lock (locker)
{
number++;
Trace.WriteLine(number);
}
}

then the code is okay. That's the situation I've been consistently
describing.

<snip>
The only thing that could tell the compiler anything about volatility with
respect to compile-time optimizations is something declarative, like
volatile. Yes, writes to fields declared as volatile also get volatile
reads/writes and acquire/release semantics just like Monitor.Enter and
Monitor.Exit; but that's the run-time aspect of it (for the too-smart
processors).
Again, you're making assumptions about which bit of the spec applies to
CPU optimisations and which bit applies to JIT compilation
optimisations. The spec doesn't say anything about that - it just makes
guarantees about what will be visible when. With the corrected code
above, there is no bug, because the JIT must know that it *must*
freshly read number after acquiring the lock, and *must* "flush" number
to main memory before releasing the lock.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jun 19 '07 #39
"Jon Skeet [C# MVP]" wrote:
<snip>

I guess we'll just have to disagree on a few things, for the reasons I've
already stated. I don't see much point in going back and forth saying the
same things...

With regard to runtime volatile read/writes and acquire/release semantics of
Monitor.Enter and Monitor.Exit we can agree.

I don't agree that anything specified in either 334 or 335 covers all levels
of potential compile-time class member JIT/IL compiler optimizations.

I don't agree that "int number; void UpdateNumber(){lock(locker){
number++;}}" is equally as safe as "volatile int number; void UpdateNumber(){
number++; }"

With the following Monitor.Enter/Exit IL, for example:
.field private int32 number
.method private hidebysig instance void UpdateNumber() cil managed
{
.maxstack 3
.locals init (
[0] int32 count,
[1] int32 i)
L_0000: ldarg.0
L_0001: ldfld class [mscorlib]System.Random Tester::random
L_0006: callvirt instance int32 [mscorlib]System.Random::Next()
L_000b: stloc.0
L_000c: ldarg.0 // *
L_000d: ldfld object Tester::locker //*
L_0012: call void [mscorlib]System.Threading.Monitor::Enter(object) //*
L_0017: ldc.i4.0
L_0018: stloc.1
L_0019: br.s L_003d
L_001b: ldarg.0
L_001c: dup
L_001d: ldfld int32 Tester::number
L_0022: ldc.i4.1
L_0023: add
L_0024: stfld int32 Tester::number
L_0029: ldarg.0
L_002a: ldfld int32 Tester::number
L_002f: box int32
L_0034: call void [System]System.Diagnostics.Trace::WriteLine(object)
L_0039: ldloc.1
L_003a: ldc.i4.1
L_003b: add
L_003c: stloc.1
L_003d: ldloc.1
L_003e: ldloc.0
L_003f: blt.s L_001b
L_0041: leave.s L_004f
L_0043: ldarg.0 // *
L_0044: ldfld object Tester::locker // *
L_0049: call void [mscorlib]System.Threading.Monitor::Exit(object) //*
L_004e: endfinally
L_004f: ret
.try L_0017 to L_0043 finally handler L_0043 to L_004f
}

...what part of that IL tells the JIT/IL compiler that Tester.number
specifically should be treated differently - where the lines commented // *
are the only lines distinct to the usage of Monitor.Enter/Exit?

Compared to use of volatile:
.field private int32
modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile) number
.method private hidebysig instance void UpdateNumber() cil managed
{
.maxstack 3
.locals init (
[0] int32 count,
[1] int32 i)
L_0000: ldarg.0
L_0001: ldfld class [mscorlib]System.Random One.Tester::random
L_0006: callvirt instance int32 [mscorlib]System.Random::Next()
L_000b: stloc.0
L_000c: ldc.i4.0
L_000d: stloc.1
L_000e: br.s L_0038
L_0010: ldarg.0
L_0011: dup
L_0012: volatile
L_0014: ldfld int32
modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)
One.Tester::number
L_0019: ldc.i4.1
L_001a: add
L_001b: volatile
L_001d: stfld int32
modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)
One.Tester::number
L_0022: ldarg.0
L_0023: volatile
L_0025: ldfld int32
modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)
One.Tester::number
L_002a: box int32
L_002f: call void [System]System.Diagnostics.Trace::WriteLine(object)
L_0034: ldloc.1
L_0035: ldc.i4.1
L_0036: add
L_0037: stloc.1
L_0038: ldloc.1
L_0039: ldloc.0
L_003a: blt.s L_0010
L_003c: ret
}

...where an IL compiler is given ample information that Tester.number
should be treated differently.

I don't think it's safe, readable, or future-friendly to use syntax
strictly for its secondary consequences (using Monitor.Enter/Exit not for
synchronization but for acquire/release semantics - as in the above line,
where modification of an int is already atomic and "synchronization" is
irrelevant), even if it were effectively identical to another syntax. Yes,
if you've got a non-atomic invariant you still have to synchronize (with
lock, etc.)... but volatility is different and needs to be accounted for
just as much as thread-safety.

-- Peter
Jun 19 '07 #40
On Jun 19, 3:11 pm, Peter Ritchie [C# MVP] <PRS...@newsgroups.nospam>
wrote:
I guess we'll just have to disagree on a few things, for the reasons I've
already stated. I don't see much point in going back and forth saying the
same things...
I should say (and I've only just remembered) that a few years ago I
was unsure where the safety came from, and I mailed someone (Vance
Morrison? Chris Brumme?) who gave me the explanation I've been giving
you.
With regard to runtime volatile read/writes and acquire/release semantics of
Monitor.Enter and Monitor.Exit we can agree.

I don't agree that anything specified in either 334 or 335 covers all levels
of potential compile-time class member JIT/IL compiler optimizations.
It specifies how the system as a whole must behave: given a certain
piece of IL, there are
I don't agree that "int number; void UpdateNumber(){lock(locker){
number++;}}" is equally as safe as "volatile int number; void UpdateNumber(){
number++; }"
I agree - the version without the lock is *unsafe*. Two threads could
both read, then both increment, then both store in the latter case.
With the lock, everything is guaranteed to work.
With the following Monitor.Enter/Exit IL, for example:
<snip>
...what part of that IL tells the JIT/IL compiler that Tester.number
specifically should be treated differently--where lines commented // * are
the only lines distinct to usage of Monitor.Enter/Exit?
The fact that it knows Monitor.Enter is called, so the load (in the
logical memory model) cannot occur before Monitor.Enter. Likewise it
knows that Monitor.Exit is called, so the store can't occur after
Monitor.Exit. If it calls another method which *might* call
Monitor.Enter/Exit, it likewise can't move the reads/writes as that
would violate the spec.
...where an IL compiler is given ample information that
Tester.number should be treated differently.
It's being given ample
I don't think it's safe, readable, or future friendly to utilize syntax
strictly for their secondary consequences (using Monitor.Enter/Exit not for
synchronization but for acquire/release semantics. As in the above line
where modification of an int is already atomic; "synchronization" is
irrelevant), even if they were effectively identical to another syntax. Yes,
if you've got a non-atomic invariant you still have to synchronize (with
lock, etc.)... but volatility is different and needs to be accounted for
equally as much as thread-safety.
Again you're treating atomicity as almost interchangeable with
volatility, when they're certainly not. Synchronization is certainly
relevant whether or not writes are atomic. Atomicity just states that
you won't see a "half way" state; volatility state that you will see
the "most recent" value. That's a huge difference.

The volatility is certainly not just a "secondary consequence" - it's
vital to the usefulness of locking.

Consider a type which isn't thread-aware - in other words, nothing is
marked as volatile, but it also has no thread-affinity. That should be
the most common kind of type, IMO. You can't retrospectively mark the
fields as being volatile, but you *do* want to ensure that if you use
objects of the type carefully (i.e. always within a consistent lock)
you won't get any unexpected behaviour. Due to the guarantees of
locking, you're safe. Otherwise, you wouldn't be. Without that
guarantee, you'd be entirely at the mercy of type authors for *all*
types that *might* be used in a multi-threaded environment making all
their fields volatile.

Further evidence that it's not just a secondary effect, but one which
certainly *can* be relied on: there's no other thread-safe way of
using doubles. They *can't* be marked as volatile - do you really
believe that MS would build .NET in such a way that wouldn't let you
write correct code to guarantee that you see the most recent value of
a double, rather than one cached in a register somewhere?

This *is* guaranteed - it's the normal way of working in the framework
(as Willy said, look for volatile fields in the framework itself) and
it's perfectly fine to rely on it.
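
In other words, the only portable idiom for a shared double is something
like this sketch (the names are made up):

private double price;   // volatile is not permitted on double
private readonly object priceLock = new object();

public double Price {
    get { lock (priceLock) { return price; } }
    set { lock (priceLock) { price = value; } }
}

The lock provides both atomicity (a double write isn't atomic on its own)
and visibility.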

Jon

Jun 19 '07 #41
On Jun 19, 4:12 pm, "Jon Skeet [C# MVP]" <s...@pobox.comwrote:

<snip - looks like I didn't finish all of this>
I don't agree that anything specified in either 334 or 335 covers all levels
of potential compile-time class member JIT/IL compiler optimizations.

It specifies how the system as a whole must behave: given a certain
piece of IL, there are
It specifies how the system as a whole must behave: given a certain
piece of IL, there are valid behaviours and invalid behaviours. If you
can observe that a variable has been read before a lock has been
acquired and that value has then been used (without rereading) after
the lock has been acquired, then the CLR has a bug, pure and simple.
It violates the spec in a pretty clear-cut manner.

Jon

Jun 19 '07 #42
Consider a type which isn't thread-aware - in other words, nothing is
marked as volatile, but it also has no thread-affinity. That should be
the most common kind of type, IMO. You can't retrospectively mark the
fields as being volatile, but you *do* want to ensure that if you use
You don't need to modify the type definition, you would need a volatile
variable of that type.
Jun 19 '07 #43
It specifies how the system as a whole must behave: given a certain
piece of IL, there are valid behaviours and invalid behaviours. If you
can observe that a variable has been read before a lock has been
acquired and that value has then been used (without rereading) after
the lock has been acquired, then the CLR has a bug, pure and simple.
It violates the spec in a pretty clear-cut manner.
That's not the same thing as saying that Monitor.Enter and Monitor.Exit
are what maintain that behaviour.

In 335, section 12.6.5 has "[calling Monitor.Enter]...shall implicitly
perform a volatile read operation...", which says to me that one volatile
operation is performed. And "[calling Monitor.Exit]...shall implicitly
perform a volatile write operation." A write to what? As in this snippet:
Monitor.Enter(this.locker);
Trace.WriteLine(this.number);
Monitor.Exit(this.locker);

It only casually mentions "See [section] 12.6.7", which discusses acquire
and release semantics in the context of the volatile prefix (assuming the C#
volatile keyword is what causes generation of this prefix). 12.6.7 only
mentions "the read" or "the write"; it does not mention anything about a set
or block of reads/writes. I think you've made quite a leap getting to: code
between Monitor.Enter and Monitor.Exit has volatility guarantees.

Writing a sample "that works" is meaningless to me. I've dealt with
thousands of snippets of code "that worked" in certain circumstances (usually
resulting in me fixing them to "really work").

You're free to interpret the spec any way you want, and if you've gotten
information from Chris or Vance, you've got their interpretation of the spec
and, best case, you've got information specific to Microsoft's JIT/IL
compilers.

Based upon the spec, I *know* that this is safe code:
public volatile int number;
public void DoSomething() {
    this.number = 1;
}

This is equally safe:
public volatile int number;
public void DoSomething() {
    lock(locker) {
        this.number = 1;
    }
}

I think it's open to interpretation of the spec whether this is safe:
public int number;
public void DoSomething() {
    lock(locker) {
        this.number = 1;
    }
}

...it might be safe in Microsoft's implementations; but that's not open
information and I don't think it's due to Monitor.Enter/Monitor.Exit.

I don't see what the issue with volatile is, if you're not using "volatile"
for synchronization. Worst case with this:
public volatile int number;
public void DoSomething() {
    this.number = 1;
}
you've explicitly stated your volatility usage/expectation: more readable,
makes no assumptions...

Whereas:
public int number;
public void DoSomething() {
    lock(locker) {
        this.number = 1;
    }
}

...best case, this isn't as readable because it uses implicit volatility
side-effects.

What happens with the following code?
internal class Tester {
    private Object locker = new Object();
    private Random random = new Random();
    public int number;

    public Tester()
    {
        DoWork(false);
    }

    public void UpdateNumber() {
        Monitor.Enter(locker);
        DoWork(true);
    }

    private void DoWork(Boolean doOut) {
        this.number = random.Next();
        if (doOut)
        {
            switch (random.Next(2))   // Next(2) so that case 1 is reachable
            {
                case 0:
                    Out1();
                    break;
                case 1:
                    Out2();
                    break;
            }
        }
    }

    private void Out1() {
        Monitor.Exit(this.locker);
    }

    private void Out2() {
        Monitor.Exit(this.locker);
    }
}

...clearly there isn't enough information merely from the existence of
Monitor.Enter and Monitor.Exit to maintain those guarantees.

Again you're treating atomicity as almost interchangeable with
volatility,
<snip>
No, I'm not. I said you don't need to synchronize an atomic invariant but
you still need to account for its volatility (by declaring it volatile). I
didn't say volatility was a secondary concern, I said it needs to be
accounted for equally. I was implying that using the "lock" keyword is not
as clear in terms of volatility assumptions/needs as is the "volatile"
keyword. If I read some code that uses "lock", I can't assume the author
did that for volatility reasons and not just synchronization reasons; whereas
if she had put "volatile" on a field, I know for sure why she put that there.
This *is* guaranteed, it's the normal way of working in the framework
(as Willy said, look for volatile fields in the framework itself)
Which ones? Like Hashtable.version or StringBuilder.m_StringValue?
Jun 19 '07 #44
Ben Voigt [C++ MVP] <rb*@nospam.nospamwrote:
Consider a type which isn't thread-aware - in other words, nothing is
marked as volatile, but it also has no thread-affinity. That should be
the most common kind of type, IMO. You can't retrospectively mark the
fields as being volatile, but you *do* want to ensure that if you use

You don't need to modify the type definition, you would need a volatile
variable of that type.
Just because the variable itself is volatile doesn't mean every access
would be volatile in the appropriate way. Consider:

public class Foo
{
    public int bar; // No, I'd never use a public field really...
}
public class AnotherClass
{
    volatile Foo x;

    void SomeMethod()
    {
        x.bar = 100;
    }
}

Now, you've got a volatile *read* but not a volatile *write* - and a
volatile write is what you really need to make sure that the write is
visible to other threads.
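
Under the locking idiom, the fix is to protect both ends with the same lock
- a sketch, with fooLock made up:

// writer
lock (fooLock) { x.bar = 100; }

// reader
int read;
lock (fooLock) { read = x.bar; }

The acquire/release semantics of the lock then cover bar as well, even
though neither x nor bar is declared volatile.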

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jun 19 '07 #45
Peter Ritchie [C# MVP] <PR****@newsgroups.nospamwrote:
It specifies how the system as a whole must behave: given a certain
piece of IL, there are valid behaviours and invalid behaviours. If you
can observe that a variable has been read before a lock has been
acquired and that value has then been used (without rereading) after
the lock has been acquired, then the CLR has a bug, pure and simple.
It violates the spec in a pretty clear-cut manner.

That's not the same thing as saying that Monitor.Enter and Monitor.Exit
are what maintain that behaviour.
Well, without that guarantee for Monitor.Enter/Monitor.Exit I don't
believe it would be possible to write thread-safe code.
In 335, section 12.6.5 has "[calling Monitor.Enter]...shall implicitly
perform a volatile read operation...", which says to me that one volatile
operation is performed. And "[calling Monitor.Exit]...shall implicitly
perform a volatile write operation." A write to what? As in this snippet:
Monitor.Enter(this.locker);
Trace.WriteLine(this.number);
Monitor.Exit(this.locker);
It doesn't matter what the volatile write is to - it's the location in
the CIL that matters. No other writes can be moved (logically) past
that write, no matter what they're writing to.
It only casually mentions "See [section] 12.6.7", which discusses acquire
and release semantics in the context of the volatile prefix (assuming the C#
volatile keyword is what causes generation of this prefix).
I don't see what's "casual" about it, nor why you should believe that
12.6.7 should only apply to instructions with the "volatile." prefix.
The section starts off by mentioning the prefix, but then talks in
terms of volatile reads and volatile writes - which are the same terms
as 12.6.5 talks in.
12.6.7 only
mentions "the read" or "the write" it does not mention anything about a set
or block of read/writes. I think you've made quite a leap getting to: code
between Monitor.Enter and Monitor.Exit has volatility guarantees.
I really, really haven't. I think the problem is the one I talk about
above - you're assuming that *what* is written to matters, rather than
just the location of a volatile write in the CIL stream. Look at the
guarantee provided by the spec:

<quote>
A volatile read has "acquire semantics" meaning that the read is
guaranteed to occur prior to any references to memory that occur after
the read instruction in the CIL instruction sequence. A volatile write
has "release semantics" meaning that the write is guaranteed to happen
after any memory references prior to the write instruction in the CIL
instruction sequence.
</quote>

Where does that say anything about it being dependent on what is being
written or what is being read? It just talks about reads and writes
being moved in terms of their position in the CIL sequence.

So, no write that occurs before the call to Monitor.Exit in the IL can
be moved beyond the call to Monitor.Exit in the memory model, and no
read that occurs after Monitor.Enter in the IL can be moved to earlier
than Monitor.Enter in the memory model. That's all that's required for
thread safety.
Writing a sample "that works" is meaningless to me. I've dealt with
thousands of snippets of code "that worked" in certain circumstances (usually
resulting in me fixing them to "really work").
I'm not talking about certain circumstances - I'm talking about
*guarantees* provided by the CLI spec.

I'm saying that I can write code which doesn't use volatile but which
is *guaranteed* to work. I believe you won't be able to provide any
example of how it could fail without the CLI spec itself being
violated.
You're free to interpret the spec any way you want, and if you've gotten
information from Chris or Vance, you've got their interpretation of the spec
and, best case, you've got information specific to Microsoft's JIT/IL
compilers.
Well, I've got information specific to the .NET 2.0 memory model (which
is stronger than the CLI specified memory model) elsewhere.

However, I feel pretty comfortable having the interpretation of experts
who possibly contributed to the spec, or at least have direct contact
with those who wrote it.
Based upon the spec, I *know* that this is safe code:
public volatile int number;
public void DoSomething() {
this.number = 1;
}

This is equally safe:
public volatile int number;
public void DoSomething() {
lock(locker) {
this.number = 1;
}
}

I think it's open to interpretation of the spec whether this is safe:
public int number;
public void DoSomething() {
lock(locker) {
this.number = 1;
}
}
Well, this is why I suggested that I post a complete program - then you
could suggest ways in which it could go wrong, and I think I'd be able
to defend it in fairly clear-cut terms.
...it might be safe in Microsoft's implementations; but that's not open
information and I don't think it's due to Monitor.Enter/Monitor.Exit.
I *hope* we won't just have to agree to disagree, but I realise that
may be the outcome :(
I don't see what the issue with volatile is, if you're not using "volatile"
for synchronization. Worst case with this:
public volatile int number;
public void DoSomething() {
this.number = 1;
}
you've explicitly stated your volatility usage/expectation: more readable,
makes no assumptions...
It implies that without volatility you've got problems - which you
haven't (provided you use locking correctly). This means you can use a
single way of working for *all* types, regardless of whether you can
use the volatile modifier on them.
Whereas:
public int number;
public void DoSomething() {
lock(locker) {
this.number = 1;
}
}

...best case, this isn't as readable because it uses implicit volatility
side-effects.
If you're not used to that being the idiom, you're right. However, if
I'm writing thread-safe code (most types don't need to be thread-safe)
I document what lock any shared data comes under. I can rarely get away
with a single operation anyway.

Consider the simple change from this:

this.number = 1;

to this:

this.number++;

With volatile, your code is now broken - and it's not obvious, and
probably won't show up in testing. With lock, it's not broken.
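
A sketch of that failure mode (names invented; the point is only that volatile
does nothing for a read-modify-write):

using System.Threading;

class Counter
{
    private volatile int volatileNumber;
    private int lockedNumber;
    private readonly object locker = new object();

    // Broken: number++ is a read, an add and a write. Two threads can both
    // read the same value and one increment is silently lost - volatile
    // guarantees visibility of each access, not atomicity of the three steps.
    public void UnsafeIncrement()
    {
        volatileNumber++;
    }

    // Correct: the lock makes the whole read-modify-write exclusive, and its
    // acquire/release semantics publish the result.
    public void SafeIncrement()
    {
        lock (locker)
        {
            lockedNumber++;
        }
    }

    // Also correct: an atomic read-modify-write without taking a lock.
    public void IncrementViaInterlocked()
    {
        Interlocked.Increment(ref lockedNumber);
    }
}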
What happens with the following code?
internal class Tester {
private Object locker = new Object();
private Random random = new Random();
public int number;

public Tester()
{
DoWork(false);
}

public void UpdateNumber() {
Monitor.Enter(locker);
DoWork(true);
}
What happens here is that I don't let this method go through code
review. There have to be *very* good reasons not to use lock{}, and in
those cases there would almost always still be a try/finally.

I wouldn't consider using volatile just to avoid the possibility of
code like this (which I've never seen in production, btw).

private void DoWork(Boolean doOut) {
this.number = random.Next();
if(doOut)
{
switch(random.Next(2))
{
case 0:
Out1();
break;
case 1:
Out2();
break;
}
}
}

private void Out1() {
Monitor.Exit(this.locker);
}

private void Out2() {
Monitor.Exit(this.locker);
}
}
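
For contrast, a sketch of what the reviewed version would look like - lock{}
keeps the acquire and the release in one visible scope and supplies the
try/finally:

public void UpdateNumber()
{
    // lock expands to Monitor.Enter followed by
    // try { ... } finally { Monitor.Exit(locker); },
    // so the lock is released even if DoWork throws.
    lock (locker)
    {
        DoWork(true);
    }
}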

...clearly there isn't enough information merely from the existence of
Monitor.Enter and Monitor.Exit to maintain those guarantees.
It's the other way round - the JIT compiler doesn't have enough
information to perform certain optimisations, simply because it can't
know whether or not Monitor.Exit will be called.

Assuming the CLR follows the spec, it can't move the write to number to
after the call to random.Next() - because that call to random.Next()
may involve releasing a lock, and it may involve a write.

Now, I agree that it really limits the scope of optimisation for the
JIT - but that's what the CLI spec says.
Again you're treating atomicity as almost interchangeable with
volatility,
<snip>
No, I'm not. I said you don't need to synchronize an atomic invariant but
you still need to account for its volatility (by declaring it volatile). I
didn't say volatility was a secondary concern, I said it needs to be
accounted for equally. I was implying that using the "lock" keyword is not
as clear in terms of volatility assumptions/needs as is the "volatile"
keyword. If I read some code that uses "lock", I can't assume the author
did that for volatility reasons and not just synchronization reasons; whereas
if she had put "volatile" on a field, I know for sure why she put that there.
I use lock when I'm going to use shared data. When I use shared data, I
want to make sure I don't ignore previous changes - hence it needs to
be volatile.

Volatility is a natural consequence of wanting exclusive access to a
shared variable - which is why exactly the same strategy works in Java,
by the way (which has a slightly different memory model). Without the
guarantees given by the CLI spec, having a lock would be pretty much
useless.
This *is* guaranteed, it's the normal way of working in the framework
(as Willy said, look for volatile fields in the framework itself)

Which ones? Like Hashtable.version or StringBuilder.m_StringValue?
Yup, there are a few - but I believe there are far more places which
use the natural (IMO) way of sharing data via exclusive access, and
taking account the volatility that naturally provides.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jun 19 '07 #46
"Jon Skeet [C# MVP]" <sk***@pobox.comwrote in message
news:11*********************@q75g2000hsh.googlegroups.com...
On Jun 19, 3:11 pm, Peter Ritchie [C# MVP] <PRS...@newsgroups.nospam>
wrote:
>I guess we'll just have to disagree on a few things, for the reasons I've
already stated. I don't see much point in going back and forth saying the
same things...

I should say (and I've only just remembered) that a few years ago I
was unsure where the safety came from, and I mailed someone (Vance
Morrison? Chris Brumme?) who gave me the explanation I've been giving
you.
>With regard to runtime volatile read/writes and acquire/release semantics of
Monitor.Enter and Monitor.Exit we can agree.

I don't agree that anything specified in either ECMA-334 or ECMA-335 covers
all levels of potential compile-time class member JIT/IL compiler
optimizations.

It specifies how the system as a whole must behave: given a certain
piece of IL, there are limits on which observable behaviours are permitted.
>I don't agree that "int number; void UpdateNumber(){lock(locker){
number++;}}" is equally as safe as "volatile int number; void
UpdateNumber(){
number++; }"

I agree - the version without the lock is *unsafe*. Two threads could
both read, then both increment, then both store in the latter case.
With the lock, everything is guaranteed to work.
>With the following Monitor.Enter/Exit IL, for example:

<snip>
>...what part of that IL tells the JIT/IL compiler that Tester.number
specifically should be treated differently--where lines commented // * are
the only lines distinct to usage of Monitor.Enter/Exit?

The fact that it knows Monitor.Enter is called, so the load (in the
logical memory model) cannot occur before Monitor.Enter. Likewise it
knows that Monitor.Exit is called, so the store can't occur after
Monitor.Exit. If it calls another method which *might* call
Monitor.Enter/Exit, it likewise can't move the reads/writes as that
would violate the spec.
>...where an IL compiler is given ample amounts of information that
Tester.number should be treated differently.

It's being given ample information - the calls to Monitor.Enter and
Monitor.Exit are themselves what restrict the reordering.
>I don't think it's safe, readable, or future friendly to utilize syntax
strictly for their secondary consequences (using Monitor.Enter/Exit not for
synchronization but for acquire/release semantics. As in the above line
where modification of an int is already atomic; "synchronization" is
irrelevant), even if they were effectively identical to another syntax. Yes,
if you've got a non-atomic invariant you still have to synchronize (with
lock, etc.)... but volatility is different and needs to be accounted for
equally as much as thread-safety.

Again you're treating atomicity as almost interchangeable with
volatility, when they're certainly not. Synchronization is certainly
relevant whether or not writes are atomic. Atomicity just states that
you won't see a "half way" state; volatility state that you will see
the "most recent" value. That's a huge difference.

The volatility is certainly not just a "secondary consequence" - it's
vital to the usefulness of locking.

Consider a type which isn't thread-aware - in other words, nothing is
marked as volatile, but it also has no thread-affinity. That should be
the most common kind of type, IMO. You can't retrospectively mark the
fields as being volatile, but you *do* want to ensure that if you use
objects of the type carefully (i.e. always within a consistent lock)
you won't get any unexpected behaviour. Due to the guarantees of
locking, you're safe. Otherwise, you wouldn't be. Without that
guarantee, you'd be entirely at the mercy of type authors for *all*
types that *might* be used in a multi-threaded environment making all
their fields volatile.
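
A sketch of that scenario - a thread-neutral type with plain fields, made safe
purely by the caller's consistent locking (all names invented):

// A type that knows nothing about threading: no volatile, no locks.
class Point
{
    public int X;
    public int Y;
}

class SharedPoint
{
    private readonly object locker = new object();
    private readonly Point point = new Point();

    public void Move(int dx, int dy)
    {
        lock (locker) // acquire: see writes made under earlier holds of this lock
        {
            point.X += dx;
            point.Y += dy;
        } // release: publish these writes to the next thread taking the lock
    }

    public int ReadX()
    {
        lock (locker)
        {
            return point.X;
        }
    }
}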

Further evidence that it's not just a secondary effect, but one which
certainly *can* be relied on: there's no other thread-safe way of
using doubles. They *can't* be marked as volatile - do you really
believe that MS would build .NET in such a way that wouldn't let you
write correct code to guarantee that you see the most recent value of
a double, rather than one cached in a register somewhere?
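
To make the double point concrete: the commented-out declaration below is
rejected by the compiler (error CS0677), so a lock is the idiomatic way to get
the visibility guarantee for a double (a sketch, names invented):

class Sensor
{
    // private volatile double reading; // CS0677: a volatile field cannot
    //                                  // be of the type 'double'
    private double reading;
    private readonly object locker = new object();

    public double Reading
    {
        get { lock (locker) { return reading; } }
        set { lock (locker) { reading = value; } }
    }
}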

This *is* guaranteed, it's the normal way of working in the framework
(as Willy said, look for volatile fields in the framework itself) and
it's perfectly fine to rely on it.

I see that my remark about the FCL was too strongly worded; I didn't mean to
say that "volatile" fields were not used at all in the FCL. Sure, they are
used, but only in a context where the author wanted to guarantee that a
field (most often a bool) access had acquire/release semantics and would not
be reordered, not in the context of a locked region. Also note that a large
part of the FCL was written against v1.0 (targeting X86 only), at a time when
there was no VolatileRead and long before the Interlocked class was
introduced.
The latest bits in the FCL more often use Interlocked and VolatileXXX
operations than the volatile modifier.
Also note that volatile does not imply a memory barrier, while lock,
Interlocked ops. and VolatileXXX do effectively imply a MemoryBarrier. The
way the barrier is implemented is platform specific, on X86 and X64 a full
barrier is raised, while on IA64 it depends on the operation.
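
For example, the kind of pattern those newer bits use (a sketch;
Thread.VolatileRead and Interlocked.Exchange are the APIs referred to above):

using System.Threading;

class CompletionFlag
{
    private int state; // 0 = running, 1 = finished; an int rather than a
                       // bool so the VolatileRead/Interlocked overloads apply

    public void Finish()
    {
        // Atomic write with a full fence - strictly stronger than a plain
        // C# volatile write.
        Interlocked.Exchange(ref state, 1);
    }

    public bool IsFinished()
    {
        // Thread.VolatileRead implies a memory barrier, unlike the plain
        // acquire semantics of reading a C# volatile field.
        return Thread.VolatileRead(ref state) == 1;
    }
}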
Willy.

Jun 19 '07 #47
"Jon Skeet [C# MVP]" wrote:
I'm saying that I can write code which doesn't use volatile but which
is *guaranteed* to work. I believe you won't be able to provide any
exmaple of how it could fail without the CLI spec itself being
violated.
Actually, I'm having a hard time getting the JIT to optimize *any* member
fields, even with a lack of locking. Local variables seem to be optimized into
registers easily, but not member fields...

If I could get an optimization of a member field I believe I would be able
to show an example.

For example:
private Random random = new Random();
public int Method()
{
int result = 0;
for(int i = 0; i < this.random.Next(); ++i)
{
result += 10;
}
return result;
}

ebx is used for result (and edi for i) while in the loop; but with:
private Random random = new Random();
private int number;
public int Method()
{
for(int i = 0; i < this.random.Next(); ++i)
{
this.number += 10;
}
return this.number;
}

...number is always accessed directly and never optimized to a register. I
think I'd find the same thing with re-ordering.
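
The classic way to provoke that optimisation is a tight read loop rather than
a read-modify-write; whether the hoist actually happens depends on the JIT and
build settings, so treat this as a sketch:

using System;
using System.Threading;

class HoistDemo
{
    private bool stop; // not volatile: the JIT is free to cache it in a register

    private void Spin()
    {
        // Nothing inside the loop tells the JIT that 'stop' can change, so
        // an optimising JIT may read it once and spin on the cached copy.
        while (!stop)
        {
        }
        Console.WriteLine("Observed stop = true");
    }

    public void Run()
    {
        Thread t = new Thread(new ThreadStart(Spin));
        t.Start();
        Thread.Sleep(1000);
        stop = true; // without volatile/lock, Spin may never observe this
        t.Join();    // can hang forever if the read was hoisted
    }
}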
Jun 19 '07 #48
Peter Ritchie [C# MVP] <PR****@newsgroups.nospamwrote:
"Jon Skeet [C# MVP]" wrote:
I'm saying that I can write code which doesn't use volatile but which
is *guaranteed* to work. I believe you won't be able to provide any
exmaple of how it could fail without the CLI spec itself being
violated.
Actually, I'm having a hard time getting the JIT to optimize *any* member
fields, even with a lack of locking. Local variables seem to be optimized into
registers easily, but not member fields...
I can well believe that, just as an easy way of fulfilling the spec.
If I could get an optimization of a member field I believe I would be able
to show an example.
Well, rather than arguing from a particular implementation (which, as
you've said before, may be rather stricter than the spec requires) I'd
be perfectly happy arguing from the spec itself. Then at least if there
are precise examples where I interpret the spec to say one thing and
you interpret it a different way, we'll know exactly where our
disagreement is.

<snip code>

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jun 20 '07 #49
"Peter Ritchie [C# MVP]" <PR****@newsgroups.nospamwrote in message
news:B3**********************************@microsoft.com...
"Jon Skeet [C# MVP]" wrote:
>I'm saying that I can write code which doesn't use volatile but which
is *guaranteed* to work. I believe you won't be able to provide any
exmaple of how it could fail without the CLI spec itself being
violated.
Actually, I'm having a hard time getting the JIT to optimize *any* member
fields, even with a lack of locking. Local variables seem to be optimized into
registers easily, but not member fields...

If I could get an optimization of a member field I believe I would be able
to show an example.

For example:
private Random random = new Random();
public int Method()
{
int result = 0;
for(int i = 0; i < this.random.Next(); ++i)
{
result += 10;
}
return result;
}

ebx is used for result (and edi for i) while in the loop; but with:
private Random random = new Random();
private int number;
public int Method()
{
for(int i = 0; i < this.random.Next(); ++i)
{
this.number += 10;
}
return this.number;
}

...number is always accessed directly and never optimized to a register. I
think I'd find the same thing with re-ordering.

In your sample, the member field has to be read from the object location in
the GC heap, and after the addition it has to be written back to the same
location.
The write "this.number +=.... "must be a "store acquire" to fulfill the
rules imposed by the CLR memory model. Note that this model derives from the
ECMA model!

The assembly code of the core part of the loop, looks something like this
(your mileage may vary):

mov eax,dword ptr [ebp-10h]
add dword ptr [eax+8],0Ah

Here the object reference of the current instance (this) is loaded from
[ebp-10h] and stored in eax, after which 0Ah is added to the location of the
'number' field [eax+8].

The question is: what else do you expect to be optimized, and what are
you expecting to illustrate?

Willy.

Jun 20 '07 #50

This thread has been closed and replies have been disabled. Please start a new discussion.

