I got great info on a related question previously. This link
<http://www.yoda.arachsys.com/csharp/threads/volatility.shtml> from Brian
Gideon was especially informative.
1) the lock statement does *not* make the processor finish all the code in the
code block before giving another thread from your process a slice of
processor time; it only makes sure other threads cannot run the same code
*if* they also adhere to the same lock object. (true/false) - I think true
2) the lock statement is not effective unless it is used for *all* access to
the shared resource; *both* read and write access. (true/false) - I think
true
Could someone confirm I'm understanding this correctly?
djc,
1) true
2) true
"djc" <no***@nowhere.com> wrote in message
news:OY**************@TK2MSFTNGP04.phx.gbl...
see notes below.
"djc" <no***@nowhere.com> wrote in message
news:OY**************@TK2MSFTNGP04.phx.gbl...
> 1) the lock statement does *not* make the processor finish all code in the code block before giving another thread from your process a slice of processor time, it only makes sure other threads cannot run the same code *if* they also adhere to the same lock object. (true/false) - I think true
If I am understanding you correctly, this is true: the lock simply defines a
critical section (and ensures that multiple threads will never be inside
that critical section concurrently).
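To make that concrete, here is a minimal sketch (the names are illustrative, not from the original question): two threads increment a shared counter while holding the same lock object, so the read-modify-write inside the lock block is never interleaved.

using System;
using System.Threading;

class CriticalSectionSketch
{
    // padlock and counter are illustrative names
    static readonly object padlock = new object();
    static int counter = 0;

    static void Worker()
    {
        for (int i = 0; i < 100000; i++)
        {
            lock (padlock)       // only one thread at a time can be in this block
            {
                counter++;       // read-modify-write is safe only because of the lock
            }
        }
    }

    static void Main()
    {
        Thread t1 = new Thread(Worker);
        Thread t2 = new Thread(Worker);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        // Always prints 200000; without the lock the count could come up short.
        Console.WriteLine(counter);
    }
}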
> 2) the lock statement is not effective unless it is used for *all* access to the shared resource; *both* read and write access. (true/false) - I think true
This is generally true. It is also important to remember that many things
are already atomic. As an example:
public class foo {
    private SomeObject bar;
    public SomeObject Bar {
        get { return bar; }
        set { bar = value; }
    }
}
These accessors will not need locking, as the value is the size of a reference
or smaller (which is guaranteed to be atomic by the framework).
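As a rough sketch of where that guarantee stops (a sketch with assumed names, not the original example): a reference or an int assignment is atomic, but a long is not guaranteed to be on a 32-bit runtime, so even a plain write to it needs Interlocked or a lock to avoid a torn value.

using System.Threading;

// Illustrative class and field names
public class AtomicitySketch
{
    private object reference;    // reference assignment: atomic
    private int small;           // 32-bit assignment: atomic
    private long wide;           // 64-bit assignment: NOT guaranteed atomic on a 32-bit runtime

    public object Reference
    {
        get { return reference; }
        set { reference = value; }   // atomic, but not volatile - visibility is a separate issue
    }

    public int Small
    {
        get { return small; }
        set { small = value; }
    }

    public long Wide
    {
        // Interlocked gives an atomic (non-torn) 64-bit read/write
        get { return Interlocked.Read(ref wide); }
        set { Interlocked.Exchange(ref wide, value); }
    }
}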
Greg Young <dr*******************@hotmail.com> wrote:
<snip>
> These accessors will not need locking as they are the size of a reference or smaller (which is assured to be atomic by the framework).
Not true. Atomicity and volatility are not the same thing. See http://www.pobox.com/~skeet/csharp/t...latility.shtml
For instance, if you use a boolean flag to indicate that a thread
should stop processing, and that flag is set from another thread, then
without either locking or volatility there is no guarantee that the
thread will ever see the change. Here's an example:
using System;
using System.Threading;

class Test
{
    static bool stop = false;

    static void Main()
    {
        new Thread(KeepRunning).Start();
        Thread.Sleep(1000);
        stop = true;
        Console.WriteLine("Should stop very soon");
    }

    static void KeepRunning()
    {
        int i = 0;
        while (!stop)
        {
            i++;
        }
    }
}
Compile with the /o+ switch to enable optimisations, and on my box at
least, the process never dies.
--
Jon Skeet - <sk***@pobox.com> http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jon, I can't seem to find where I said that atomicity and volatility were the
same thing; my example was in fact correct in terms of being atomic, and the
discussion was on the need for locking.
If you look at the unmanaged code generated by your example it is quite easy
to see why it never ends: it holds the stop variable in a register (never
checking the memory location). By placing volatile on it you force the JIT
to generate code to re-read the variable from its memory location. Locking
also works here, although it works as a convoluted by-product of various JIT
optimizations.
If we look at the code in question we can quickly see what's going on:
00000000 movzx eax,byte ptr ds:[00912FC8h]
00000007 test eax,eax
00000009 jne 0000000F
0000000b test eax,eax
0000000d je 0000000B
0000000f ret
eax holds a copy of the memory location which is then tested in the loop. If
we use a property we will get the same result (JIT inlines the access for
us). If we lock within that property ...
static bool Stop {
    get { lock (LockObject) { return stop; } }
    set { lock (LockObject) { stop = value; } }
}
We get
00000000 call dword ptr ds:[00913060h]
00000006 test eax,eax
00000008 jne 00000014
0000000a call dword ptr ds:[00913060h]
00000010 test eax,eax
00000012 je 0000000A
00000014 ret
The only reason it works with the lock is because the lock forced the JIT to
not inline our property (thus not allowing the memory location to be stored
in a register).
The question was of course on locking, not on volatility; the fact that adding
the lock forces the JIT to not inline the method call (and thus not place the
value in a register) is a poor argument for why lock should be used. I agree
that volatility is also important and should be used where appropriate, but I
consider it a different discussion from that of locking.
In any case this is certainly worth a blog post :)
Cheers,
Greg Young
MVP - C# http://codebetter.com/blogs/gregyoung
"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
Just to be clear, the proper way to deal with the code if volatility were a
concern would be:

public class foo {
    private volatile SomeObject bar;
    public SomeObject Bar {
        get { return bar; }
        set { bar = value; }
    }
}

Which will inline and generate as volatile. You have to be very careful with
overusing volatile though, as it will significantly limit the optimizations
that you can have.
Cheers,
Greg Young
MVP - C# http://codebetter.com/blogs/gregyoung
"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
Hi Greg. I think that is what Jon was saying. Without the volatile (or a
lock, or CompareExchange, or a memory barrier), bar may never be seen to change
from another thread (for some time) - hence the potential issue. So if multiple
threads will read/write Bar, this will be a concern.
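For example, one of those alternatives applied to the stop-flag case might look roughly like this (a sketch only; routing every read and write through Interlocked is just one of the options, and the names are assumed):

using System;
using System.Threading;

class StopFlagSketch
{
    // 0 = keep running, 1 = stop (an int so it can go through Interlocked)
    static int stop = 0;

    static void Main()
    {
        new Thread(KeepRunning).Start();
        Thread.Sleep(1000);
        Interlocked.Exchange(ref stop, 1);      // write with barrier semantics
        Console.WriteLine("Should stop very soon");
    }

    static void KeepRunning()
    {
        // CompareExchange with identical values acts as a barriered read of stop
        while (Interlocked.CompareExchange(ref stop, 0, 0) == 0)
        {
        }
    }
}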
--
William Stacey [MVP]
"Greg Young" <dr*******************@hotmail.com> wrote in message
news:OW**************@TK2MSFTNGP04.phx.gbl...
Greg Young <dr*******************@hotmail.com> wrote:
> Jon I can't seem to find where I said that atomicity and volatility were the same thing, my example was in fact correct in terms of being atomic, the discussion was on the need for locking.
You claimed that the accessor didn't need locking, because the
read/write was atomic. The atomicity isn't the only reason for locking
- the need to ensure you read up-to-date values is, IMO.
Unless you really don't care whether or not you see a change to a
value, you need the lock or some other way of forcing a memory barrier.
> If you look at the unmanaged code generated by your example it is quite easy to see why it never ends .. it holds the stop variable in a register (never checking the memory location). By placing volatile on it you force the JIT to generate code to re-read the variable from its memory location. Locking also works here although it works as a convoluted by-product of various jit optimizations.
Not really as a by-product - it's absolutely specified that using a
lock will have the right effect, and that's by design.
> The only reason it works with the lock is because the lock forced the JIT to not inline our property (thus not allowing the memory location to be stored in a register).
No. It works because acquiring a lock counts as a volatile read, and
releasing a lock counts as a volatile write.
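Roughly speaking, the lock statement is shorthand for Monitor.Enter/Monitor.Exit in a try/finally, and those acquire/release points are what carry the volatile-read/volatile-write semantics. A sketch of what the locked Stop property expands to (approximate, not the exact compiler output):

using System.Threading;

class LockExpansionSketch
{
    static readonly object LockObject = new object();
    static bool stop;

    // Roughly what "lock (LockObject) { ... }" in the Stop property expands to:
    // Monitor.Enter is the acquire (volatile-read) side and Monitor.Exit the
    // release (volatile-write) side, so the getter always re-reads stop.
    static bool Stop
    {
        get
        {
            Monitor.Enter(LockObject);
            try
            {
                return stop;
            }
            finally
            {
                Monitor.Exit(LockObject);
            }
        }
        set
        {
            Monitor.Enter(LockObject);
            try
            {
                stop = value;
            }
            finally
            {
                Monitor.Exit(LockObject);
            }
        }
    }
}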
> The question was of course on locking not on volatility the fact that adding the lock forces the JIT to not inline the method call (and thus not place it in a register) is a poor argument for why lock should be used.
It would be if that were the argument.
> I agree that volatility is also important and should in some cases be used where appropriate but I consider it to be a different discussion than that of locking.
They're closely related, IMO - both are used for making sure that the
appropriate value is read, even though lock has *additional* properties
in terms of synchronization.
--
Jon Skeet - <sk***@pobox.com> http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Greg Young <dr*******************@hotmail.com> wrote:
> Just to be clear the proper way to deal with the code if volatility were a concern would be
> public class foo { private volatile SomeObject bar; public SomeObject Bar { get { return bar; } set { bar = value; } } }
> Which will inline and generate as volatile. You have to be very careful with overusing volatile though as it will significantly limit the optimizations that you can have.
It also fails as soon as you've got a property which needs to set two
different variables. I prefer to have just one way of obtaining
thread-safety: a lock.
(I also believe that very few classes ought to try to be thread-safe,
so the performance penalties are usually minimal.)
--
Jon Skeet - <sk***@pobox.com> http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
>> The only reason it works with the lock is because the lock forced the JIT to not inline our property (thus not allowing the memory location to be stored in a register).
> No. It works because acquiring a lock counts as a volatile read, and releasing a lock counts as a volatile write.
Jon, take a look at the unmanaged code I posted: memory barriers are not
involved; in this case the lock simply forces the JIT to not inline the
property, so it cannot optimize the value into a register (forcing the
equivalent of a volatile read). You could in fact just make the property code
larger than the inlining maximum (32 bytes if I remember correctly) and it
would have the same effect (or you could just use
[MethodImpl(MethodImplOptions.NoInlining)], which provides the exact same
behavior).
This code will also work for your stop example ...
// Requires using System.Runtime.CompilerServices; for [MethodImpl].
static bool stop = false;
static object LockObject = new Object();

static bool Stop {
    [MethodImpl(MethodImplOptions.NoInlining)]
    get { return stop; }
    [MethodImpl(MethodImplOptions.NoInlining)]
    set { stop = value; }
}

static void Main() {
    new Thread(KeepRunning).Start();
    Thread.Sleep(1000);
    Stop = true;
    Console.WriteLine("Should stop very soon");
}

static void KeepRunning() {
    int i = 0;
    while (!Stop) {
        i++;
    }
}
Since the property is not being inlined it will force the value to not be
held in a register (the same thing that happens when the lock is applied).
Since it does not get optimized into a register (and can never be) it
becomes a non-issue. The barriers you mention never come into play here.
I am putting further comments in my other post ...
Cheers,
Greg
"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
I agree. As I said, I should have included a comment about using volatile
when needed (which is sometimes hard to know; more on that below).
But one should not be using lock to provide this behavior (as suggested in
the article), as its success is a by-product of other optimizations (it works
because it prevents the JIT optimization from occurring, adding overhead) and
it adds significant overhead itself. There has been some commentary on it
being due to memory barriers, but they never come into play here (and could be
viewed as further unnecessary overhead).
If we look at the code generated by the volatile keyword on our member (even
through a property), it produces the exact unmanaged code we want; the
lock-based method produces code that is orders of magnitude slower even when
uncontested. Just to give you an idea of the difference, here are the
functions I used to test:
static int Dummy;   // sink for the test reads; declaration assumed (type taken as int)
static int notvol = 1;
static volatile int vol = 1;
static object lockobj = new object();

static int Notvol {
    get { lock (lockobj) { return notvol; } }
}

static int Vol {
    get { return vol; }
}

public static void TestWithLock() {
    for (int i = 0; i < 100000; i++) {
        Dummy = Notvol;
    }
}

public static void TestWithVolatile() {
    for (int i = 0; i < 100000; i++) {
        Dummy = Vol;
    }
}
The unmanaged code ..
public static void TestWithVolatile() {
for (int i = 0; i < 100000; i++) {
00000000 xor edx,edx
Dummy = Vol;
00000002 mov eax,dword ptr ds:[00912FF4h]
00000007 mov dword ptr ds:[00912FECh],eax
for (int i = 0; i < 100000; i++) {
0000000c add edx,1
0000000f cmp edx,186A0h
00000015 jl 00000002
00000017 ret
public static void TestWithLock() {
for (int i = 0; i < 100000; i++) {
00000000 push esi
00000001 xor esi,esi
Dummy = Notvol;
00000003 call dword ptr ds:[0091318Ch]
00000009 mov dword ptr ds:[00912FECh],eax
for (int i = 0; i < 100000; i++) {
0000000e add esi,1
00000011 cmp esi,186A0h
00000017 jl 00000003
00000019 pop esi
}
}
0000001a ret
Test : Volatile took average ns = 154115.46886278
Test : NonVolatile took average ns = 11274013.3080039
These results are averaged over 10000 runs of each function: about 70 times
faster using the volatile as opposed to the lock (without any lock contention),
as the lock adds a huge amount of overhead that we don't need (and prevents the
JIT optimizer from inlining). Once we factor in the lock contention we will
surely have in a real-life scenario, I would expect this number could
conservatively be over 100 times.
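For reference, a self-contained harness along these lines (a sketch, not the original timing code; the class name and harness details are assumed, and the two test functions are restated so it compiles on its own) could be used to reproduce this kind of comparison:

using System;
using System.Diagnostics;

class BenchmarkHarnessSketch
{
    // Names below (Dummy, vol, notvol, lockobj) mirror the post; the harness itself is assumed.
    static int Dummy;
    static volatile int vol = 1;
    static int notvol = 1;
    static readonly object lockobj = new object();

    static int Vol { get { return vol; } }
    static int Notvol { get { lock (lockobj) { return notvol; } } }

    static void TestWithVolatile()
    {
        for (int i = 0; i < 100000; i++) { Dummy = Vol; }
    }

    static void TestWithLock()
    {
        for (int i = 0; i < 100000; i++) { Dummy = Notvol; }
    }

    // Runs the test many times and reports the average time per run in nanoseconds.
    static double AverageNanoseconds(Action test, int runs)
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < runs; i++) { test(); }
        sw.Stop();
        return sw.ElapsedTicks * (1e9 / Stopwatch.Frequency) / runs;
    }

    static void Main()
    {
        const int runs = 10000;
        Console.WriteLine("Volatile took average ns = " + AverageNanoseconds(TestWithVolatile, runs));
        Console.WriteLine("Lock took average ns = " + AverageNanoseconds(TestWithLock, runs));
        Console.WriteLine(Dummy);   // keep the sink observable so it is not optimized away
    }
}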
Since the main place we worry about volatility is in looping conditions where
variables often get optimized into registers, I feel that the performance here
is often of fairly high value.
Jon Skeet wrote:
> It also fails as soon as you've got a property which needs to set two different variables. I prefer to have just one way of obtaining thread-safety: a lock.
As I had said, I was dealing with a case where the value was under reference
size (without screwing with alignment etc). I feel removing needless locking
is important in performance-critical areas of code.
Jon Skeet wrote:
> (I also believe that very few classes ought to try to be thread-safe, so the performance penalties are usually minimal.)
My advice has always been: don't lock unless you actually need it.
I do agree with your last post about not doing this at all (which is why I had
no volatile to start with). I generally try to avoid any sort of threading
code (even the use of volatiles) unless I have a case where I actually need
it. I do this because it is nearly impossible for a class to know how the
caller intends to use it. Instead it is best to leave it to the caller how
they would like to use your code (otherwise you would technically need to
prevent all inlining or mark everything as volatile).
Just to be clear, I agree that volatility is a valid concern; I just think
that using lock to provide it is bad practice, as it comes with significant
overhead (in the range of 100 times slower) and works as a by-product of the
optimizer not inlining (see previous post on inlining).
Cheers,
Greg Young
MVP - C# http://codebetter.com/blogs/gregyoung
"William Stacey [MVP]" <wi************@gmail.com> wrote in message
news:eW**************@TK2MSFTNGP03.phx.gbl...
Also, Jon, I may be missing something dealing with multiple processors, but I
thought that these operations were assured to be atomic there as well
(correct me if I am wrong).
Cheers,
Greg
"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
Greg Young wrote:
> > No. It works because acquiring a lock counts as a volatile read, and releasing a lock counts as a volatile write.
> Jon, take a look at the unmanaged code generated I posted ... Memory barriers are not involved, in this case it simply forces the JIT to not inline the property and is as such unable to possibly optimize the value into a register (forcing the equivalent of a volatile read).
Memory barriers most certainly *are* involved - in the specification.
They don't necessarily need to be involved in the implementation, if
the implementation knows that the appropriate memory barriers will
happen anyway.
The specification talks about the effects of locking in terms of memory
barriers, not in terms of inlining. It's possible that a different
implementation could still inline properties involving locks, but the
volatility would still have to be enforced in order to comply with the
spec.
> You could in fact just make the property code more than the inlining maximum (32 bytes if I remember correctly) and it would have the same effect (or you could just use [MethodImpl(MethodImplOptions.NoInlining)] which provides the exact same behavior).
> This code will also work for your stop example ...
It happens to work with the current implementation. There's nothing to
enforce that in the spec, though. I don't like designing threading with
only the current implementation in mind.
Jon
Greg Young wrote:
> Also Jon, I may be missing something dealing with multiple processors but I thought that these operations were assured to be atomic there as well (correct me if I am wrong).
Each operation would be atomic, but the operation *as a whole*
wouldn't, nor would it be volatile. If I have a property which has:
public string Foo
{
    set
    {
        foo = value;
        fooLength = value.Length;
    }
}
then there is no guarantee that other threads wouldn't see conflicting
values of foo and fooLength. They may see an update to fooLength but no
update to foo, or vice versa.
By locking all access to the properties (with the same lock) you can
achieve consistency, and guarantee at the same time that you will
always see latest values.
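A sketch of that locked version (the lock object name is assumed): both fields change inside one lock, and readers take the same lock, so foo and fooLength are always seen together.

public class FooHolder
{
    private readonly object padlock = new object();   // assumed name
    private string foo = "";
    private int fooLength = 0;

    public string Foo
    {
        get { lock (padlock) { return foo; } }
        set
        {
            lock (padlock)
            {
                // Both fields are updated under the same lock, so readers
                // taking that lock always see a consistent foo/fooLength pair.
                foo = value;
                fooLength = value.Length;
            }
        }
    }

    public int FooLength
    {
        get { lock (padlock) { return fooLength; } }
    }
}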
Jon
Greg Young wrote:
<snip>
> > It also fails as soon as you've got a property which needs to set two different variables. I prefer to have just one way of obtaining thread-safety: a lock.
> As I had said, I was dealing with a case where we were dealing with something under reference size (without screwing with alignment etc). I feel removing the needless locking is important in performant areas of code.
How often have you come across code where the locking was the
performance hit? I've only done so *once* - and that was when writing
lock primitives myself.
I'll take the balance of consistency (only needing one way of achieving
thread safety) over a *possible* performance hit until that performance
hit is *proven* to be a bottleneck. I believe the number of
applications where it's an issue is vanishingly small.
> Just to be clear .. I agree that volatility is a valid concern; I just think that using lock to provide it is bad practice as it comes with significant overhead (in the range of 100 times slower) and works as a by product of the optimizer not inlining (see previous post for inlining)
I believe in simplicity/consistency over performance - it's easier to
refactor to performance than the other way round. I'm a *big* fan of
not micro-optimising.
Jon
> It happens to work with the current implementation. There's nothing to enforce that in the spec though. I don't like designing threading with only the current implementation in mind.
This would always work ... it is also covered in partition 2b - 22.1.10
Cheers,
Greg
"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:11**********************@m73g2000cwd.googlegroups.com...
Greg Young <dr*******************@hotmail.com> wrote:
> > It happens to work with the current implementation. There's nothing to enforce that in the spec though. I don't like designing threading with only the current implementation in mind.
> This would always work ... it is also covered in partition 2b - 22.1.10
It would prevent inlining, but that doesn't guarantee a memory barrier.
At least, I see nothing in the spec which says that a native call
boundary has to be a memory barrier.
Inlining is logically orthogonal to memory barriers. Yes, the current
implementation effectively incurs a memory barrier when calling a non-
inlined method, but until you can point out where that is guaranteed in
the spec, I'll continue to believe that it would be possible for a
conforming implementation to still exhibit the buggy behaviour with
your non-inlining code.
--
Jon Skeet - <sk***@pobox.com> http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
After you put it this way, I spent some time curled up with the spec (it's
been a while since I read through it).
You are right on the spec part, although the only way native calls would not
provide a memory barrier is through destructive optimizations. Looking back,
I have actually recommended one such optimization, having to do with loops
and bounds-check removal, which would cause this to stop working.
I still prefer the volatile over the lock, but you are correct that the
non-inlining does not necessarily cover all cases.
Cheers,
Greg
"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...