
C#, Threads, Events, and DataGrids/DataSets

I am trying to run a thread off of a form, and every once in a while the thread will raise an event for the form to read. When the form gets the event, the form will place the event into a dataset and display it on a datagrid that is on the form. The problem is that the thread will slowly take over all of the processor time. After about 8 events, the form will not even respond anymore. Here is the guts of my test code.

// Class and event for the thread
using System;

namespace ThreadTestStuff
{
    public delegate void TestEventHandler(object sender, int count);

    public class TestThread
    {
        public event TestEventHandler TestEvent;
        public bool stopRunning = false;

        public TestThread()
        {
        }

        public void RunningThread()
        {
            int xyz = 0;
            while (!stopRunning)
            {
                xyz += 1;
                Console.WriteLine("Count: " + xyz.ToString());
                if (xyz % 1000 == 0)
                    TestEvent(this, xyz);
            }
        }
    }
}

// Form that calls the test thread
// Data set only has (int count) and (string desc) in it

using System;
using System.Drawing;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;
using System.Threading;
using ThreadTestStuff;

namespace ThreadTest
{
    public class ThreadTestForm : System.Windows.Forms.Form
    {
        private ThreadTest.TestSet testSet1;
        private System.Windows.Forms.DataGrid TestDG;
        private System.Windows.Forms.Button StartThreadButton;
        private Thread localThread;
        private TestThread localTestThread;

        private System.ComponentModel.Container components = null;

        public ThreadTestForm()
        {
            InitializeComponent();
        }

        protected override void Dispose(bool disposing)
        {
            localTestThread.stopRunning = true;
            localThread.Abort();
            if (disposing)
            {
                if (components != null)
                {
                    components.Dispose();
                }
            }
            base.Dispose(disposing);
        }

        #region Windows Form Designer generated code
        private void InitializeComponent()
        {
            this.testSet1 = new ThreadTest.TestSet();
            this.TestDG = new System.Windows.Forms.DataGrid();
            this.StartThreadButton = new System.Windows.Forms.Button();
            ((System.ComponentModel.ISupportInitialize)(this.testSet1)).BeginInit();
            ((System.ComponentModel.ISupportInitialize)(this.TestDG)).BeginInit();
            this.SuspendLayout();
            //
            // testSet1
            //
            this.testSet1.DataSetName = "TestSet";
            this.testSet1.Locale = new System.Globalization.CultureInfo("en-US");
            //
            // TestDG
            //
            this.TestDG.DataMember = "";
            this.TestDG.DataSource = this.testSet1.TestTable;
            this.TestDG.HeaderForeColor = System.Drawing.SystemColors.ControlText;
            this.TestDG.Location = new System.Drawing.Point(16, 24);
            this.TestDG.Name = "TestDG";
            this.TestDG.Size = new System.Drawing.Size(320, 144);
            this.TestDG.TabIndex = 0;
            //
            // StartThreadButton
            //
            this.StartThreadButton.Location = new System.Drawing.Point(224, 184);
            this.StartThreadButton.Name = "StartThreadButton";
            this.StartThreadButton.Size = new System.Drawing.Size(120, 32);
            this.StartThreadButton.TabIndex = 1;
            this.StartThreadButton.Text = "Start Thread";
            this.StartThreadButton.Click += new System.EventHandler(this.StartThreadButton_Click);
            //
            // ThreadTestForm
            //
            this.AutoScaleBaseSize = new System.Drawing.Size(5, 13);
            this.ClientSize = new System.Drawing.Size(376, 253);
            this.Controls.Add(this.StartThreadButton);
            this.Controls.Add(this.TestDG);
            this.Name = "ThreadTestForm";
            this.Text = "Thread Test Form";
            ((System.ComponentModel.ISupportInitialize)(this.testSet1)).EndInit();
            ((System.ComponentModel.ISupportInitialize)(this.TestDG)).EndInit();
            this.ResumeLayout(false);
        }
        #endregion

        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        [STAThread]
        static void Main()
        {
            Application.Run(new ThreadTestForm());
        }

        private void EventHappend(object sender, int count)
        {
            localThread.Interrupt();
            testSet1.TestTable.AddTestTableRow(count, "Hello There");
            // MessageBox.Show(localThread.ThreadState.ToString());
        }

        private void StartThreadButton_Click(object sender, System.EventArgs e)
        {
            localTestThread = new TestThread();
            localTestThread.TestEvent += new TestEventHandler(this.EventHappend);
            localThread = new Thread(new ThreadStart(localTestThread.RunningThread));
            localThread.Start();
            localThread.IsBackground = true;
        }
    }
}

Can anyone help?
Thanks,
Dennis Owens
Jul 21 '05 #1
28 Replies


Dennis Owens <an*******@discussions.microsoft.com> wrote:
I am trying to run a thread off of a form, and every once in a while
the thread will raise an event for the form to read. When the form
gets the event, the form will place the event into a dataset and
display it on a datagrid that is on the form. The problem is that the
thread will slowly take over all of the processor time. After about 8
events, the form will not even respond anymore. Here is the guts of my
test code.


I'm not surprised - you've got 8 threads in a tight loop. That's bound
to take over the processor! However, you've got a few other nasties
going on...

Firstly, you're accessing stopRunning in a non-thread-safe way. You
should either declare it as being volatile, or wrap any access to it in
a lock.

Secondly, you should never update the GUI from a non-UI thread, as you
currently are doing. You should use Control.Invoke to invoke a delegate
on the UI thread.

Thirdly, why are you calling localThread.Interrupt() from your event?
At that time, you're actually running *in* the thread you're trying to
interrupt!
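For illustration, a minimal sketch of the Control.Invoke approach applied to the handler from the original post (AddRowHandler and AddRow are illustrative names, not from the post):

// EventHappend is raised on the worker thread; hand the DataSet update
// over to the UI thread instead of doing it here.
private delegate void AddRowHandler(int count);

private void EventHappend(object sender, int count)
{
    this.Invoke(new AddRowHandler(AddRow), new object[] { count });
}

private void AddRow(int count)
{
    // Now running on the UI thread, so touching the DataSet/DataGrid is safe.
    testSet1.TestTable.AddTestTableRow(count, "Hello There");
}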

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #2

The interrupt was just a wild guess to try and stop it from sucking up all the processor time. I forgot I left it in there. The only line in that method should be the adding of the row to the dataset. As for your second point, is this what is causing the thread to take over? Well, I will look up Control.Invoke and see if I can figure it out. I still don't see the eight threads, just the form and the test thread.

Thanks, Dennis Owens

Jul 21 '05 #3

Hi,
stopRunning doesn't need to be declared volatile, and it should not be used with locks.
Native integer assignment is an atomic operation, as is native integer promotion (even if the latter doesn't even count here).
No compiler would optimize away the load of a cycle variable when there is a nontrivial cycle body (like non-inlined method calls inside the cycle).
And you don't need release/acquire semantics for a single variable with atomic assignment.
Using a LOCK for accessing it would be an unnecessary performance hit, if not to say a mistake.

I suppose that he misinterpreted the meaning of Thread.Interrupt, and you are correct with the rest of your comments.

-Valery.

See my blog at:
http://www.harper.no/valery

"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
Dennis Owens <an*******@discussions.microsoft.com> wrote:
I am trying to run a thread off of a form, and every once in a while
the thread will raise an event for the form to read. When the form
gets the event, the form will place the event into a dataset and
display it on a datagrid that is on the form. The problem is that the
thread will slowly take over all of the processor time. After about 8
events, the form will not even respond anymore. Here is the guts of my
test code.


I'm not surprised - you've got 8 threads in a tight loop. That's bound
to take over the processor! However, you've got a few other nasties
going on...

Firstly, you're accessing stopRunning in a non-thread-safe way. You
should either declare it as being volatile, or wrap any access to it in
a lock.

Secondly, you should never update the GUI from a non-UI thread, as you
currently are doing. You should use Control.Invoke to invoke a delegate
on the UI thread.

Thirdly, why are you calling localThread.Interrupt() from your event?
At that time, you're actually running *in* the thread you're trying to
interrupt!

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too

Jul 21 '05 #4

You have a tight loop inside your thread method - of course it eats all processor resources!
And btw, setting a thread as background doesn't affect thread priority/scheduling; it only says that the .Net process can terminate even if there are some background threads still running (I just think that you got that wrong too).

And do as Jon said to you - use Invoke when you update controls from non-UI threads.

-Valery

See my blog at:
http://www.harper.no/valery
"Dennis Owens" <an*******@discussions.microsoft.com> wrote in message
news:00**********************************@microsoft.com...
The interrupt was just a wild guess to try and stop it from sucking up all the processor time. I forgot I left it in there. The only line in that
method should be the adding of the row in the dataset. As for your second
thing, is this what is causing the thread to take over. Well I will lookup
Control.Invoke and see if I can figure it out. I still don't see the eight
threads, just the form and the test thread?
Thanks Dennis Owens

Jul 21 '05 #5

OK, here is a simple question: how should this simple example be written?

Thanks Dennis Owens
Jul 21 '05 #6

btw (to avoid being misinterpreted), I didn't say that using a bool flag as a thread event is good design :-). He should use a kernel object like an event for signaling exit.
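A minimal sketch of what that could look like in managed code (the class and member names are illustrative; ManualResetEvent wraps a kernel event handle):

using System.Threading;

public class EventSignaledWorker
{
    private ManualResetEvent stopEvent = new ManualResetEvent(false);

    public void RunningThread()
    {
        // WaitOne(0, false) polls the event without blocking; the loop
        // exits as soon as another thread calls Stop().
        while (!stopEvent.WaitOne(0, false))
        {
            // ... do a unit of work ...
        }
    }

    public void Stop()
    {
        stopEvent.Set(); // signal the worker to exit
    }
}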

-Valery.

See my blog at:
http://www.harper.no/valery

"Valery Pryamikov" <Va****@nospam.harper.no> wrote in message
news:%2******************@TK2MSFTNGP09.phx.gbl...
Hi,
stopRunning doesn't need to be declared volatile, and it should not be used with locks.
Native integer assignment is an atomic operation, as is native integer promotion (even if the latter doesn't even count here).
No compiler would optimize away the load of a cycle variable when there is a nontrivial cycle body (like non-inlined method calls inside the cycle).
And you don't need release/acquire semantics for a single variable with atomic assignment.
Using a LOCK for accessing it would be an unnecessary performance hit, if not to say a mistake.

I suppose that he misinterpreted the meaning of Thread.Interrupt, and you are correct with the rest of your comments.

-Valery.

See my blog at:
http://www.harper.no/valery

"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
Dennis Owens <an*******@discussions.microsoft.com> wrote:
I am trying to run a thread off of a form, and every once in a while
the thread will raise an event for the form to read. When the form
gets the event, the form will place the event into a dataset and
display it on a datagrid that is on the form. The problem is that the
thread will slowly take over all of the processor time. After about 8
events, the form will not even respond anymore. Here is the guts of my
test code.


I'm not surprised - you've got 8 threads in a tight loop. That's bound
to take over the processor! However, you've got a few other nasties
going on...

Firstly, you're accessing stopRunning in a non-thread-safe way. You
should either declare it as being volatile, or wrap any access to it in
a lock.

Secondly, you should never update the GUI from a non-UI thread, as you
currently are doing. You should use Control.Invoke to invoke a delegate
on the UI thread.

Thirdly, why are you calling localThread.Interrupt() from your event?
At that time, you're actually running *in* the thread you're trying to
interrupt!

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too


Jul 21 '05 #7

Valery Pryamikov <Va****@nospam.harper.no> wrote:
stopRunning doesn't need to be declared volatile, and it should not be used with locks.
Native integer assignment is an atomic operation, as is native integer promotion (even if the latter doesn't even count here).


Being atomic has nothing to do with it. The memory model does not
guarantee that the running thread will *ever* see writes from another
thread unless a memory fence is involved.

You can argue about whether or not it'll actually happen, but I prefer
to work from guarantees when it comes to multi-threading - because
sooner or later, some architecture will come along and destroy all
assumptions apart from the guarantees.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #8

Dennis Owens <an*******@discussions.microsoft.com> wrote:
The interrrupt was just a wild guess to try and stop it from sucking
up all the processor time. I forgot I left it in there. The only line
in that method should be the adding of the row in the dataset. As for
your second thing, is this what is causing the thread to take over.
Well I will lookup Control.Invoke and see if I can figure it out. I
still don't see the eight threads, just the form and the test thread?


Sorry, I thought you'd meant there were 8 clicks, not 8 events -
misread. Yup, there'll only be the one extra thread. It will still be
in a tight loop though, which is going to stuff you to some extent
*whatever* you do.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #9

Valery Pryamikov <Va****@nospam.harper.no> wrote:
btw (to avoid being misinterpreted), I didn't say that using a bool flag as a thread event is good design :-). He should use a kernel object like an event for signaling exit.


On the contrary, I'd say using a boolean (but using it properly) is a
perfectly reasonable way of exiting the thread. How would your code
with an event work? It's basically going to end up doing something
*equivalent* to just checking a flag, assuming that the thread wants to
keep doing work until it's told to stop.

Using a boolean is simple (when done right) and allows clean exit
(unlike, say, aborting the thread). Sure, it requires a memory barrier
in order to guarantee that the thread sees the appropriate change in
value, but those are very cheap in the grand scheme of things. What
benefit is there in doing anything else?
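As a sketch of "done right" (assuming the TestThread class from the original post; the Stop method is illustrative), marking the flag volatile gives the reads and writes the memory-barrier semantics discussed above:

public class TestThread
{
    private volatile bool stopRunning = false;

    public void RunningThread()
    {
        // The volatile read guarantees this loop eventually observes
        // a write made by another thread.
        while (!stopRunning)
        {
            // ... do a unit of work ...
        }
        // falls through here for a clean exit
    }

    public void Stop()
    {
        stopRunning = true; // safe to call from any thread
    }
}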

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #10

Jon, you are wrong. Atomic assignment has everything to do with it - read the section on the .Net memory model in the .Net specs...
And btw, regardless of processor and memory architecture, there is always a guarantee that memory writes from one thread will be seen by any other thread. It is only the order of read-to-write or write-to-read that is not guaranteed and can require a memory barrier, depending on the memory and processor architecture (read-modify-write order is not guaranteed on any memory and processor architecture).
In this sample stopRunning doesn't require any of that (believe me, I have spent quite some time learning and working with multithreaded programming).

-Valery

See my blog at:
http://www.harper.no/valery

"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP***********************@msnews.microsoft.com...
Valery Pryamikov <Va****@nospam.harper.no> wrote:
stopRunning doesn't need to be declared volatile, and it should not be used with locks.
Native integer assignment is an atomic operation, as is native integer promotion (even if the latter doesn't even count here).


Being atomic has nothing to do with it. The memory model does not
guarantee that the running thread will *ever* see writes from another
thread unless a memory fence is involved.

You can argue about whether or not it'll actually happen, but I prefer
to work from guarantees when it comes to multi-threading - because
sooner or later, some architecture will come along and destroy all
assumptions apart from the guarantees.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too

Jul 21 '05 #11

A kernel event object is a different thing than a C# delegate event.
If you are interested in how Win32 events work, read for example Jeff Richter's book.

-Valery (Windows SDK MVP since 1999).

See my blog at:
http://www.harper.no/valery

"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
Valery Pryamikov <Va****@nospam.harper.no> wrote:
btw (to avoid being misinterpreted), I didn't say that using a bool flag as a thread event is good design :-). He should use a kernel object like an event for signaling exit.


On the contrary, I'd say using a boolean (but using it properly) is a
perfectly reasonable way of exiting the thread. How would your code
with an event work? It's basically going to end up doing something
*equivalent* to just checking a flag, assuming that the thread wants to
keep doing work until it's told to stop.

Using a boolean is simple (when done right) and allows clean exit
(unlike, say, aborting the thread). Sure, it requires a memory barrier
in order to guarantee that the thread sees the appropriate change in
value, but those are very cheap in the grand scheme of things. What
benefit is there in doing anything else?

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too

Jul 21 '05 #12

Valery Pryamikov <Va****@nospam.harper.no> wrote:
Jon, you are wrong. Atomic assignment has everything to do with it - read the section on the .Net memory model in the .Net specs...
I have - and while atomicity is necessary, it's not sufficient.
And btw, regardless of processor and memory architecture, there is always a guarantee that memory writes from one thread will be seen by any other thread.
Not if the JIT compiler decides to keep the writes "local", not
flushing them back to the main processor memory. It could keep the
value of the variable within a register, for instance, and only write
it back at the end of a method. Similarly, the reading thread could
keep the value of the variable within a register and only read it from
memory once, at the start of the method. Both of those are possible
(though unlikely) under the .NET memory model.

The Java memory model makes all of this somewhat clearer, IMO - while
it's obviously not a good idea to write to the Java memory model when
working in .NET, it gives a good feeling of just how horrible things
can end up.
It is only the order of read-to-write or write-to-read that is not guaranteed and can require a memory barrier, depending on the memory and processor architecture (read-modify-write order is not guaranteed on any memory and processor architecture).
So where is the guarantee that writes are seen immediately? If they're
not seen immediately, where's the guarantee that they're seen by any
specific time without any memory barriers? If there's no guarantee of
them being seen by any specific time, what's to stop an implementation
from (say) caching the flag in a register and never (within an infinite
loop) going back to main memory?
In this sample stopRunning doesn't require any of that (believe me, I have spent quite some time learning and working with multithreaded programming).


A lot of people do, and a lot of people will never get burned by it.
The same is true of the double checked locking algorithm - that doesn't
mean it's correct. Just because something has always worked for you
doesn't mean it complies with the specifications.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #13

Valery Pryamikov <Va****@nospam.harper.no> wrote:
A kernel event object is a different thing than a C# delegate event.
Yes, I wasn't talking about C# delegate events either.
If you are interested in how Win32 events work, read for example Jeff Richter's book.


I know a bit about how Win32 events work. I don't see how they're
relevant in this case. Could you provide some code using events which
is a) correct, and b) simpler or "better" in some other way than using
a flag?

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #14

Jon, for God's sake, go read some docs and try to reflect on them!
The JIT compiler just compiles IL to x86 (or whatever platform it is developed for).
Some optimizing compilers could optimize away loading a variable into a register for cycles with a trivial cycle body as an optimization technique, but no compiler will do it for a non-trivial cycle body (there are books on compiler optimization theory if you are interested). Volatile has several meanings (overloaded semantics), and one of these meanings is to signal the optimizing compiler that it can't use that particular type of optimization. However, any non-inlined method call is one of the criteria that rules out that optimization too (a non-trivial cycle body), and a delegate call is never inlined (not speaking about other things).
The other meaning of volatile is defined by the .Net spec, where it adds release/acquire semantics for variable reads/writes. This is important only for non-trivial memory objects with non-atomic assignment and consistency requirements. For example, if we speak of some class that has some instance fields, then volatile could be important to guarantee that all memory writes from the class constructor will be completed before a memory read on 'this'. But with the usage pattern that we discuss here it could never be a problem. I can even give you an exact proof of this fact, based both on the x86 and .Net memory models (but I would rather not do it, to avoid wasting the time of the newsgroup readers on something they probably aren't interested in reading anyhow).
As I already said - regardless of processor architecture and memory model, it is always guaranteed that a memory write from one thread will be seen by all other threads. This is a general rule of computing, period. Order isn't guaranteed, but visibility is!

Jon, I've spent several years of my life learning and programming symmetric multiprocessing, and I'm not going to argue here with you about things that you apparently don't know well. You can have your last word if you want, however be warned that trying to argue about something you are not really familiar with could just harm your reputation as a specialist.

-Valery

See my blog at:
http://www.harper.no/valery
"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
Valery Pryamikov <Va****@nospam.harper.no> wrote:
Jon, you are wrong. Atomic assignment has everything to do with it - read the section on the .Net memory model in the .Net specs...


I have - and while atomicity is necessary, it's not sufficient.
And btw, regardless of processor and memory architecture, there is always a guarantee that memory writes from one thread will be seen by any other thread.


Not if the JIT compiler decides to keep the writes "local", not
flushing them back to the main processor memory. It could keep the
value of the variable within a register, for instance, and only write
it back at the end of a method. Similarly, the reading thread could
keep the value of the variable within a register and only read it from
memory once, at the start of the method. Both of those are possible
(though unlikely) under the .NET memory model.

The Java memory model makes all of this somewhat clearer, IMO - while
it's obviously not a good idea to write to the Java memory model when
working in .NET, it gives a good feeling of just how horrible things
can end up.
It is only the order of read-to-write or write-to-read that is not guaranteed and can require a memory barrier, depending on the memory and processor architecture (read-modify-write order is not guaranteed on any memory and processor architecture).


So where is the guarantee that writes are seen immediately? If they're
not seen immediately, where's the guarantee that they're seen by any
specific time without any memory barriers? If there's no guarantee of
them being seen by any specific time, what's to stop an implementation
from (say) caching the flag in a register and never (within an infinite
loop) going back to main memory?
In this sample stopRunning doesn't require any of that (believe me, I have spent quite some time learning and working with multithreaded programming).


A lot of people do, and a lot of people will never get burned by it.
The same is true of the double checked locking algorithm - that doesn't
mean it's correct. Just because something has always worked for you
doesn't mean it complies with the specifications.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too

Jul 21 '05 #15

Any multithreading sample could be used to demonstrate this (there are literally tons of them).
You simply create one or more events in your program and use WaitForSingleObject/MultipleObjects(Ex) from your thread.
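In managed code the rough equivalent is WaitHandle.WaitOne/WaitAny; a sketch (the class and member names are illustrative):

using System.Threading;

public class WaitingWorker
{
    // managed analogues of Win32 event handles
    private AutoResetEvent workReady = new AutoResetEvent(false);
    private ManualResetEvent stopRequested = new ManualResetEvent(false);

    public void WorkerLoop()
    {
        WaitHandle[] handles = { workReady, stopRequested };
        while (true)
        {
            // Blocks without burning CPU, like WaitForMultipleObjects.
            if (WaitHandle.WaitAny(handles) == 1)
                break; // stopRequested was signaled
            // ... process one unit of work, then wait again ...
        }
    }

    public void QueueWork() { workReady.Set(); }
    public void Stop() { stopRequested.Set(); }
}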

-Valery.

See my blog at:
http://www.harper.no/valery

"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
Valery Pryamikov <Va****@nospam.harper.no> wrote:
A kernel event object is a different thing than a C# delegate event.


Yes, I wasn't talking about C# delegate events either.
If you are interested in how Win32 events work, read for example Jeff Richter's book.


I know a bit about how Win32 events work. I don't see how they're
relevant in this case. Could you provide some code using events which
is a) correct, and b) simpler or "better" in some other way than using
a flag?

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too

Jul 21 '05 #16

> The same is true of the double checked locking algorithm - that doesn't
mean it's correct. Just because something has always worked for you
doesn't mean it complies with the specifications.

I learned about the problems related to the singleton double-checked locking initialization pattern many years ago (literally in the last century), and even have a couple of Petri Nets diagrams with a proof of this problem still lying on my desk...

-Valery.

See my blog at:
http://www.harper.no/valery
Jul 21 '05 #17

Valery Pryamikov <Va**************@invalid.sm.siemens.no.nospam> wrote:
Jon, for God sake, go read some docs and try to reflect it!
Um, I *have* read the specification. I have seen no guarantee of the
type you're implying exists.
JIT compiler just compiles IL to x86 (or whatever platform it is developed
for).
Yup.
Some optimizing compilers could optimize away loading variable into register
for cycles with trivial cycle body as optimization technique, but no
compiler will do it for non-trivial cycle body (there are books on compiler
optimization theory if you are interesting).
They could, however. That's the point - they could, and on some
architectures they may do. Yes, it won't be a problem on x86, but I
don't believe the spec guarantees it won't be *in general*.
Volatile has several meanings
(overloaded semantic) and one of these meanings is to signal optimizing
compiler that it can't use that particular type of optimization.
No, it's more than that. Volatile in the .NET CLI has a very clear
meaning, to do with memory barriers. A volatile read/write affects more
than just the variable being read/written - it affects the whole
"stream" of memory accesses.
However any
non-inlined method call is one of the criteria that rules off that
optimization too (non-trivial cycle body) and delegate call is never inlined
(not speaking about other things).
I don't see where inlining is actually relevant here.
Other meaning of volatile is defined by .Net spec where it adds
release/acquire semantic for variable reads/writes.
That's the meaning I'm talking about, seeing as I'm talking about the
specification.
This is important only
for non-trivial memory objects with non atomic assignment and consistency
requirements. For example if we speak of some class that has some instance
fields, then volatile could be important to guarantee that all memory writes
from class constructor will be completed before memory read on 'this'. But
with usage pattern that we discuss here it could never make that problem. I
can even give you exact prove of this fact based both on x86 and .Net memory
models (but I rather would not do it for not wasting time of the newsgroup
readers on something they probably isn't interesting to read anyhow).
Any proof *cannot* be based on the x86 memory model, as the .NET memory
model doesn't refer to the x86 memory model at all, and is indeed much
weaker than it.
As I already said - regardless of processor architecture and memory model,
it is always guaranteed that memory write from one thread will be seen by
all other threads. This is a general rule of computing. period.
Not really. The general rule (to my mind) is that there is some way of enforcing visibility, but that's not necessarily what happens immediately.
Order isn't guaranteed, but visibility is!
Nope. It really isn't - not without memory barriers. If you're saying
that there isn't a single memory model which pretty much explicitly
states that the code posted might not work, I refer you to the Java
memory model. Choice quotes are:

<quote>
Best practice is that if a variable is ever to be assigned by one
thread and used or assigned by another, then all accesses to that
variable should be enclosed in synchronized methods or synchronized
statements.
</quote>

<quote>
Each thread has a working memory, in which it may keep copies of the
values of variables from the main memory that is shared between all
threads. To access a shared variable, a thread usually first obtains a
lock and flushes its working memory. This guarantees that shared values
will thereafter be loaded from the shared main memory to the thread's
working memory. When a thread unlocks a lock it guarantees the values
it holds in its working memory will be written back to the main memory.
</quote>

Now, as I said before, the Java memory model isn't quite the same as
the CLI memory model, but both are relatively weak in terms of the
guarantees they give.

Here's another quote, this time from .NET - the Thread.MemoryBarrier
method documentation:

<quote>
Synchronizes memory. In effect, flushes the contents of cache memory to
main memory, for the processor executing the current thread.
</quote>

Now, that suggests that there is the idea of a "cache" and "main
memory" and that they won't necessarily be in sync. Some kind of
flushing may be required in some situations. Where is the *guarantee*
that such a flush occurs in the posted code?
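For reference, a sketch of the explicit-fence variant being discussed (not from the thread; an alternative to declaring the flag volatile):

using System.Threading;

public class BarrierWorker
{
    private bool stopRunning = false;

    public void RunningThread()
    {
        while (true)
        {
            Thread.MemoryBarrier(); // force a fresh read of stopRunning
            if (stopRunning)
                break;
            // ... do a unit of work ...
        }
    }

    public void Stop()
    {
        stopRunning = true;
        Thread.MemoryBarrier(); // flush the write so the worker sees it
    }
}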
Jon, I've used several years of my life learning and programming symmetric
multiprocessing and I'm not going to argue here with you about things that
you apparently don't know well. You can have you last word if you want,
however be warned that trying to argue about something that you are not
really familiar could just harm your reputation as a specialist.


If it really is guaranteed, why not just post the relevant parts of the
the CLI specification? If I really am wrong, I'd *really* like to be
proven wrong. For one thing, it would make my life easier when writing
similar code!

I'm certainly *not* an expert in the x86 memory model, or indeed any
specific processor's memory model. I wouldn't even say I'm an *expert*
in the CLI memory model, although I know more about that than about any
specific processor model.

I'm not trying to argue that under the current .NET implementation, the
posted code won't always work. It may well work for all future
implementations on every architecture, too - but I don't believe it's
guaranteed to. All I'm after is some evidence of that guarantee, or an
acknowledgement that it doesn't exist.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #18

Valery Pryamikov <Va**************@invalid.sm.siemens.no.nospam> wrote:
Any mutlithreading sample could be used for demonstrating this (literally
tons of them).
You simply create one or more events in your program and use
WaitForSingleObject/MultipleObjects(Ex) from your thread.


But the thread doesn't *want* to wait. An event would be fine to use in
a queuing system, where it was being given extra work by another
thread. I would usually use an event (or actually just pulse a monitor,
which is very similar) to control the "more work or a signal to stop
has come in" but still use a flag to show the difference between more
work being present and a request to stop.

The code posted, however, didn't rely on another thread giving it any
more work - it doesn't want to *wait* for a signal from another thread,
it just wants to notice when such a signal has been provided, at some
future (but not too distant future) time.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #19

Valery Pryamikov <Va**************@invalid.sm.siemens.no.nospam> wrote:
The same is true of the double checked locking algorithm - that doesn't
mean it's correct. Just because something has always worked for you
doesn't mean it complies with the specifications.
I learned about the problems related to the singleton double-checked locking initialization pattern many years ago (literally in the last century), and even have a couple of Petri Nets diagrams with a proof of this problem still lying on my desk...


While I don't have the Petri Nets diagrams, I too learned about the
problems with it (in the Java memory model at least) quite a while ago.

My point is that people use things which work for them, and then they
assume that they're guaranteed to work. Just because using a simple
flag with no locking or memory barriers worked for you every time you
used it doesn't mean it's guaranteed to work. (Were you even working in
the .NET memory model then? It may have been guaranteed to work in the
memory model you were using, but that doesn't mean it's guaranteed to
work in the .NET memory model.)

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #20

Volatile has meanings both for the compiler and the runtime. The runtime meaning is release/acquire, while the compiler's meaning is "don't drop the load to a register out of the cycle body". Whatever the optimizing compiler is, it would never try an optimization that is proven to be unacceptable by compiler optimization theory. A non-inlined method call makes the cycle body be considered non-trivial because it adds too many factors that prohibit making the assumption that the variable isn't modified by its location in the cycle body (including but not limited to the possibility of the runtime weaving in some sort of call processing).
When I said that I can give a proof for both the x86 and .Net memory models, I meant I can prove it separately for each of them (+ for the Itanium and Athlon 64 memory models too, btw).
The only place where using an instance field on the class that we are discussing could present a problem is during the first access to the 'this' pointer after constructing a new class instance. But at this point the instance isn't shared, and this is never a problem for a single thread. At the point when the thread runs (Thread.Start()), there is a guarantee that there were quite a few memory barriers in the middle (when the OS starts a thread there will be a lot of LOCKs, and any LOCK means a complete memory barrier with processor caches being synchronized). So, all processor caches are guaranteed to be synchronized for 'this' and stopRunning. After that, "write-to-read" order doesn't matter for that usage of stopRunning, since its assignment/read from the memory location will be atomic (bool is promoted to a native integer and aligned to the native integer boundary). And the non-trivial cycle body assures that the JIT would never drop loading the field from the memory location to a register out of the cycle body... I can even draw a Petri Nets diagram with a proof of this...

-Valery.

See my blog at:
http://www.harper.no/valery
"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP***********************@msnews.microsoft.com...
Valery Pryamikov <Va**************@invalid.sm.siemens.no.nospam> wrote:
Jon, for God sake, go read some docs and try to reflect it!


Um, I *have* read the specification. I have seen no guarantee of the
type you're implying exists.
JIT compiler just compiles IL to x86 (or whatever platform it is developed for).


Yup.
Some optimizing compilers could optimize away loading variable into register for cycles with trivial cycle body as optimization technique, but no
compiler will do it for non-trivial cycle body (there are books on compiler optimization theory if you are interesting).


They could, however. That's the point - they could, and on some
architectures they may do. Yes, it won't be a problem on x86, but I
don't believe the spec guarantees it won't be *in general*.
Volatile has several meanings
(overloaded semantic) and one of these meanings is to signal optimizing
compiler that it can't use that particular type of optimization.


No, it's more than that. Volatile in the .NET CLI has a very clear
meaning, to do with memory barriers. A volatile read/write affects more
than just the variable being read/written - it affects the whole
"stream" of memory accesses.
However any
non-inlined method call is one of the criteria that rules off that
optimization too (non-trivial cycle body) and delegate call is never inlined (not speaking about other things).


I don't see where inlining is actually relevant here.
Other meaning of volatile is defined by .Net spec where it adds
release/acquire semantic for variable reads/writes.


That's the meaning I'm talking about, seeing as I'm talking about the
specification.
This is important only for non-trivial memory objects with non-atomic assignment and consistency requirements. For example, if we speak of some class that has some instance fields, then volatile could be important to guarantee that all memory writes from the class constructor will be completed before a memory read on 'this'. But with the usage pattern that we discuss here it could never be a problem. I can even give you an exact proof of this fact, based both on the x86 and .Net memory models (but I would rather not do it, to avoid wasting the time of the newsgroup readers on something they probably aren't interested in reading anyhow).


Any proof *cannot* be based on the x86 memory model, as the .NET memory
model doesn't refer to the x86 memory model at all, and is indeed much
weaker than it.
As I already said - regardless of processor architecture and memory model, it is always guaranteed that a memory write from one thread will be seen by all other threads. This is a general rule of computing, period.


Not really. The general rule (to my mind) is that there is some way of
enforcing visibility, but that's not necessary what happens
immediately.
Order isn't guaranteed, but visibility is!


Nope. It really isn't - not without memory barriers. If you're saying
that there isn't a single memory model which pretty much explicitly
states that the code posted might not work, I refer you to the Java
memory model. Choice quotes are:

<quote>
Best practice is that if a variable is ever to be assigned by one
thread and used or assigned by another, then all accesses to that
variable should be enclosed in synchronized methods or synchronized
statements.
</quote>

<quote>
Each thread has a working memory, in which it may keep copies of the
values of variables from the main memory that is shared between all
threads. To access a shared variable, a thread usually first obtains a
lock and flushes its working memory. This guarantees that shared values
will thereafter be loaded from the shared main memory to the thread's
working memory. When a thread unlocks a lock it guarantees the values
it holds in its working memory will be written back to the main memory.
</quote>

Now, as I said before, the Java memory model isn't quite the same as
the CLI memory model, but both are relatively weak in terms of the
guarantees they give.

Here's another quote, this time from .NET - the Thread.MemoryBarrier
method documentation:

<quote>
Synchronizes memory. In effect, flushes the contents of cache memory to
main memory, for the processor executing the current thread.
</quote>

Now, that suggests that there is the idea of a "cache" and "main
memory" and that they won't necessarily be in sync. Some kind of
flushing may be required in some situations. Where is the *guarantee*
that such a flush occurs in the posted code?
Jon, I've spent several years of my life learning and programming symmetric multiprocessing, and I'm not going to argue here with you about things that you apparently don't know well. You can have your last word if you want, however be warned that trying to argue about something you are not really familiar with could just harm your reputation as a specialist.


If it really is guaranteed, why not just post the relevant parts of the
the CLI specification? If I really am wrong, I'd *really* like to be
proven wrong. For one thing, it would make my life easier when writing
similar code!

I'm certainly *not* an expert in the x86 memory model, or indeed any
specific processor's memory model. I wouldn't even say I'm an *expert*
in the CLI memory model, although I know more about that than about any
specific processor model.

I'm not trying to argue that under the current .NET implementation, the
posted code won't always work. It may well work for all future
implementations on every architecture, too - but I don't believe it's
guaranteed to. All I'm after is some evidence of that guarantee, or an
acknowledgement that it doesn't exist.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too

Jul 21 '05 #21

Valery Pryamikov <Va**************@invalid.sm.siemens.no.nospam> wrote:
Volatile has meanings both for the compiler and the runtime. The runtime meaning is release/acquire, while the compiler's meaning is "don't drop the load to a register out of the cycle body".
Which compiler are you talking about here? The C# compiler doesn't deal
with registers at all. The only compiler which really deals with
registers is the JIT compiler, and in that sense it *is* the runtime,
in that after the JIT has worked its magic, it's really just x86 (or
whatever) code.

Where in either the C# or CLI specification does it say anything about
"don't drop load to register out of cycle body"? That's the part I
haven't seen.
Whatever the optimizing compiler is, it would never try an optimization that is proven to be unacceptable by compiler optimization theory.
And where is that explicitly guaranteed in the specification?
Non-inlined method call makes cycle body to be considered
non-trivial because it adds too many factors that prohibits making
assumption that variable isn't modified by its location in the cycle body
(including but not limited to possibility of runtime weaveing some sort of
call processing).
When I said that I can give prove for both x86 and .Net memory models I
meant I can prove it separately for each of them (+ for Itanium and Athlon
64 memory models too btw).
Good - I'm only interested in the .NET memory model though, so don't
worry about the other ones unless you wish to for posterity.
The only place where using an instance field on the class that we are discussing could present a problem is during the first access to the 'this' pointer after constructing a new class instance. But at this point the instance isn't shared, and this is never a problem for a single thread. At the point when the thread runs (Thread.Start()), there is a guarantee that there were quite a few memory barriers in the middle (when the OS starts a thread there will be a lot of LOCKs, and any LOCK means a complete memory barrier with processor caches being synchronized). So, all processor caches are guaranteed to be synchronized for 'this' and stopRunning. After that, "write-to-read" order doesn't matter for that usage of stopRunning, since its assignment/read from the memory location will be atomic (bool is promoted to a native integer and aligned to the native integer boundary). And the non-trivial cycle body assures that the JIT would never drop loading the field from the memory location to a register out of the cycle body... I can even draw a Petri Nets diagram with a proof of this...


Where in the CLI specification does it say that the JIT would never
drop loading field from memory location to the register out of the
cycle body? (Or other local cache memory - it doesn't have to be a
register.) It may well be accepted wisdom in other areas that that
doesn't happen, but I don't see where that's guaranteed.

Chris Brumme's blog is interesting on this topic. He gives the same
kind of idea of what an extremely weak memory model is:

<quote>
At the other extreme, we have a world where CPUs operate almost
entirely out of private cache. If another CPU ever sees anything my
CPU is doing, it's a total accident of timing.
</quote>

He also writes:

<quote>
In my opinion, we screwed up when we specified the ECMA memory model.
That model is unreasonable because:
* All stores to shared memory really require a volatile prefix.
[...]
</quote>

I disagree with him in terms of how hard it is to write code to this
model though - basically, if you make *all* access to data available to
multiple threads locked (with access to any single item of data only
available through the same lock) then you'll be safe. Now, the problem
there is in terms of performance - but for *most* people, I don't
believe the overhead of that kind of locking is going to be
significant. For some people doing incredibly fiddly stuff, using locks
may be overkill and memory barriers would be preferable - but they're
harder to work with (IMO) and so I recommend the "safe but slightly
slower" approach usually.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #22

Jon Skeet [C# MVP] <sk***@pobox.com> wrote:

<snip>

I've just emailed Vance Morrison at Microsoft about this - he's helped
me out with a previous memory model question. Without external expert
help, I suspect we're not going to make any progress here. Valery -
drop me a line if you want a copy of the email.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #23

This will work on any memory model listed in
ftp://gatekeeper.dec.com/pub/DEC/WRL...RL-TR-95.7.pdf.
Optimizing compiler theory has lots of discussions about acceptable types of optimization.
You can find some really good papers about it at http://citeseer.nj.nec.com/cs.
Rearranging memory accesses by optimizing compilers is something that was beaten to death, and it does apply to all existing optimizing compilers (including Just In Time compilers).

I don't have any more time to spend on this conversation, sorry.
-Valery.

See my blog at:
http://www.harper.no/valery
"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
Valery Pryamikov <Va**************@invalid.sm.siemens.no.nospam> wrote:
Volatile has meanings both for the compiler and the runtime. The runtime meaning is release/acquire, while the compiler's meaning is "don't drop the load to a register out of the cycle body".


Which compiler are you talking about here? The C# compiler doesn't deal
with registers at all. The only compiler which really deals with
registers is the JIT compiler, and in that sense it *is* the runtime,
in that after the JIT has worked its magic, it's really just x86 (or
whatever) code.

Where in either the C# or CLI specification does it say anything about
"don't drop load to register out of cycle body"? That's the part I
haven't seen.
Whatever the optimizing compiler is, it would never try an optimization that is proven to be unacceptable by compiler optimization theory.


And where is that explicitly guaranteed in the specification?
A non-inlined method call makes the cycle body be considered non-trivial because it adds too many factors that prohibit making the assumption that the variable isn't modified by its location in the cycle body (including but not limited to the possibility of the runtime weaving in some sort of call processing).
When I said that I can give a proof for both the x86 and .Net memory models, I meant I can prove it separately for each of them (+ for the Itanium and Athlon 64 memory models too, btw).


Good - I'm only interested in the .NET memory model though, so don't
worry about the other ones unless you wish to for posterity.
The only place where using an instance field on the class that we are discussing could present a problem is during the first access to the 'this' pointer after constructing a new class instance. But at this point the instance isn't shared, and this is never a problem for a single thread. At the point when the thread runs (Thread.Start()), there is a guarantee that there were quite a few memory barriers in the middle (when the OS starts a thread there will be a lot of LOCKs, and any LOCK means a complete memory barrier with processor caches being synchronized). So, all processor caches are guaranteed to be synchronized for 'this' and stopRunning. After that, "write-to-read" order doesn't matter for that usage of stopRunning, since its assignment/read from the memory location will be atomic (bool is promoted to a native integer and aligned to the native integer boundary). And the non-trivial cycle body assures that the JIT would never drop loading the field from the memory location to a register out of the cycle body... I can even draw a Petri Nets diagram with a proof of this...


Where in the CLI specification does it say that the JIT would never
drop loading field from memory location to the register out of the
cycle body? (Or other local cache memory - it doesn't have to be a
register.) It may well be accepted wisdom in other areas that that
doesn't happen, but I don't see where that's guaranteed.

Chris Brumme's blog is interesting on this topic. He gives the same
kind of idea of what an extremely weak memory model is:

<quote>
At the other extreme, we have a world where CPUs operate almost
entirely out of private cache. If another CPU ever sees anything my
CPU is doing, it's a total accident of timing.
</quote>

He also writes:

<quote>
In my opinion, we screwed up when we specified the ECMA memory model.
That model is unreasonable because:
* All stores to shared memory really require a volatile prefix.
[...]
</quote>

I disagree with him in terms of how hard it is to write code to this
model though - basically, if you make *all* access to data available to
multiple threads locked (with access to any single item of data only
available through the same lock) then you'll be safe. Now, the problem
there is in terms of performance - but for *most* people, I don't
believe the overhead of that kind of locking is going to be
significant. For some people doing incredibly fiddly stuff, using locks
may be overkill and memory barriers would be preferable - but they're
harder to work with (IMO) and so I recommend the "safe but slightly
slower" approach usually.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too

Jul 21 '05 #24

Sure, I'd be glad to see that mail from Vance (whatever his answer will be).
It definitely will be interesting to the group too.
-Valery

See my blog at:
http://www.harper.no/valery

"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
Jon Skeet [C# MVP] <sk***@pobox.com> wrote:

<snip>

I've just emailed Vance Morrison at Microsoft about this - he's helped
me out with a previous memory model question. Without external expert
help, I suspect we're not going to make any progress here. Valery -
drop me a line if you want a copy of the email.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too

Jul 21 '05 #25

Jon Skeet [C# MVP] <sk***@pobox.com> wrote:
I've just emailed Vance Morrison at Microsoft about this - he's helped
me out with a previous memory model question. Without external expert
help, I suspect we're not going to make any progress here. Valery -
drop me a line if you want a copy of the email.


Vance has replied with a great response. It's here below. I haven't included the emails I sent to Vance to start with, which are somewhat related (in particular when he talks about "point 2" of Valery's argument, which refers to the kind of situation in which the JIT compiler can cache values), but I don't *think* they're crucial. If Valery disagrees he can certainly post the mails - no problem as far as I'm concerned, but I don't like posting other people's words without getting their consent first, and I suspect Valery's getting sick of emails from me today :)

Here's Vance's reply (with a couple of typos cleaned up):
Jon asked me to weigh in on the memory model issue below.

First, I agree with the argument that Jon attributes to Valery below (given points (1) and (2), it follows that on any rational platform the code below will 'work'). I say 'rational' because technically speaking there is enough wiggle room in the spec to cause grief. Note that the
spec does not say anything about how long it takes for one processor to
see the writes of another. Thus if one processor wrote to 'stopRunning'
and then spun, there is nothing in the spec that forces the write to
ever be flushed to main memory and thus be seen by a thread running on
another processor. Thus you can in theory get a deadlock. This is
clearly a corner case, but I think I was called in as a spec lawyer, so
I am being picky.

More seriously, however, is the issue that assumption (2) below (that the body is 'non-trivial' and thus compilers are not allowed to cache 'stopRunning') is not really true from a spec perspective. The runtime is allowed at any point to treat built-in functions like Console.WriteLine as intrinsic (that is, the runtime owns the implementation, so the JIT compiler can know special things about it). Thus you can imagine a compiler that knows that Console.WriteLine does not modify any visible global variables, and thus 'knows' that stopRunning can be safely enregistered. Of course this is not true in practice, but could be true if Console.WriteLine was instead, say, Math.Sin(). Inlining also causes the same effect (if we inlined Console.WriteLine and ToString() to the point that there are no function calls in ANY path through the loop, then it is possible for there to be a spin lock).

Even more seriously, however, is that we don't want spec quibbling to get in the way of doing the right thing. Variables that are shared across threads without additional synchronization need to be accessed as volatile variables (either declaring them volatile or using the System.Threading.Thread.Volatile* methods). Why is this the right thing to do?

1) Doing so EXACTLY describes the intention of the program (that a memory cell will be accessed cross-thread without synchronization). It is a big red flag to both the compiler and, more importantly, people reading the code that something cross-thread is happening here (and in particular that the loop is not infinite).

2) Because we have declared our intention to the world correctly, the
world can 'play nice' with our code. We don't need to have long
discussions about the subtleties of memory model and cache coherence.
We can live in a much simpler world where code is not surprising.

3) Note that the assumptions built into the analysis above relied on
details that are fragile. If 'Console.WriteLine' were pulled out, the
program becomes incorrect. Why build fragile code when you can build
robust code by changing the code in a trivial way?

OK that is enough on the particular issue.
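[The Thread.Volatile* route mentioned above would look something like this sketch (the member names are illustrative; an int is used because Thread.VolatileRead has no bool overload):

using System.Threading;

public class TestThread
{
    private int stopRunning = 0; // 0 = keep going, 1 = stop

    public void RunningThread()
    {
        // VolatileRead gives each read acquire semantics, so the loop
        // is guaranteed to eventually see the writer's update.
        while (Thread.VolatileRead(ref stopRunning) == 0)
        {
            // ... do a unit of work ...
        }
    }

    public void Stop()
    {
        Thread.VolatileWrite(ref stopRunning, 1);
    }
}]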

Note that unless you are doing something advanced,(eg building low
level synchronization primitives for a multi-processor scenario),
spinning in a loop (even if there are SLEEPs), is generally a poor
solution. The Windows team is already banned such things within
Microsoft because they cause the processor to spin even when the
machine is idle from a user perspective. For Laptops, this is a issue
(even if you poll only once a second, if you have 100 apps running
doing this, you are consuming non-trivial power for no good reason).
You are also keeping memory pages hot that could be swapped out and
used for better purposes. You should be waiting on events.

Finally (I will end this e-mail eventually), when you play tricks to get away from doing explicit thread synchronization, you are playing with fire. It CAN be done, but only in special cases. The example below only works because you never set 'stopRunning' to 'false' once it is true (thus its value is 'monotonic'; it only 'increases'). Moreover you don't care that it gets set exactly once, or who 'wins' any races. This
is what allows you to get away without any interlocked operations (but
you still need volatile).

The vast majority of code does not need this kind of 'lock free'
performance. Don't do it unless you have the need (synchronized methods
are easy and much simpler to reason about). Getting concurrency right
in practice requires a diligence that is HARD. When you have bugs, they
are VERY hard to find. Keeping things as simple as possible from a
concurrency perspective is a really good idea.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #26

Vance made an excellent point in his mail. Even so, his answer indirectly confirms that my conclusions were correct (see [1] below), but I agree with his point that reliance on that type of code analysis makes code difficult to support and therefore fragile.

I also have to make a public apology to Jon for being intolerant and rude in a couple of my responses in this thread.

- Valery.

P.S:

[1] If you read Vance's response, you can note that he was talking about Console.WriteLine, which could be made intrinsic and would therefore eliminate the non-triviality of the cycle body, whereas I was talking about a delegate call, which will always guarantee that. After I sent a short mail to Vance mentioning the delegate call in the cycle body, he agreed that this indeed should work on any CLI implementation that complies with the ECMA spec. However, I totally agree with his point that it is a rather fragile assumption - if that delegate call is deleted or commented out some time later, then volatile will be required.

See my blog at:

http://www.harper.no/valery


Jul 21 '05 #27

Valery Pryamikov <Va****@nospam.harper.no> wrote:
Vance made an excellent point in his mail. Even so, his answer indirectly confirms that my conclusions were correct (see [1] below), but I agree with his point that reliance on that type of code analysis makes code difficult to support and therefore fragile.
I'm still not entirely convinced they were all correct even with the
delegate, although I *was* definitely wrong about it being able to
cache it in a register (without some truly weird smarts going on). If
it caches the value anywhere, it's got to make sure that the thread
uses that cache everywhere it deals with the value, in order for the
access within the thread itself to remain consistent. That's
technically possible (I believe), but of course highly, highly
improbable. As I said to Valery in an email, the kind of system which
might show that would be a distributed CLR which used an entire
computer's memory as cache, with a central networked backing store as
"main memory".

My conclusions:

1) On any architecture we're ever likely to see .NET on, the code would
have worked fine. In "bizarro world" with a CLR which stretches the
specification to its limits, it may or may not work - there may still
be some doubt both ways, depending on whether or not I've convinced you
:) Of course, in such a world you're likely to quickly come across a
whole load of other code which is also badly synchronized... no doubt
including a lot of mine!

2) The above doesn't mean it's a good way to write code, if only
because anything which takes two experts and an interested observer to
decide on whether or not it's correct is a really bad idea :)
I also have to make a public apology to Jon for being intolerant and rude in a couple of my responses in this thread.


Nah - just passionate. If it would make you feel better, I could find
dozens of posts where I've been flat out nasty! I think you probably
raised my adrenaline level, but not my blood pressure, which is always
a sign of healthy debate! Apologies in return for anything I said which
annoyed you. I'll look forward to our next debate. If you'd like to
claim that objects are passed by reference, I'm sure we could really
get going :)

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #28

> If you'd like to
claim that objects are passed by reference, I'm sure we could really
get going :)


No kidding? <G> We ran around the block on that before :-) Cheers!

--
William Stacey, MVP

Jul 21 '05 #29
