Bytes IT Community

thread pooling and short lived threads

Everywhere in the documentation there are recommendations to use threads from
the thread pool for relatively short tasks. As I understand it, fetching a page
or multiple pages (sometimes up to 50, but that is not typical) from the Internet and
doing some processing on them would be considered a big/long task for
a thread from a pool. In our app it is possible to break the task into smaller
ones (a thread per fetch and the processing thereafter, or even smaller), but
the more threads we create, the more context switches we get. Where is the
balance point? What penalties do we incur for big/long tasks?

One more thing. There seems to be a bug with large numbers of asynchronous
I/O operations where threads deadlock and time out (it is all over the developer
communities). Any info on when the fix is going to be out?

Thank you,

Alexei
Jul 21 '05 #1
31 Replies


You may want to be really careful about using the ThreadPool for downloading
pages with the WebRequest classes, as they use the ThreadPool too. I had a lot
of pain figuring this out - not fun!

--
Sriram Krishnan

http://www.dotnetjunkies.com/weblog/sriram
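
The failure mode Sriram warns about can be sketched roughly as follows. This is an illustrative reconstruction, not code from the thread; the URL and class name are placeholders. Every queued work item blocks a pool thread in GetResponse(), while WebRequest internally needs pool threads of its own to complete, so with enough queued items the pool can wedge:

```csharp
using System;
using System.Net;
using System.Threading;

class PoolStarvationSketch
{
    static void FetchPage(object state)
    {
        // Each work item ties up a ThreadPool thread for the whole download...
        WebRequest request = WebRequest.Create((string) state);
        using (WebResponse response = request.GetResponse())
        {
            // ...while WebRequest itself needs pool/IOCP threads to
            // complete the request, which is where the starvation comes from.
        }
    }

    static void Main()
    {
        // Queue far more items than the pool has threads.
        for (int i = 0; i < 100; i++)
        {
            ThreadPool.QueueUserWorkItem(
                new WaitCallback(FetchPage), "http://example.com/");
        }
        Console.ReadLine();  // keep the process alive for the demo
    }
}
```

The safer design is to keep downloads off the shared ThreadPool entirely, e.g. on a separate pool, which is what the later replies suggest.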
"AlexeiOst" <Al*******@discussions.microsoft.com> wrote in message
news:79**********************************@microsoft.com...
<snip>

Jul 21 '05 #2

Thank you for the warning. Unfortunately, I have already bumped into this
problem, as I mentioned in the 'One more thing' part of my message. Or is
there something else that you saw?

Thank you,

Alexei

"Sriram Krishnan" wrote:
You may want to be real careful about using the ThreadPool for downloading
stuff using WebRequest classes as they use the ThreadPool too. I had a lot
of pain figuring this out - not fun!

<snip>
Jul 21 '05 #3

Unfortunately, you consider it a bug while Microsoft says that it is 'by
design', so you're stuck, I'm afraid. You might want to check out
http://www.bearcanyon.com/dotnet/#tpcontrol for an example of how to
increase the pool thread limit.

"AlexeiOst" <Al*******@discussions.microsoft.com> wrote in message
news:3D**********************************@microsoft.com...
<snip>

Jul 21 '05 #4

The threadpool is a system wide resource that by default provides 25
threads. The BCL uses this internally for many of its operations so if you
are making heavy use of it yourself in long-running tasks there is a danger
that you will exhaust all available threads.

However, it does sound like your processing requirements are good candidates
for a threadpool. There are plenty of opensource threadpool classes
available that you can use for those tasks; use google to locate one. This
will give you complete control over the number of threads in the threadpool
and how they behave, and you can customize it for your own purposes (e.g.
tasks of different priorities, etc).
"AlexeiOst" <Al*******@discussions.microsoft.com> wrote in message
news:79**********************************@microsoft.com...
<snip>

Jul 21 '05 #5

David Levine <no****************@wi.rr.com> wrote:
<snip>
However, it does sound like your processing requirements are good candidates
for a threadpool. There are plenty of opensource threadpool classes
available that you can use for those tasks; use google to locate one. This
will give you complete control over the number of threads in the threadpool
and how they behave, and you can customize it for your own purposes (e.g.
tasks of different priorities, etc).


I have one which you're welcome to use, for example. It's part of my
MiscUtil library available at
http://www.pobox.com/~skeet/csharp/miscutil

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #6

The threadpool is not a system-wide resource; it's a CLR-managed (i.e. per-process)
pool of:
1) max. 25 worker threads per CPU, plus
2) max. 1000 IOCP threads per process.

Willy.
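
For reference, the per-process limits being discussed can be inspected at runtime with the ThreadPool.GetMaxThreads and ThreadPool.GetAvailableThreads APIs (the former requires .NET 1.1 or later). A minimal sketch:

```csharp
using System;
using System.Threading;

class PoolLimits
{
    static void Main()
    {
        int workers, iocp;

        // Upper bounds for the pool in this process.
        ThreadPool.GetMaxThreads(out workers, out iocp);
        Console.WriteLine("max worker threads: " + workers
                          + ", max I/O threads: " + iocp);

        // How many of those are currently available (i.e. not busy).
        ThreadPool.GetAvailableThreads(out workers, out iocp);
        Console.WriteLine("available worker threads: " + workers
                          + ", available I/O threads: " + iocp);
    }
}
```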

"David Levine" <no****************@wi.rr.com> wrote in message
news:%2****************@tk2msftngp13.phx.gbl...
<snip>
Jul 21 '05 #7

I stated it badly... by "system" I was referring to the managed process. I
was unaware of the 1000 IOCP threads per process; is this a Windows limit or a
CLR limit?

"Willy Denoyette [MVP]" <wi*************@pandora.be> wrote in message
news:eS****************@TK2MSFTNGP14.phx.gbl...
The threadpool is not a system wide resource, it's a CLR (aka. per
process) managed pool of:
1) max. 25 threads per CPU plus
2) a pool of max 1000 IOCP threads per process.

Willy.

<snip>
Jul 21 '05 #8

I have one which you're welcome to use, for example. It's part of my
MiscUtil library available at
http://www.pobox.com/~skeet/csharp/miscutil

I'll take a look at it - I'm always on the lookout for good open source
libs. Thanks.
Jul 21 '05 #9

David Levine <no****************@wi.rr.com> wrote:
I have one which you're welcome to use, for example. It's part of my
MiscUtil library available at
http://www.pobox.com/~skeet/csharp/miscutil
I'll take a look at it - I'm always on the lookout for good open source
libs. Thanks.


Goodo - it hasn't had much use yet, so although I'm fairly sure it's
thread-safe, there may be other features which would be useful to add.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #10

Here are some comments on the threadpool; please keep in mind that this is
very preliminary and does not really evaluate it fully, especially in terms
of its correctness - analyzing the correctness of multithreaded behavior is
very difficult!

Also, I suggest several options as ways that it could be extended - this in no
way suggests the current implementation has any deficiencies.

When setting the Name property, you might check whether it has already been
initialized, and if it has, throw an exception (similar to current .NET
functionality). This is more a question of how it should behave, though - the
code isn't wrong.

The value for MinThreads should not be allowed to exceed the value of
MaxThreads. Either throw an exception or increase the value of MaxThreads to
that of MinThreads.

When setting MaxThreads, if the new MaxThreads count is less than MinThreads,
either throw an exception or set the value of MinThreads to that of
MaxThreads.
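
A minimal sketch of the clamping option, with illustrative names (stateLock, minThreads, maxThreads are placeholders, not necessarily MiscUtil's actual members):

```csharp
// Illustrative only - clamps rather than throws; all access to the
// two fields is serialized on stateLock to keep the invariant
// MinThreads <= MaxThreads.
public int MinThreads
{
    get { lock (stateLock) { return minThreads; } }
    set
    {
        lock (stateLock)
        {
            if (value < 0)
                throw new ArgumentOutOfRangeException("value");
            minThreads = value;
            if (maxThreads < minThreads)
                maxThreads = minThreads;  // pull Max up to keep the invariant
        }
    }
}

public int MaxThreads
{
    get { lock (stateLock) { return maxThreads; } }
    set
    {
        lock (stateLock)
        {
            if (value < 1)
                throw new ArgumentOutOfRangeException("value");
            maxThreads = value;
            if (minThreads > maxThreads)
                minThreads = maxThreads;  // same invariant, other side
        }
    }
}
```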

Processing the OnException event #1: it locks around making a copy of the
exceptionHandler event, and outside of that lock it iterates the list and
forwards the exception to all subscribers (calls: eh(this,e)). This brings
up the following scenario:
1. OnException calls eh(this,e)
2. The invoked handler wants to remove a different handler from the list,
perhaps because the exception has invalidated some other logic.
3. It calls WorkerException -= new ExceptionHandler(yada);

This will remove the handler from the actual event but not from the event
that is being iterated. The result is that the handler will actually be
called even though the subscriber believes it is no longer subscribed.

There are a couple of ways to handle this: first, you could define this as
"bad" and something a "good" programmer should not do - caveat programmer -
and you deserve whatever you get. Or you could handle it programmatically by
adding logic to detect whether a remove occurs while iterating the subscriber
list, and then skip delegates that have been removed during the
callback - there should be ways to synchronize this so that even twisted
code paths will be handled correctly. If the subscriber removes itself in
its own callback, this should have no impact.

Processing the OnException event #2: The callback probably ought to be
encapsulated in a try-catch so that a handler that throws does not abort
other subscribers to the exception event. There is a try-catch around the
entire body but then one exception terminates the entire list of callbacks.

Processing the OnException event #3: Never, never, never use an empty catch!
(my own pet peeve :-)
I usually supply a method that I call SwallowException. This can output the
exception to the debug port, log it, whatever, but it is not the same as
publishing it. There are times when swallowing an exception is the correct
thing to do, but these should be rare. Also, when you swallow using a naked
catch you are swallowing all exceptions, including non-CLS-compliant
exceptions. It is better to use catch (Exception) so that it does not
blindly swallow everything.
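
One way to address points #1-#3 together, sketched with illustrative names (exceptionLock, exceptionHandler, and SwallowException are placeholders for whatever the pool actually uses): snapshot the delegate under the lock, walk its invocation list, and guard each subscriber individually.

```csharp
void OnException(Exception e)
{
    ExceptionHandler handlers;
    lock (exceptionLock)
    {
        handlers = exceptionHandler;  // snapshot taken under the lock
    }
    if (handlers == null)
        return;

    // Invoke each subscriber separately rather than via handlers(this, e),
    // so one throwing handler cannot abort the rest of the list.
    foreach (ExceptionHandler handler in handlers.GetInvocationList())
    {
        try
        {
            handler(this, e);
        }
        catch (Exception nested)
        {
            // catch (Exception), not a naked catch, and never silently:
            SwallowException(nested);  // trace/log rather than ignore
        }
    }
}
```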

Processing the OnException event #4: An option would be to define a handler
that uses an out bool argument to indicate if further processing should be
halted. I think it might be likely that a subscriber may only care about
exceptions that occur on a particular thread or during a particular
operation, and not want other subscribers to see or handle those events.

In addition, you may want to add an argument that passes the same arguments
that were passed to the worker item queue so that the context that the
exception occurred in can be established. Without that information it will
be difficult for the exception callback handler to determine what operation
the exception occurred in.

It would also be very useful to have a log of exceptions that occurred
during a callback.

An ExecuteWorkItem Option: When firing the OnBeforeWorkItem, use it to check
if the subscriber wants to cancel the work item. Or add a cancel mechanism.

ExecuteWorkItem: This is called from within a large loop in method
WorkerThreadLoop(), and the try-catch covers everything in that loop. I
would surround the call to ExecuteWorkItem with a local try-catch in
addition to the outer try-catch. This will distinguish between exceptions
caused by other logic and exceptions in the worker callback itself.

AddWorkItem: The comment says to start the thread before adding the item,
but the actual code starts the thread after the item has been added to the
queue and the Monitor pulsed. A couple of points: the thread probably ought
to be started first, and the code should probably check whether a thread is
available for use before starting a new one. As it is, I believe it will
keep creating threads until MaxThreads is reached regardless of whether one is
actually needed. There could be an IdleThread count that is checked before a
new thread is created. The code as is ought to work, but it could perhaps be
optimized.

In WorkerThreadLoop (line 600), the line Monitor.Wait(queueLock, idlePeriod);
reads the actual field idlePeriod, not the property. This means that the read
occurs outside the protection of the stateLock monitor. Is this supposed to be
like this? If not, I would suggest renaming all private fields to use a
leading underscore - this makes it easier to distinguish between internal
fields and properties.

I did not evaluate the correctness of the thread control/worker dequeue
logic - a quick glance did not reveal any obvious problems - I expect it
works correctly, but this kind of code is extremely hard to analyze for
correctness. Did you analyze what happens if a worker callback throws an
abort exception? It will kill the thread because there is nothing in the
logic to reset the abort. I think it ought to be ok...

To sum it up, it looks like it ought to work; without actually testing it I
can't say for sure. Most of my comments are concerned with features that I
would like to see added, not with any actual problems in the code itself.
Other features that could be added are...

Priority queues.
Set/get the priority of the worker threads. This ought to be checked before
and after the callback so that if a callback changes the priority the
threadpool resets it back to the correct setting.
Related to this, priority inversion detection/prevention.
Cancellation logic (remove an item from the queue).
Future considerations - a CER mechanism may be useful for logic in finally
blocks. I need to see documentation on this...

Cheers,
Dave

"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
<snip>

Jul 21 '05 #11

David Levine <no****************@wi.rr.com> wrote:
Here are some comments on the threadpool; please keep in mind that this is
very preliminary and does not really evaluate it fully, especially in terms
of its correctness - analyzing the correctness of multithreaded behavior is
very difficult!
Tell me about it :)

Thanks ever so much for such detailed comments - it's much appreciated.
Also, I suggest several options as ways that it could be extended - it in no
way suggests the current implementation has any deficiencies.
I think it highlights a few genuine implementation deficiencies, myself
:)
When setting the property of the Name, you might check to see if it has
already been initialized, and if it has throw an exception (similar to
current .net functionality). This is more a question of how it should behave
then - the code isn't wrong.
Ah - I didn't realise .NET did this. It's an interesting debate though
- I know when I was writing some servlet code in Java, it was helpful
in debug to set the name of the thread itself to the URL of the request
being processed.

Given the .NET behaviour, perhaps I shouldn't be setting the Name
property of the individual threads? Not sure.

Rather than make it a "set once" property (which seems a bit clunky to
me) I think I'd rather have another constructor for the pool which
takes the name as a parameter - and then I can remove the "setter" for
the Name property entirely.
The value for MinThreads should not be allowed to exceed the value of
MaxThreads. Either throw an exception or increase the value of MaxThreads to
that of MinThreads.

When setting MaxThreads, if the new MaxThreads count is less than
MinThreads, either throw an exception or set the value of MinThreads to that
of MaxThreads.
Spot on - which behaviour do you think would be better?
Processing the OnException event #1: it locks around making a copy of the
exceptionHandler event, and outside of that lock it iterates the list and
forwards the exception to all subscribers (calls: eh(this,e)). This brings
up the following scenario:
1. OnException calls eh(this,e)
2. The invoked handler wants to remove a different handler from the list,
perhaps because the exception has invalidated some other logic.
3. It calls WorkerException -= new ExceptionHandler(yada);

This will remove the handler from the actual event but not from the event
that is being iterated. The result is that the handler will actually be
called even though the subscriber believes it is no longer subscribed.
Ooh, nasty. I hadn't thought of that.
There are a couple of ways to handle this: first, you could define this as
"bad" and a "good" programmer should not do this - caveat programmer -
and you deserve whatever you get. You could programmatically handle this by
adding logic to detect if a remove occurs while iterating the subscriber
list, and then to skip delegates that have been removed during the
callback - there should be ways to synchronize this so that even twisted
code paths will be handled correctly. If the subscriber removes itself in
its own callback this should have no impact.
Yikes. That involves asking the delegate for the invocation list and
doing each invocation myself, correct?
Processing the OnException event #2: The callback probably ought to be
encapsulated in a try-catch so that a handler that throws does not abort
other subscribers to the exception event. There is a try-catch around the
entire body but then one exception terminates the entire list of callbacks.
Right. I thought I'd documented that as intended behaviour, but I see
that I haven't. (It turns out I've done it in ThreadController.)
Processing the OnException event #3: Never never never use an empty catch!
(my own pet peeve :-)
I usually supply a method that I call SwallowException. This can output the
exception to the debug port, log it, whatever, but it is not the same as
publishing it. There are times where swallowing an exception is the correct
thing to do but these should be rare. Also, when you swallow using a naked
catch you are swallowing all exceptions, including non-CLSCompliant
exceptions. It is better to use ... catch(Exception) so that it does not
blindly swallow everything.
But then, do I definitely want the ThreadPool to stop if a non-
CLSCompliant exception is thrown? ThreadAbortException won't be
swallowed, because it *can't* be swallowed. Then again, I see that I
don't have an extra catch{} block in the main worker loop, so I ought
to at least be consistent.

This is definitely an area I'm relatively unhappy about in .NET. If a
non-compliant exception is thrown, but you want to keep going anyway,
there's not a lot you can do - there isn't an exception to pass to any
handlers.

I take your point about not just swallowing it wholesale - but if I
call SwallowException, what should a library do in this case? It can't
know about debugging/logging etc, and we're already in an OnException
case here - I don't want to go down a "OnExceptionInExceptionHandler"
route :)

Advice welcome here.
Processing the OnException event #4: An option would be to define a handler
that uses an out bool argument to indicate if further processing should be
halted. I think it might be likely that a subscriber may only care about
exceptions that occur on a particular thread or during a particular
operation, and not want other subscribers to see or handle those events.
Right. That's easy enough to do if I go down the manual invocation
list, but not easy otherwise. I'll think about it.

I think for the sake of convenience it would be better to make it a ref
parameter which would always have the value "false" on entry, just so
that in the normal case of "keep processing" the handler didn't have to
specifically set the value. What do you think?
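
A sketch of the handler shape being proposed; the delegate name and argument list are hypothetical, not an API either poster has actually shipped:

```csharp
// With a ref bool, "keep processing" (false) is the default on entry,
// so the common case costs the subscriber nothing.
public delegate void WorkerExceptionHandler(
    object sender, Exception e, ref bool stopProcessing);

// Example subscriber that only cares about I/O failures:
void OnWorkerException(object sender, Exception e, ref bool stopProcessing)
{
    if (e is System.IO.IOException)
    {
        // Handled here - stop later subscribers from seeing it.
        stopProcessing = true;
    }
    // Otherwise leave stopProcessing untouched and the next handler runs.
}
```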
In addition, you may want to add an argument that passes the same arguments
that were passed to the worker item queue so that the context that the
exception occurred in can be established. Without that information it will
be difficult for the exception callback handler to determine what operation
the exception occurred in.
That's a nice idea. The disadvantage is that the parameters couldn't be
garbage collected until after a job completed or failed. I don't know
how significant a disadvantage that would be.
It would also be very useful to have a log of exceptions that occurred
during a callback.
Not entirely sure what you mean here - could you elaborate?
An ExecuteWorkItem Option: When firing the OnBeforeWorkItem, use it to check
if the subscriber wants to cancel the work item. Or add a cancel mechanism.
That's a nice idea. Again, passing in the work item which is about to
be executed, right?
ExecuteWorkItem: This is called from within a large loop in method
WorkerThreadLoop(), and the try-catch covers everything in that loop. I
would surround the call to ExecuteWorkItem with a local try-catch in
addition to the outer try-catch. This will distinguish between exceptions
caused by other logic and exceptions in the worker callback itself.
Yes, that makes sense. Any suggestions as to what should happen to
logic exceptions? I'm inclined to let them propagate up.
AddWorkItem: The comment says to start the thread before adding the item,
but the actual code starts the thread after the item has been added to the
queue and the Monitor pulsed.
Ah, I'm with you. That looks like it's an incomplete comment, which
suggests that I should look *very* carefully at that code. I don't like
finding bits which clearly I've left incomplete :( I seem to remember
having an interesting time trying to work out all the appropriate
semantics there.
A couple of points: the thread probably ought
to be started first, and the code should probably check if a thread is
available for use before starting a new thread. As it is I believe it will
keep creating threads until MaxThreads is reached regardless if one is
actually needed. There could be an IdleThread count that is checked before a
new thread is created. The code as is ought to work but it could perhaps be
optimized.
That's actually deliberate, I *think*. (It's a while since I wrote the
code, admittedly.) I'm not convinced it was the correct decision
though. I'll think about it more with a few use cases.
In WorkerThreadLoop (line 600), the LOC: Monitor.Wait(queueLock,idlePeriod);
This checks the actual variable idlePeriod, not the property. This means
that this check occurs outside the protection of the stateLock monitor. Is
this supposed to be like this? If not, I would suggest renaming all private
fields to use a leading underscore - this makes it easier to distinguish
between internal fields and properties.
I suspect it's meant to be IdlePeriod, yes. I also suspect I'd have
made the same mistake even if I'd changed the convention. Personally I
hate the underscore convention - I find it makes code harder to read.
It's definitely a personal thing, but I'd need very convincing reasons
to change habits now :)
I did not evaluate the correctness of the thread control/worker dequeue
logic - a quick glance did not reveal any obvious problems - I expect it
works correctly, but this kind of code is extremely hard to analyze for
correctness. Did you analyze what happens if a worker callback throws an
abort exception? This will kill the thread because there is nothing in the
logic to reset the abort. I think it ought to be ok...
I think it should be too, but I haven't thought about it properly. That
whole loop is far too big at the moment - it desperately needs
refactoring, but last time I tried I failed to make it any better. I'll
have another go.
To sum it up, it looks like it ought to work; without actually testing it I
can't say for sure. Most of my comments are concerned with features that I
would like to see added, not with any actual problems in the code itself.
Other features that could be added are...

Priority queues.
So each item has a priority, and the queue is sorted by priority? That
makes sense, and shouldn't be too hard to implement.
Set/get the priority of the worker threads. This ought to be checked before
and after the callback so that if a callback changes the priority the
threadpool resets it back to the correct setting.
Agreed - good point and easy to implement. (In fact, I'm not sure that
actual checking is necessary - just setting the priority in each place
should do the trick, shouldn't it? I would hope that setting a thread's
priority to the one it had before isn't significantly more expensive
than checking the existing priority.)
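
Sketched with illustrative names (workItem and poolPriority are placeholders), the unconditional restore being discussed might look like:

```csharp
// Set a known priority going in, run the callback, and restore in a
// finally block rather than checking first - setting an unchanged
// priority is cheap, and finally covers the case where the callback throws.
Thread.CurrentThread.Priority = poolPriority;
try
{
    workItem();  // the queued callback, which may change the priority
}
finally
{
    Thread.CurrentThread.Priority = poolPriority;
}
```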
Related to this, priority inversion detection/prevention.
I don't know much about priority inversion, but this sounds pretty
tricky. Any suggestions?
Cancellation logic (remove an item from the queue).
Mmm. That sounds potentially tricky, unless I add an identification
mechanism for items. I don't want to do it by comparing parameters.
Does a separate identification mechanism (just object, which if null
means the item can't be removed, and Equals is called for each item in
the queue to check for a match) sound okay?
Future considerations - CER mechanism may be useful for logic in finally
blocks. I need to see documentation on this...


Without knowing more about CER, I don't want to comment on this.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #12

Jon Skeet [C# MVP] <sk***@pobox.com> wrote:
In WorkerThreadLoop (line 600), the LOC: Monitor.Wait(queueLock,idlePeriod);
This checks the actual variable idlePeriod, not the property. This means
that this check occurs outside the protection of the stateLock monitor. Is
this supposed to be like this? If not, I would suggest renaming all private
fields to use a leading underscore - this makes it easier to distinguish
between internal fields and properties.


I suspect it's meant to be IdlePeriod, yes. I also suspect I'd have
made the same mistake even if I'd changed the convention. Personally I
hate the underscore convention - I find it makes code harder to read.
It's definitely a personal thing, but I'd need very convincing reasons
to change habits now :)


<snip>

While it's a bug, the fix *isn't* to make it use the IdlePeriod
property. I believe it's actually meant to be waitPeriod instead of
idlePeriod. Put it this way - if it's not meant to be waitPeriod, I
don't know why I've bothered calculating waitPeriod to start with.

The reason I *can't* use IdlePeriod in there is that it could cause a
deadlock - that would acquire stateLock when the thread already owns
queueLock, and the reverse is true elsewhere. I've carefully documented
the order in which locks are allowed to be acquired :) Just another of
those instances where it's scarily easy to get things wrong...
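The discipline being described boils down to this (a trivial sketch, not the real pool code): pick one order for the locks and use it on every code path.

```csharp
using System;

class LockOrderDemo
{
    // Documented order: stateLock may be acquired before queueLock,
    // never the other way round. Two threads taking the two locks in
    // opposite orders is exactly the deadlock being discussed.
    static readonly object stateLock = new object();
    static readonly object queueLock = new object();

    static int waitPeriod;

    static void SetWaitPeriod(int value)
    {
        lock (stateLock)        // first lock
        {
            lock (queueLock)    // second lock - same order everywhere
            {
                waitPeriod = value;
            }
        }
    }

    static int GetWaitPeriod()
    {
        lock (stateLock)        // same order here too
        {
            lock (queueLock)
            {
                return waitPeriod;
            }
        }
    }

    static void Main()
    {
        SetWaitPeriod(500);
        Console.WriteLine(GetWaitPeriod());
    }
}
```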

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #13

David Levine <no****************@wi.rr.com> wrote:
The value for MinThreads should not be allowed to exceed the value of
MaxThreads. Either throw an exception or increase the value of MaxThreads to
that of MinThreads.

When setting MaxThreads, if the new MaxThreads count is less then
MinThreads, either throw an exception or set the value of MinThreads to that
of MaxThreads.


<snip>

I think I've remembered the reason for these being independent, odd as
it may seem. If you want to set both of them, you would have to either
set one of them twice (if they were self-regulating) catch a potential
exception (if they threw exceptions) or check the current values first.
For instance, say you want to get them to be (5, 10). If they're
currently (2, 3) then you couldn't set MinThreads first. If they're
currently (12, 15) then you couldn't set MaxThreads first.

Possibly the solution to this is to have a separate method to set them
both atomically - what do you reckon?
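A sketch of such an atomic setter (names illustrative; validation of the pair as a whole is the point, so (5, 10) applies cleanly whether the current values are (2, 3) or (12, 15)):

```csharp
using System;

class ThreadCounts
{
    readonly object stateLock = new object();
    int minThreads = 1;
    int maxThreads = 10;

    // Validate and set both values in one go, avoiding the ordering
    // problem that bites when each property checks against the other.
    public void SetMinMaxThreads(int min, int max)
    {
        if (min < 0 || max < 1 || min > max)
            throw new ArgumentOutOfRangeException();
        lock (stateLock)
        {
            minThreads = min;
            maxThreads = max;
        }
    }

    public int MinThreads { get { lock (stateLock) { return minThreads; } } }
    public int MaxThreads { get { lock (stateLock) { return maxThreads; } } }

    static void Main()
    {
        ThreadCounts t = new ThreadCounts();
        t.SetMinMaxThreads(5, 10);   // fine from (1, 10)
        t.SetMinMaxThreads(12, 15);  // fine even though the old max was 10
        Console.WriteLine(t.MinThreads + "," + t.MaxThreads);
    }
}
```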

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #14

>> When setting MaxThreads, if the new MaxThreads count is less then
MinThreads, either throw an exception or set the value of MinThreads to
that
of MaxThreads.


<snip>

I think I've remembered the reason for these being independent, odd as
it may seem. If you want to set both of them, you would have to either
set one of them twice (if they were self-regulating) catch a potential
exception (if they threw exceptions) or check the current values first.
For instance, say you want to get them to be (5, 10). If they're
currently (2, 3) then you couldn't set MinThreads first. If they're
currently (12, 15) then you couldn't set MaxThreads first.

Possibly the solution to this is to have a separate method to set them
both atomically - what do you reckon?

I thought that might have been the reason, in which case an atomic method
that set both would do the trick, and then you could leave the other checks
in the property setter.

Jul 21 '05 #15


"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
Jon Skeet [C# MVP] <sk***@pobox.com> wrote:
> In WorkerThreadLoop (line 600), the LOC:
> Monitor.Wait(queueLock,idlePeriod);
> This checks the actual variable idlePeriod, not the property. This
> means
> that this check occurs outside the protection of the stateLock monitor.
> Is
> this supposed to be like this? If not, I would suggest renaming all
> private
> fields to use a leading underscore - this makes it easier to
> distinguish
> between internal fields and properties.


I suspect it's meant to be IdlePeriod, yes. I also suspect I'd have
made the same mistake even if I'd changed the convention. Personally I
hate the underscore convention - I find it makes code harder to read.
It's definitely a personal thing, but I'd need very convincing reasons
to change habits now :)


<snip>

While it's a bug, the fix *isn't* to make it use the IdlePeriod
property. I believe it's actually meant to be waitPeriod instead of
idlePeriod. Put it this way - if it's not meant to be waitPeriod, I
don't know why I've bothered calculating waitPeriod to start with.

The reason I *can't* use IdlePeriod in there is that it could cause a
deadlock - that would acquire stateLock when the thread already owns
queueLock, and the reverse is true elsewhere. I've carefully documented
the order in which locks are allowed to be acquired :) Just another of
those instances where it's scarily easy to get things wrong...

Yup, which is why I wasn't sure if it was right or wrong :-). Checking lock
orders is very tricky, time-consuming, and error prone. Do you recall what
the purpose of the check was?


Jul 21 '05 #16

>> analyzing the correctness of multithreaded behavior is
very difficult!
Tell me about it :)


For a whale of a good time, try doing this in a kernel mode device driver
:-)
When setting the property of the Name, you might check to see if it has
already been initialized, and if it has throw an exception (similar to
current .net functionality). This is more a question of how it should
behave, though - the code isn't wrong.
Ah - I didn't realise .NET did this. It's an interesting debate though
- I know when I was writing some servlet code in Java, it was helpful
in debug to set the name of the thread itself to the URL of the request
being processed.

Given the .NET behaviour, perhaps I shouldn't be setting the Name
property of the individual threads? Not sure.


Agreed. I feel this is one of those issues that does not have a single
"correct" answer.
Rather than make it a "set once" property (which seems a bit clunky to
me) I think I'd rather have another constructor for the pool which
takes the name as a parameter - and then I can remove the "setter" for
the Name property entirely.
That would work. Another option is to add the name to the workitem, and then
set the name of the thread to that value immediately before invoking the
worker callback. Since the only legitimate use of the thread name is for
diagnositic/instrumentation purposes this seems to me like a good place to
set the name. That way the thread takes on the identity of the worker item
being processed. You could set the name to idle when it has no more items
left to process.
Processing the OnException event #1: it locks around making a copy of the
exceptionHandler event, and outside of that lock it iterates the list and
forwards the exception to all subscribers (calls: eh(this,e)). This
brings
up the following scenario:
1. OnException calls eh(this,e)
2. The invoked handler wants to remove a different handler from the list,
perhaps because the exception has invalidated some other logic.
3. It calls WorkerException -= new ExceptionHandler(yada);

This will remove the handler from the actual event but not from the event
that is being iterated. The result is that the handler will actually be
called even though the subscriber believes it is no longer subscribed.


Ooh, nasty. I hadn't thought of that.
There are a couple of ways to handle this: first, you could define this
as
"bad" and a "good" programmer should not do this - caveat programmer -
and you deserve whatever you get. You could programmatically handle this
by
adding logic to detect if a remove occurs while iterating the subscriber
list, and then to skip delegates that have been removed during the
callback - there should be ways to synchronize this so that even twisted
code paths will be handled correctly. If the subscriber removes itself in
its own callback this should have no impact.


Yikes. That involves asking the delegate for the invocation list and
doing each invocation myself, correct?


Yes, but that isn't any more onerous than handling the add/remove as you are
already doing. I think something along the lines of the following would work
(caveat - this is completely untested and off the top of my head).
bool _runningList = false;
object _unsubscribeLock = new object();
ArrayList _unsubscriptionList = new ArrayList();

public event ExceptionHandler WorkerException
{
<snip>
remove
{
    lock (eventLock)
    {
        exceptionHandler -= value;
        if ( _runningList )
        {
            lock (_unsubscribeLock)
            {   // add delegate to temp list
                _unsubscriptionList.Add(value);
            }
        }
    }
}
}

and in the invocation list

void OnException(Exception e)
{
    // NOTE: I disagree with ignoring the exceptions, but that's a
    // different tale :-)
    // If anything goes wrong within the exception handler, ignore it.
    try
    {
        ExceptionHandler eh;
        lock (eventLock)
        {
            eh = exceptionHandler;
        }
        if (eh != null)
        {
            lock (_unsubscribeLock)
            {
                _unsubscriptionList.Clear(); // reset list
            }
            _runningList = true;
            foreach ( ExceptionHandler d in eh.GetInvocationList() )
            {
                if ( !IsUnsubscribed(d) )
                    d(this, e);
            }
            //eh(this, e);
        }
    }
    catch (Exception) // add better exception handling here
    {
    }
    finally
    {
        _runningList = false;
    }
}
private bool IsUnsubscribed(ExceptionHandler d)
{
    lock (_unsubscribeLock)
    {
        foreach ( ExceptionHandler eh in _unsubscriptionList )
        {
            if ( d.Equals(eh) )
                return true;
        }
    }
    return false;
}
I would also change the lock used to synchronize the invocation list from
the event subscription so that the user can unsubscribe while processing the
exception even if called on a different thread. As it is you may get into a
deadlock if the user makes a blocking call to another thread that tries to
unsubscribe from the event while blocking in the worker callback.

Note: This is just a suggestion - this would not work for the final solution
because it does not account for multiple exceptions that may be processed
from multiple worker threads for multiple work items. So it really needs a
queue per worker thread.
Processing the OnException event #2: The callback probably ought to be
encapsulated in a try-catch so that a handler that throws does not abort
other subscribers to the exception event. There is a try-catch around the
entire body but then one exception terminates the entire list of
callbacks.
Right. I thought I'd documented that as intended behaviour, but I see
that I haven't. (It turns out I've done it in ThreadController.)


I haven't seen that class yet. Perhaps you could copy the documentation over
to this class.
Processing the OnException event #3: Never never never use an empty
catch!
(my own pet peeve :-)
I usually supply a method that I call SwallowException. This can output
the
exception to the debug port, log it, whatever, but it is not the same as
publishing it. There are times where swallowing an exception is the
correct
thing to do but these should be rare. Also, when you swallow using a
naked
catch you are swallowing all exceptions, including non-CLSCompliant
exceptions. It is better to use ... catch(Exception) so that it does not
blindly swallow everything.
But then, do I definitely want the ThreadPool to stop if a non-
CLSCompliant exception is thrown? ThreadAbortException won't be
swallowed, because it *can't* be swallowed. Then again, I see that I
don't have an extra catch{} block in the main worker loop, so I ought
to at least be consistent.


My take on non-CLSCompliant exceptions is...let the runtime barf if you get
one (IOW, don't catch it). The reason is that as far as I know the only way
you can even throw a non-clscompliant exception is to do it directly in
IL...I don't know of any language that will even let you generate one. You
can't get one of those even from interop because the runtime catches and
translates all exceptions from the interop layer. So if you do get one
you're probably in an indeterminate state anyway and there are no
guarantees about what will happen next. I'd rather exit the app than keep
running under those conditions.


This is definitely an area I'm relatively unhappy about in .NET. If a
non-compliant exception is thrown, but you want to keep going anyway,
there's not a lot you can do - there isn't an exception to pass to any
handlers.

Agreed. I understand why they designed it in, but the problem is that they
did not give us a complete story about what it means, how it should be
handled, and what to expect if it is not handled.
I take your point about not just swallowing it wholesale - but if I
call SwallowException, what should a library do in this case? It can't
know about debugging/logging etc, and we're already in an OnException
case here - I don't want to go down a "OnExceptionInExceptionHandler"
route :)

Advice welcome here.
I feel that it is ok to do at least some minimal tracing - heck, the runtime
itself will generate tracing when a listener is subscribed. I'd rather have
a trace facility built into the threadpool that the user can opt out of at
runtime. These kinds of breadcrumbs are invaluable when troubleshooting
nasty bugs. There could be a static property called UseDebugOutput that by
default is set to true. When it is true it could execute some logic like
this.

static void DebugOutput(string format, params object[] args)
{
    if ( UseDebugOutput )
    {
        string msg = string.Format(format, args);
        if ( System.Diagnostics.Debugger.IsAttached )
            System.Diagnostics.Debug.WriteLine(msg);
        else
            System.Diagnostics.Trace.WriteLine(msg);
    }
}

Processing the OnException event #4: An option would be to define a
handler
that uses an out bool argument to indicate if further processing should
be
halted. I think it might be likely that a subscriber may only care about
exceptions that occur on a particular thread or during a particular
operation, and not want other subscribers to see or handle those events.


Right. That's easy enough to do if I go down the manual invocation
list, but not easy otherwise. I'll think about it.


Fair enough.
I think for the sake of convenience it would be better to make it a ref
parameter which would always have the value "false" on entry, just so
that in the normal case of "keep processing" the handler didn't have to
specifically set the value. What do you think?

That sounds like a good idea.
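The ref-parameter idea might look like this (untested sketch; the delegate and names are illustrative - manual iteration over the invocation list is what makes early termination possible):

```csharp
using System;

// The ref bool arrives as false; any handler may set it to true to
// stop the remaining handlers from being called.
delegate void CancellableExceptionHandler(object sender, Exception e,
                                          ref bool cancel);

class CancelDemo
{
    static event CancellableExceptionHandler WorkerException;

    static void OnException(Exception e)
    {
        CancellableExceptionHandler eh = WorkerException;
        if (eh == null)
            return;
        bool cancel = false;
        foreach (CancellableExceptionHandler d in eh.GetInvocationList())
        {
            d(null, e, ref cancel);
            if (cancel)
                break;  // a handler asked to stop further processing
        }
    }

    static int calls;

    static void First(object sender, Exception e, ref bool cancel)
    {
        calls++;
        cancel = true;   // stop here
    }

    static void Second(object sender, Exception e, ref bool cancel)
    {
        calls++;         // never reached
    }

    static void Main()
    {
        WorkerException += new CancellableExceptionHandler(First);
        WorkerException += new CancellableExceptionHandler(Second);
        OnException(new Exception("test"));
        Console.WriteLine(calls);
    }
}
```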
In addition, you may want to add an argument that passes the same
arguments
that were passed to the worker item queue so that the context that the
exception occurred in can be established. Without that information it
will
be difficult for the exception callback handler to determine what
operation
the exception occurred in.


That's a nice idea. The disadvantage is that the parameters couldn't be
garbage collected until after a job completed or failed. I don't know
how significant a disadvantage that would be.


I don't think that matters much because if an exception occurs it is already
going down a sub-optimal code path anyway.
It would also be very useful to have a log of exceptions that occurred
during a callback.


Not entirely sure what you mean here - could you elaborate?

I was thinking about logging it to the event log itself. There are some
exceptions that are more setup or configuration type errors (setting the
MinThreadPoolSize to an invalid value) that should be obviously apparent,
but some runtime problems are extremely tricky to capture, let alone
diagnose and fix. In the case of a threadpool, because unhandled exceptions
go unnoticed there is a real danger that the application won't even realize
that an exception occurred and it could wind up hanging without having a
clue what went wrong. This could happen if the app did not subscribe to the
exception event. You could even filter the logging so that it only logged if
the exception event was not subscribed to.
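That filtered fallback could be as simple as this (sketch only; `Trace.WriteLine` stands in for the heavier `EventLog.WriteEntry`, and the event is a stand-in for the pool's real one):

```csharp
using System;
using System.Diagnostics;

class FallbackLoggingDemo
{
    static event EventHandler WorkerException;  // stand-in for the real event

    static string lastLogged;                   // captured for the demo

    static void OnException(Exception e)
    {
        EventHandler eh = WorkerException;
        if (eh != null)
        {
            eh(null, EventArgs.Empty);
        }
        else
        {
            // Nobody subscribed: leave a breadcrumb rather than letting
            // the failure vanish silently.
            lastLogged = "Unhandled worker exception: " + e.Message;
            Trace.WriteLine(lastLogged);
        }
    }

    static void Main()
    {
        OnException(new InvalidOperationException("boom"));
        Console.WriteLine(lastLogged);
    }
}
```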

An ExecuteWorkItem Option: When firing the OnBeforeWorkItem, use it to
check
if the subscriber wants to cancel the work item. Or add a cancel
mechanism.


That's a nice idea. Again, passing in the work item which is about to
be executed, right?

Yup. Or passing a magic cookie that represents the work item (it could be
generated and returned to the caller when the work item is queued up).
ExecuteWorkItem: This is called from within a large loop in method
WorkerThreadLoop(), and the try-catch covers everything in that loop. I
would surround the call to ExecuteWorkItem with a local try-catch in
addition to the outer try-catch. This will distinguish between exceptions
caused by other logic and exceptions in the worker callback itself.
Yes, that makes sense. Any suggestions as to what should happen to
logic exceptions? I'm inclined to let them propagate up.


This is perhaps a policy question. I think those sorts of exceptions
represent bugs in the threadpool class itself rather than a user error. In
that case I am not sure what should happen as it means that the threadpool
class has detected an internal error and it may not be able to continue to
operate.
AddWorkItem: The comment says to start the thread before adding the item,
but the actual code starts the thread after the item has been added to
the
queue and the Monitor pulsed.
Ah, I'm with you. That looks like it's an incomplete comment, which
suggests that I should look *very* carefully at that code. I don't like
finding bits which clearly I've left incomplete :( I seem to remember
having an interesting time trying to work out all the appropriate
semantics there.

Yes, this part of the operation is very tricky. I think there is a fair bit
of work to do to ensure that it operates correctly.

I've got a freeware threadpool class that I got from a MSFT blog over a year
ago - I'll forward it to your email. It is structured very differently from
yours but it may give you some ideas. To be honest I find yours much easier
to understand :-).
A couple of points: the thread probably ought
to be started first, and the code should probably check if a thread is
available for use before starting a new thread. As it is I believe it
will
keep creating threads until MaxThreads is reached regardless of whether one is
actually needed. There could be an IdleThread count that is checked
before a
new thread is created. The code as is ought to work but it could perhaps
be
optimized.


That's actually deliberate, I *think*. (It's a while since I wrote the
code, admittedly.) I'm not convinced it was the correct decision
though. I'll think about it more with a few use cases.

Fair enough.
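For what it's worth, the idle-count check being discussed is a one-liner once the counts exist (sketch; names illustrative):

```csharp
using System;

class SpawnDecisionDemo
{
    // Decide whether AddWorkItem should start another thread: only when
    // no existing thread is idle and the pool is below its ceiling.
    static bool ShouldStartThread(int idleThreads, int totalThreads,
                                  int maxThreads)
    {
        return idleThreads == 0 && totalThreads < maxThreads;
    }

    static void Main()
    {
        Console.WriteLine(ShouldStartThread(0, 3, 10));   // all busy -> start
        Console.WriteLine(ShouldStartThread(2, 3, 10));   // idle thread -> don't
        Console.WriteLine(ShouldStartThread(0, 10, 10));  // at max -> don't
    }
}
```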
Personally I
hate the underscore convention - I find it makes code harder to read.
It's definitely a personal thing, but I'd need very convincing reasons
to change habits now :)

Very definitely personal - I love it :-) It makes it easy for me to tell the
difference between a field and a property.

I think it should be too, but I haven't thought about it properly. That
whole loop is far too big at the moment - it desperately needs
refactoring, but last time I tried I failed to make it any better. I'll
have another go.
This is definitely non-trivial to get right...
Priority queues.
So each item has a priority, and the queue is sorted by priority? That
makes sense, and shouldn't be too hard to implement.


Yes, and it is very useful for a large app.
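A simple way to get that behaviour is one FIFO queue per priority level, dequeuing from the highest non-empty level (untested sketch; a higher number means more urgent):

```csharp
using System;
using System.Collections;

class PriorityWorkQueue
{
    // Keyed by priority; SortedList keeps keys in ascending order.
    readonly SortedList queues = new SortedList();

    public void Enqueue(object item, int priority)
    {
        Queue q = (Queue)queues[priority];
        if (q == null)
        {
            q = new Queue();
            queues[priority] = q;
        }
        q.Enqueue(item);  // FIFO within a priority level
    }

    public object Dequeue()
    {
        // Walk from the highest priority down to the lowest.
        for (int i = queues.Count - 1; i >= 0; i--)
        {
            Queue q = (Queue)queues.GetByIndex(i);
            if (q.Count > 0)
                return q.Dequeue();
        }
        return null;  // nothing queued
    }

    static void Main()
    {
        PriorityWorkQueue q = new PriorityWorkQueue();
        q.Enqueue("low", 1);
        q.Enqueue("high", 5);
        q.Enqueue("low2", 1);
        Console.WriteLine(q.Dequeue());
        Console.WriteLine(q.Dequeue());
        Console.WriteLine(q.Dequeue());
    }
}
```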
Set/get the priority of the worker threads. This ought to be checked
before
and after the callback so that if a callback changes the priority the
threadpool resets it back to the correct setting.


Agreed - good point and easy to implement. (In fact, I'm not sure that
actual checking is necessary - just setting the priority in each place
should do the trick, shouldn't it? I would hope that setting a thread's
priority to the one it had before isn't significantly more expensive
than checking the existing priority.)

Yes, setting it immediately before and after each worker item callback
should do the trick. I think you want to do both to ensure that the NT
thread scheduler treats each thread the same.
Related to this, priority inversion detection/prevention.


I don't know much about priority inversion, but this sounds pretty
tricky. Any suggestions?


I've used classes that implement this. The basic idea is that multiple
threads want to acquire a lock on a synchronization primitive, such as a
mutex. Rather than acquiring/releasing it directly, the mutex is wrapped by a
class that initializes the mutex and which takes as an argument the base
priority that all threads attempting to acquire the lock must be at. When a
thread attempts to get the lock the mutex wrapper ensures that the thread is
executing at the correct priority level and then attempts to acquire the
lock. After releasing the lock it sets the priority back to the original
level (there's obviously more to it than that, but that's the basic idea).

This ensures that all threads that own the mutex are running at a base
priority level such that a thread that normally runs at a low priority level
will instead run at an elevated level. This prevents the system from
suspending the thread that owns the mutex in favor of a third (or more)
thread that is running at a level higher than the base level of the low
priority thread but below the priority level of the mutex.
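A much-simplified sketch of that wrapper (boost to the lock's base priority before contending, restore on release; real priority-inversion avoidance needs more than this, as noted above):

```csharp
using System;
using System.Threading;

class PriorityLock
{
    readonly object mutex = new object();
    readonly ThreadPriority basePriority;

    public PriorityLock(ThreadPriority basePriority)
    {
        this.basePriority = basePriority;
    }

    public void Run(ThreadStart criticalSection)
    {
        Thread current = Thread.CurrentThread;
        ThreadPriority original = current.Priority;
        if (original < basePriority)
            current.Priority = basePriority;  // boost before contending
        Monitor.Enter(mutex);
        try
        {
            criticalSection();
        }
        finally
        {
            Monitor.Exit(mutex);
            current.Priority = original;      // restore afterwards
        }
    }

    static void Report()
    {
        Console.WriteLine(Thread.CurrentThread.Priority); // boosted here
    }

    static void Main()
    {
        PriorityLock pl = new PriorityLock(ThreadPriority.AboveNormal);
        Thread.CurrentThread.Priority = ThreadPriority.BelowNormal;
        pl.Run(new ThreadStart(Report));
        Console.WriteLine(Thread.CurrentThread.Priority); // restored here
    }
}
```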
Cancellation logic (remove an item from the queue).


Mmm. That sounds potentially tricky, unless I add an identification
mechanism for items. I don't want to do it by comparing parameters.
Does a separate identification mechanism (just object, which if null
means the item can't be removed, and Equals is called for each item in
the queue to check for a match) sound okay?


I think a magic cookie may be the answer here. Just return a number that is
associated with the work item and return it from the Queueing call.
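The cookie scheme might look like this (untested sketch, names illustrative): `AddWorkItem` returns a number identifying the item, and `Cancel` removes the item from the queue if it hasn't started yet.

```csharp
using System;
using System.Collections;

class CookieQueue
{
    readonly object queueLock = new object();
    // Each entry is object[] { cookie, item }; a linear scan keeps
    // the sketch simple.
    readonly ArrayList queue = new ArrayList();
    long nextCookie = 1;

    public long AddWorkItem(object item)
    {
        lock (queueLock)
        {
            long cookie = nextCookie++;
            queue.Add(new object[] { cookie, item });
            return cookie;
        }
    }

    // True if the item was still queued and has now been removed.
    public bool Cancel(long cookie)
    {
        lock (queueLock)
        {
            for (int i = 0; i < queue.Count; i++)
            {
                object[] entry = (object[])queue[i];
                if ((long)entry[0] == cookie)
                {
                    queue.RemoveAt(i);
                    return true;
                }
            }
            return false;  // already dequeued (or never existed)
        }
    }

    static void Main()
    {
        CookieQueue q = new CookieQueue();
        long a = q.AddWorkItem("first");
        q.AddWorkItem("second");
        Console.WriteLine(q.Cancel(a));   // removed
        Console.WriteLine(q.Cancel(a));   // already gone
        Console.WriteLine(q.queue.Count); // one item left
    }
}
```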
Future considerations - CER mechanism may be useful for logic in finally
blocks. I need to see documenation on this...


Without knowing more about CER, I don't want to comment on this.

Fair enough. I've read a bit about it on Brumme's weblog but I have not yet
seen the docs on it.

Cheers
Dave

Jul 21 '05 #17

David Levine <no****************@wi.rr.com> wrote:
I think I've remembered the reason for these being independent, odd as
it may seem. If you want to set both of them, you would have to either
set one of them twice (if they were self-regulating) catch a potential
exception (if they threw exceptions) or check the current values first.
For instance, say you want to get them to be (5, 10). If they're
currently (2, 3) then you couldn't set MinThreads first. If they're
currently (12, 15) then you couldn't set MaxThreads first.

Possibly the solution to this is to have a separate method to set them
both atomically - what do you reckon?


I thought that might have been the reason, in which case an atomic method
that set both would do the trick, and then you could leave the other checks
in the property setter.


Yup. I'll do it that way.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #18

David Levine <no****************@wi.rr.com> wrote:
While it's a bug, the fix *isn't* to make it use the IdlePeriod
property. I believe it's actually meant to be waitPeriod instead of
idlePeriod. Put it this way - if it's not meant to be waitPeriod, I
don't know why I've bothered calculating waitPeriod to start with.

The reason I *can't* use IdlePeriod in there is that it could cause a
deadlock - that would acquire stateLock when the thread already owns
queueLock, and the reverse is true elsewhere. I've carefully documented
the order in which locks are allowed to be acquired :) Just another of
those instances where it's scarily easy to get things wrong...


Yup, which is why I wasn't sure if it was right or wrong :-). Checking lock
orders is very tricky, time-consuming, and error prone. Do you recall what
the purpose of the check was?


Not sure what you mean. I'm not sure I can get away with *not*
requiring a certain lock order, unless I have just a single lock (which
is a possibility, I suspect - I'd have to check).

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #19


"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
David Levine <no****************@wi.rr.com> wrote:
> While it's a bug, the fix *isn't* to make it use the IdlePeriod
> property. I believe it's actually meant to be waitPeriod instead of
> idlePeriod. Put it this way - if it's not meant to be waitPeriod, I
> don't know why I've bothered calculating waitPeriod to start with.
>
> The reason I *can't* use IdlePeriod in there is that it could cause a
> deadlock - that would acquire stateLock when the thread already owns
> queueLock, and the reverse is true elsewhere. I've carefully documented
> the order in which locks are allowed to be acquired :) Just another of
> those instances where it's scarily easy to get things wrong...


Yup, which is why I wasn't sure if it was right or wrong :-). Checking
lock
orders is very tricky, time-consuming, and error prone. Do you recall
what
the purpose of the check was?


Not sure what you mean. I'm not sure I can get away with *not*
requiring a certain lock order, unless I have just a single lock (which
is a possibility, I suspect - I'd have to check).


Sorry if I wasn't clear. I meant that I had not analyzed the relationship
between the stateLock and the queueLock to verify that the lock order was
guaranteed if you used the property instead of the field. I agree that you
need a certain lock order, or a single lock.
Jul 21 '05 #20

David Levine <no****************@wi.rr.com> wrote:
Not sure what you mean. I'm not sure I can get away with *not*
requiring a certain lock order, unless I have just a single lock (which
is a possibility, I suspect - I'd have to check).


Sorry if I wasn't clear. I meant that I had not analyzed the relationship
between the stateLock and the queueLock to verify that the lock order was
guaranteed if you used the property instead of the field. I agree that you
need a certain lock order, or a single lock.


Right. I'll have a look and see whether I can combine them without too
much impact. (The event lock is a simple one, fortunately - I don't
need to access anything else within it, basically.)

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #21


"David Levine" <no****************@wi.rr.com> wrote in message
news:eX**************@TK2MSFTNGP15.phx.gbl...
I stated it badly...by "system" I was referring to the managed process. I
was unaware of the 1000 IOCP threads/process; is this a Windows limit or a
CLR limit?


Both are CLR managed pools, so it's a CLR limit.

Willy.
Jul 21 '05 #22

David Levine <no****************@wi.rr.com> wrote:
Ah - I didn't realise .NET did this. It's an interesting debate though
- I know when I was writing some servlet code in Java, it was helpful
in debug to set the name of the thread itself to the URL of the request
being processed.

Given the .NET behaviour, perhaps I shouldn't be setting the Name
property of the individual threads? Not sure.
Agreed. I feel this is one of those issues that does not have a single
"correct" answer.
Rather than make it a "set once" property (which seems a bit clunky to
me) I think I'd rather have another constructor for the pool which
takes the name as a parameter - and then I can remove the "setter" for
the Name property entirely.


That would work. Another option is to add the name to the workitem, and then
set the name of the thread to that value immediately before invoking the
worker callback. Since the only legitimate use of the thread name is for
diagnostic/instrumentation purposes this seems to me like a good place to
set the name. That way the thread takes on the identity of the worker item
being processed. You could set the name to idle when it has no more items
left to process.


Unfortunately, that doesn't work due to .NET only allowing you to set
the thread name once :(
Yikes. That involves asking the delegate for the invocation list and
doing each invocation myself, correct?


Yes, but that isn't any more onerous than handling the add/remove as you are
already doing. I think something along the lines of the following would work
(caveat - this is completely untested and off the top of my head).

bool _runningList = false;
object _unsubscribeLock = new object();
ArrayList _unsubscriptionList = new ArrayList();

public event ExceptionHandler WorkerException
{
<snip>
remove
{
    lock (eventLock)
    {
        exceptionHandler -= value;
        if ( _runningList )
        {
            lock (_unsubscribeLock)
            {   // add delegate to temp list
                _unsubscriptionList.Add(value);
            }
        }
    }
}
}


Unfortunately it's not that simple, for the reasons you state later.

Given that all of this is entirely different to the way .NET treats
event handlers the rest of the time, I think I'll go for something like
this:

1) Implement the idea of cancelling further processing of the event - I
like that.

2) Use the normal .NET semantics of "whatever was subscribed when the
event is fired, use that" - don't try to react to changes during the
event.

3) Use the normal .NET semantics of an exception stopping further
processing.

4) Add some sort of exception logging for this and the other events.

<snip>
I would also change the lock used to synchronize the invocation list from
the event subscription so that the user can unsubscribe while processing the
exception even if called on a different thread. As it is you may get into a
deadlock if the user makes a blocking call to another thread that tries to
unsubscribe from the event while blocking in the worker callback.
Not with the current code. I don't hold the event lock while executing
the callback.
Note: This is just a suggestion - this would not work for the final solution
because it does not account for multiple exceptions that may be processed
from multiple worker threads for multiple work items. So it really needs a
queue per worker thread.


Yup. That's where things get tricky. I think it *could* be implemented
using a very clever linked list to track the whole invocation list
(never actually combining any delegates) but I don't think it's worth
it, to be honest.
Processing the OnException event #2: The callback probably ought to be
encapsulated in a try-catch so that a handler that throws does not abort
other subscribers to the exception event. There is a try-catch around the
entire body but then one exception terminates the entire list of
callbacks.


Right. I thought I'd documented that as intended behaviour, but I see
that I haven't. (It turns out I've done it in ThreadController.)


I haven't seen that class yet. Perhaps you could copy the documentation over
to this class.


I'll certainly revise all the documentation when I've got the semantics
above implemented - and probably implement and document the same
semantics in ThreadController.
But then, do I definitely want the ThreadPool to stop if a non-
CLSCompliant exception is thrown? ThreadAbortException won't be
swallowed, because it *can't* be swallowed. Then again, I see that I
don't have an extra catch{} block in the main worker loop, so I ought
to at least be consistent.


My take on non-CLSCompliant exceptions is...let the runtime barf if you get
one (IOW, don't catch it). The reason is that as far as I know the only way
you can even throw a non-clscompliant exception is to do it directly in
IL...I don't know of any language that will even let you generate one. You
can't get one of those even from interop because the runtime catches and
translates all exceptions from the interop layer. So if you do get one
you're probably in an indeterminate state anyway and there are no
guarantees about what will happen next. I'd rather exit the app than keep
running under those conditions.


Yup, sounds good to me.
This is definitely an area I'm relatively unhappy about in .NET. If a
non-compliant exception is thrown, but you want to keep going anyway,
there's not a lot you can do - there isn't an exception to pass to any
handlers.


Agreed. I understand why they designed it in, but the problem is that they
did not give us a complete story about what it means, how it should be
handled, and what to expect if it is not handled.


Yup.
I take your point about not just swallowing it wholesale - but if I
call SwallowException, what should a library do in this case? It can't
know about debugging/logging etc, and we're already in an OnException
case here - I don't want to go down an "OnExceptionInExceptionHandler"
route :)

Advice welcome here.


I feel that it is ok to do at least some minimal tracing - heck, the runtime
itself will generate tracing when a listener is subscribed. I'd rather have
a trace facility built into the threadpool that the user can opt out of at
runtime. These kinds of breadcrumbs are invaluable when troubleshooting
nasty bugs. There could be a static property called UseDebugOutput that by
default is set to true. When it is true it could execute some logic like
this.

static void DebugOutput(string format, params object[] args)
{
    if (UseDebugOutput)
    {
        string msg = string.Format(format, args);
        if (System.Diagnostics.Debugger.IsAttached)
            System.Diagnostics.Debug.WriteLine(msg);
        else
            System.Diagnostics.Trace.WriteLine(msg);
    }
}


Right. Presumably that means the code should be compiled with TRACE and
DEBUG defined, right? Otherwise those calls are removed entirely, I
believe. That's the other thing about libraries - having a debug
version and a separate release version is often a pain, IMO :(

<snip>

[Exception handler getting parameters]
That's a nice idea. The disadvantage is that the parameters couldn't be
garbage collected until after a job completed or failed. I don't know
how significant a disadvantage that would be.


I don't think that matters much because if an exception occurs it is already
going down a sub-optimal code path anyway.


But the problem is, you've got to keep the parameters around *whether
or not* an exception is thrown, just in case an exception is thrown.
It would also be very useful to have a log of exceptions that occurred
during a callback.


Not entirely sure what you mean here - could you elaborate?


I was thinking about logging it to the event log itself. There are some
exceptions that are more setup or configuration type errors (setting the
MinThreadPoolSize to an invalid value) that should be obviously apparent,
but some runtime problems are extremely tricky to capture, let alone
diagnose and fix. In the case of a threadpool, because unhandled exceptions
go unnoticed there is a real danger that the application won't even realize
that an exception occurred and it could wind up hanging without having a
clue what went wrong. This could happen if the app did not subscribe to the
exception event. You could even filter the logging so that it only logged if
the exception event was not subscribed to.
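That "log only when nobody subscribed" filter might look something like the sketch below. The names are invented, and the delegate-based fallback logger is a stand-in - the real pool could call EventLog.WriteEntry there instead:

```csharp
using System;

class ExceptionRouting
{
    public event UnhandledExceptionEventHandler WorkItemException;

    // Stand-in breadcrumb writer; a real pool might write to the NT
    // event log here so the failure is visible even with no subscribers.
    public Action<string> FallbackLog = Console.Error.WriteLine;

    public void ReportException(Exception ex)
    {
        UnhandledExceptionEventHandler handlers = WorkItemException;
        if (handlers != null)
            handlers(this, new UnhandledExceptionEventArgs(ex, false));
        else
            FallbackLog("Unhandled work item exception: " + ex.Message);
    }

    static void Main()
    {
        ExceptionRouting pool = new ExceptionRouting();
        string logged = null;
        pool.FallbackLog = msg => logged = msg;

        // No subscribers yet: the fallback log gets the breadcrumb.
        pool.ReportException(new InvalidOperationException("boom"));
        Console.WriteLine(logged != null); // True

        // With a subscriber, the event fires and nothing is logged.
        bool handled = false;
        pool.WorkItemException += (s, e) => handled = true;
        pool.ReportException(new InvalidOperationException("boom"));
        Console.WriteLine(handled); // True
    }
}
```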


Right, that's not a bad idea. I haven't used the event log much - I'll
investigate.
An ExecuteWorkItem Option: When firing the OnBeforeWorkItem, use it to
check
if the subscriber wants to cancel the work item. Or add a cancel
mechanism.


That's a nice idea. Again, passing in the work item which is about to
be executed, right?

Yup. Or passing a magic cookie that represents the work item (it could be
generated and returned to the caller when the work item is queued up).


I think I prefer the idea of the caller optionally providing it - they
can then give something useful if they want to, and just ignore it if
they're not going to use the facility anyway.
ExecuteWorkItem: This is called from within a large loop in method
WorkerThreadLoop(), and the try-catch covers everything in that loop. I
would surround the call to ExecuteWorkItem with a local try-catch in
addition to the outer try-catch. This will distinguish between exceptions
caused by other logic and exceptions in the worker callback itself.


Yes, that makes sense. Any suggestions as to what should happen to
logic exceptions? I'm inclined to let them propagate up.


This is perhaps a policy question. I think those sorts of exceptions
represent bugs in the threadpool class itself rather than a user error. In
that case I am not sure what should happen as it means that the threadpool
class has detected an internal error and it may not be able to continue to
operate.


I think letting the exception propagate is probably the way forward -
it's a shame I can't detect whether or not AppDomain.UnhandledException
has a handler attached already...
Ah, I'm with you. That looks like it's an incomplete comment, which
suggests that I should look *very* carefully at that code. I don't like
finding bits which clearly I've left incomplete :( I seem to remember
having an interesting time trying to work out all the appropriate
semantics there.

Yes, this part of the operation is very tricky. I think there is a fair bit
of work to do to ensure that it operates correctly.

I've got a freeware threadpool class that I got from a MSFT blog over a year
ago - I'll forward it to your email. It is structured very different from
yours but it may give you some ideas. To be honest I find yours much easier
to understand :-).


Received - I'll have a look. I think the problem is that there's a
potential race condition just about whatever you do. I think I'll try
to fix it so it creates threads in fewer cases, but when in doubt it
should create one. I suspect the logic is "when I add the item to the
queue, was it empty before? If so, start a thread if none are running.
Otherwise, start a thread if fewer than the maximum number of threads
are running."
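That "when in doubt, start a thread" rule could be sketched roughly as below. MiniPool and its members are illustrative names, not the actual implementation - the point is that the start-a-thread decision is made under the same lock as the enqueue, so there is no lost-wakeup race:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class MiniPool
{
    readonly object padlock = new object();
    readonly Queue<Action> queue = new Queue<Action>();
    int workerCount;
    const int MaxThreads = 4;

    public void QueueWorkItem(Action item)
    {
        bool startThread;
        lock (padlock)
        {
            bool wasEmpty = queue.Count == 0;
            queue.Enqueue(item);
            // "Was the queue empty before? If so, start a thread if none
            // are running. Otherwise, start a thread if fewer than the
            // maximum number of threads are running."
            startThread = wasEmpty ? workerCount == 0
                                   : workerCount < MaxThreads;
            if (startThread) workerCount++;
        }
        if (startThread)
            new Thread(WorkerLoop) { IsBackground = true }.Start();
    }

    void WorkerLoop()
    {
        while (true)
        {
            Action item;
            lock (padlock)
            {
                // Exiting is also decided under the lock, so an enqueue
                // can never slip between "queue is empty" and "worker gone".
                if (queue.Count == 0) { workerCount--; return; }
                item = queue.Dequeue();
            }
            item();
        }
    }

    static void Main()
    {
        MiniPool pool = new MiniPool();
        int done = 0;
        ManualResetEvent finished = new ManualResetEvent(false);
        for (int i = 0; i < 5; i++)
            pool.QueueWorkItem(() =>
            {
                if (Interlocked.Increment(ref done) == 5) finished.Set();
            });
        finished.WaitOne();
        Console.WriteLine(done); // 5
    }
}
```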

<snip>
Personally I
hate the underscore convention - I find it makes code harder to read.
It's definitely a personal thing, but I'd need very convincing reasons
to change habits now :)


Very definitely personal - I love it :-) It makes it easy for me to tell the
difference between a field and a property.


I do that with case instead - apart from constants which are in Pascal
case.
I think it should be too, but I haven't thought about it properly. That
whole loop is far too big at the moment - it desperately needs
refactoring, but last time I tried I failed to make it any better. I'll
have another go.


This is definitely non-trivial to get right...


Yup. I think I can probably factor out the "work out how long to wait
for" code, which would help.

<snip>
Set/get the priority of the worker threads. This ought to be checked
before
and after the callback so that if a callback changes the priority the
threadpool resets it back to the correct setting.


Agreed - good point and easy to implement. (In fact, I'm not sure that
actual checking is necessary - just setting the priority in each place
should do the trick, shouldn't it? I would hope that setting a thread's
priority to the one it had before isn't significantly more expensive
than checking the existing priority.)

Yes, setting it immediately before and after each worker item callback
should do the trick. I think you want to do both to ensure that the NT
thread scheduler treats each thread the same.


I don't think I want to set it before the worker item callback itself -
I want to set it before the "before work item" event is raised, so that
if that decides to change the priority, it can. I'll set it to the
default again after the "after work item" event is raised. Sound
reasonable?
Related to this, priority inversion detection/prevention.


I don't know much about priority inversion, but this sounds pretty
tricky. Any suggestions?


I've used classes that implement this. The basic idea is that multiple
threads want to acquire a lock on a synchronization primitive, such as a
mutex. Rather than acquire/release it directly, the mutex is wrapped by a
class that initializes the mutex and which takes as an argument the base
priority that all threads attempting to acquire the lock must be at. When a
thread attempts to get the lock the mutex wrapper ensures that the thread is
executing at the correct priority level and then attempts to acquire the
lock. After releasing the lock it sets the priority back to the original
level (there's obviously more to it than that, but that's the basic idea).

This ensures that all threads that own the mutex are running at a base
priority level such that a thread that normally runs at a low priority level
will instead run at an elevated level. This prevents the system from
suspending the thread that owns the mutex in favor of a third (or more)
thread that is running at a level higher than the base level of the low
priority thread but below the priority level of the mutex.
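A rough sketch of such a wrapper, assuming the goal is a fixed "ceiling" priority held for the duration of the lock (PriorityCeilingLock is an invented name; there's more to a production version, as noted above):

```csharp
using System;
using System.Threading;

// Any thread that takes the lock is boosted to the ceiling priority
// while it holds it, so a low-priority owner cannot be starved by
// threads of intermediate priority (the classic inversion scenario).
class PriorityCeilingLock
{
    readonly object padlock = new object();
    readonly ThreadPriority ceiling;

    public PriorityCeilingLock(ThreadPriority ceiling)
    {
        this.ceiling = ceiling;
    }

    public void Execute(ThreadStart guarded)
    {
        ThreadPriority original = Thread.CurrentThread.Priority;
        Monitor.Enter(padlock);
        try
        {
            Thread.CurrentThread.Priority = ceiling; // boost while owning
            guarded();
        }
        finally
        {
            Thread.CurrentThread.Priority = original; // restore on release
            Monitor.Exit(padlock);
        }
    }

    static void Main()
    {
        PriorityCeilingLock gate =
            new PriorityCeilingLock(ThreadPriority.AboveNormal);
        gate.Execute(delegate
        {
            Console.WriteLine("in lock: " + Thread.CurrentThread.Priority);
        });
        Console.WriteLine("done");
    }
}
```

Note that how faithfully Thread.Priority maps to OS scheduling is platform-dependent, so this is only the shape of the idea.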


Right. I can see how that works, but I'm not sure
Cancellation logic (remove an item from the queue).


Mmm. That sounds potentially tricky, unless I add an identification
mechanism for items. I don't want to do it by comparing parameters.
Does a separate identification mechanism (just object, which if null
means the item can't be removed, and Equals is called for each item in
the queue to check for a match) sound okay?


I think a magic cookie may be the answer here. Just generate a number that is
associated with the work item and return it from the queueing call.


Again, I think I prefer the idea of the client providing the
identification, but after that it's fine.
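That client-provided identification scheme might look like the sketch below (names invented; a List stands in for the real queue, and Equals on the supplied id decides what matches):

```csharp
using System;
using System.Collections.Generic;

class CancellableQueue
{
    class WorkItem
    {
        public object Id;
        public Action Callback;
    }

    readonly object padlock = new object();
    readonly List<WorkItem> items = new List<WorkItem>();

    public void Enqueue(object id, Action callback)
    {
        lock (padlock)
            items.Add(new WorkItem { Id = id, Callback = callback });
    }

    // An item queued with a null id can never be cancelled; otherwise
    // Equals is called against each queued item's id to find a match.
    public bool Cancel(object id)
    {
        if (id == null) return false;
        lock (padlock)
        {
            for (int i = 0; i < items.Count; i++)
            {
                if (id.Equals(items[i].Id))
                {
                    items.RemoveAt(i);
                    return true;
                }
            }
        }
        return false;
    }

    public int Count
    {
        get { lock (padlock) return items.Count; }
    }

    static void Main()
    {
        CancellableQueue q = new CancellableQueue();
        q.Enqueue("job-a", () => { });
        q.Enqueue(null, () => { });           // anonymous: not cancellable
        Console.WriteLine(q.Cancel("job-a")); // True
        Console.WriteLine(q.Cancel(null));    // False
        Console.WriteLine(q.Count);           // 1
    }
}
```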
Future considerations - CER mechanism may be useful for logic in finally
blocks. I need to see documentation on this...


Without knowing more about CER, I don't want to comment on this.

Fair enough. I've read a bit about it on Brumme's weblog but I have not yet
seen the docs on it.


Right. I'll post again when I've implemented the changes and uploaded
them.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Jul 21 '05 #23


<snip>
Unfortunately, that doesn't work due to .NET only allowing you to set
the thread name once :(
Oops, right - darn it. I really do wonder what the justification was for
that decision - I can't think of a good reason why the name of a thread has
set-once semantics.

<snip>
Unfortunately it's not that simple, for the reasons you state later.

Given that all of this is entirely different to the way .NET treats
event handlers the rest of the time, I think I'll go for something like
this:

1) Implement the idea of cancelling further processing of the event - I
like that.

2) Use the normal .NET semantics of "whatever was subscribed when the
event is fired, use that" - don't try to react to changes during the
event.

3) Use the normal .NET semantics of an exception stopping further
processing.

4) Add some sort of exception logging for this and the other events.

All that sounds reasonable to me. After I sent out my last msg I started
thinking about the code snippet I had sent and realized there were a number
of problems with it...it's the sort of code you really want to think about
and not just whip up in 10 minutes.
Yup. That's where things get tricky. I think it *could* be implemented
using a very clever linked list to track the whole invocation list
(never actually combining any delegates) but I don't think it's worth
it, to be honest.
Agreed. If that ever became a requirement then the entire worker item/thread
mechanism may need to be reworked.
> But then, do I definitely want the ThreadPool to stop if a non-
> CLSCompliant exception is thrown? ThreadAbortException won't be
> swallowed, because it *can't* be swallowed. Then again, I see that I
> don't have an extra catch{} block in the main worker loop, so I ought
> to at least be consistent.

I remembered something that I missed the first time around - you can catch
the ThreadAbortException and reset it yourself (assuming you have sufficient
security).

<snip>
Right. Presumably that means the code should be compiled with TRACE and
DEBUG defined, right? Otherwise those calls are removed entirely, I
believe. That's the other thing about libraries - having a debug
version and a separate release version is often a pain, IMO :(

Good point. You can always define those constants even for a release build
if necessary.
<snip>

[Exception handler getting parameters]
> That's a nice idea. The disadvantage is that the parameters couldn't be
> garbage collected until after a job completed or failed. I don't know
> how significant a disadvantage that would be.
I don't think that matters much because if an exception occurs it is
already
going down a sub-optimal code path anyway.


But the problem is, you've got to keep the parameters around *whether
or not* an exception is thrown, just in case an exception is thrown.


Refactoring the code might provide a way to avoid that, but it probably
isn't worth the effort. I usually don't worry too much about the performance
and memory penalties associated with exceptions because once you accept that
after you have an exception the performance in that code path is horrible,
then the extra bit of horribleness isn't all that much worse. Unless you
have LOTS of exceptions occurring at a high rate, in which case the
application probably is doomed anyway.

I think it requires care in laying it all out, but I wouldn't let the
possible memory pressure stop me from providing some potentially extremely
useful functionality.

Another possibility to consider is to change the arguments in the work item
from an array to a single object. The caller can still provide an object
that is an array, but it wouldn't force everyone to allocate an array when a
single object will suffice.


I think letting the exception propagate is probably the way forward -
it's a shame I can't detect whether or not AppDomain.UnhandledException
has a handler attached already...
You could always add your own handler onto the chain, but I've not convinced
myself that this would be all that useful.

Received - I'll have a look. I think the problem is that there's a
potential race condition just about whatever you do. I think I'll try
to fix it so it creates threads in fewer cases, but when in doubt it
should create one. I suspect the logic is "when I add the item to the
queue, was it empty before? If so, start a thread if none are running.
Otherwise, start a thread if fewer than the maximum number of threads
are running."
That is exactly the type of logic that it would take, and while there are
ways to eliminate the race, it involves rather complicated handshaking
using two or more events - it isn't worth it in this case. I think a simpler
mechanism will suffice.

<snip>

I don't think I want to set it before the worker item callback itself -
I want to set it before the "before work item" event is raised, so that
if that decides to change the priority, it can. I'll set it to the
default again after the "after work item" event is raised. Sound
reasonable?


Yes that ought to do it. It needs to be set before the 1st callback occurs
and reset after the last callback has completed.

Cheers,
Dave
Jul 21 '05 #24

David Levine <no****************@wi.rr.com> wrote:
Unfortunately it's not that simple, for the reasons you state later.

Given that all of this is entirely different to the way .NET treats
event handlers the rest of the time, I think I'll go for something like
this:

1) Implement the idea of cancelling further processing of the event - I
like that.

2) Use the normal .NET semantics of "whatever was subscribed when the
event is fired, use that" - don't try to react to changes during the
event.

3) Use the normal .NET semantics of an exception stopping further
processing.

4) Add some sort of exception logging for this and the other events.
All that sounds reasonable to me.


Having rejigged the implementation for most of the stuff we've talked
about, I've now actually just got exceptions from the events
propagating up. That means I don't have to swallow *any* exceptions,
which is quite nice. I still need to work out exactly what to do if an
exception occurs in the work item and there are no handlers available,
but that's a separable piece of work.
After I sent out my last msg I started
thinking about the code snippet I had sent and realized there were a number
of problems with it...it's the sort of code you really want to think about
and not just whip up in 10 minutes.
Absolutely.
> But then, do I definitely want the ThreadPool to stop if a non-
> CLSCompliant exception is thrown? ThreadAbortException won't be
> swallowed, because it *can't* be swallowed. Then again, I see that I
> don't have an extra catch{} block in the main worker loop, so I ought
> to at least be consistent.
I remembered something that I missed the first time around - you can catch
the ThreadAbortException and reset it yourself (assuming you have sufficient
security).


Mmm... Not entirely sure I should be doing that though.
[Exception handler getting parameters]
> That's a nice idea. The disadvantage is that the parameters couldn't be
> garbage collected until after a job completed or failed. I don't know
> how significant a disadvantage that would be.

I don't think that matters much because if an exception occurs it is
already
going down a sub-optimal code path anyway.


But the problem is, you've got to keep the parameters around *whether
or not* an exception is thrown, just in case an exception is thrown.


Refactoring the code might provide a way to avoid that, but it probably
isn't worth the effort.


I don't see how it could though. Before calling the delegate, you've
got to make the decision about whether or not you're going to keep the
parameters around. If you are, you've got to keep them until the
delegate has finished executing. You can't make that decision at the
point of time when the work item throws an exception.
I usually don't worry too much about the performance
and memory penalties associated with exceptions because once you accept that
after you have an exception the performance in that code path is horrible,
then the extra bit of horribleness isn't all that much worse. Unless you
have LOTS of exceptions occurring at a high rate, in which case the
application probably is doomed anyway.
But the point is that the penalty is incurred for the non-exception
case as well...
I think it requires care in laying it all out, but I wouldn't let the
possible memory pressure stop me from providing some potentially extremely
useful functionality.
I've ended up making it optional. The work item class is now a separate
public class which publishes its parameter array, ID, priority and
whether or not parameters are preserved, allowing those to be set at
construction time. (It also allows the calling code to specify whether
the parameter array should be cloned at construction time or not.)
Another possibility to consider is to change the arguments in the work item
from an array to a single object. The caller can still provide an object
that is an array, but it wouldn't force everyone to allocate an array when a
single object will suffice.


An array has to be allocated either way, for the delegate invocation.
I've now added the params modifier so that the creation of the array
doesn't need to occur in client code though.

One thing neither of us noticed when talking about the cancelling and
prioritising was that Queue doesn't allow you to look at anything other
than the start of the queue, or add things anywhere other than the end.
So I've had to write a RandomAccessQueue collection which is most
efficient at normal queue operations, but allows other access too. I've
written it this morning, but given that I'm ill and it's about 500
lines long, I think I'll need to review it carefully before trusting it
too much. Writing code fast like that is energising but error-prone!

Jul 21 '05 #25

>
Having rejigged the implementation for most of the stuff we've talked
about, I've now actually just got exceptions from the events
propagating up. That means I don't have to swallow *any* exceptions,
which is quite nice. I still need to work out exactly what to do if an
exception occurs in the work item and there are no handlers available,
but that's a separable piece of work.
I'm not sure I'm following what's happening there, but that's ok for
now...when you get the code to a point you're happy with it make it
available and I'll take another look at it.

I remembered something that I missed the first time around - you can
catch
the ThreadAbortException and reset it yourself (assuming you have
sufficient
security).


Mmm... Not entirely sure I should be doing that though.


What I'm thinking is that a callback routine may have a ThreadAbort injected
into it, but since it is a threadpool thread it is really owned by the
threadpool class, so it ought to log it and possibly reset it. Also,
depending on the semantics offered by the class, you may want to save a copy
of the exception object and make it available later, either by someone
requesting a copy of it, or rethrowing the same (or a wrapped) exception
object but on a different thread. This is similar to what the async
callbacks do - when an exception occurs on another thread the exception is
saved. Later, when the code calls EndInvoke the exception is rethrown.
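That save-and-rethrow pattern can be sketched as below. CapturedCall is an invented name - the real async delegate machinery does this internally, with End() playing the role of EndInvoke:

```csharp
using System;
using System.Threading;

class CapturedCall
{
    Exception saved;
    readonly ManualResetEvent done = new ManualResetEvent(false);

    public void Begin(ThreadStart work)
    {
        Thread t = new Thread(delegate()
        {
            try { work(); }
            catch (Exception ex) { saved = ex; } // keep it for later
            finally { done.Set(); }
        });
        t.IsBackground = true;
        t.Start();
    }

    // Blocks until the work completes, then rethrows any saved
    // exception on the caller's thread.
    public void End()
    {
        done.WaitOne();
        if (saved != null) throw saved;
    }

    static void Main()
    {
        CapturedCall call = new CapturedCall();
        call.Begin(delegate { throw new InvalidOperationException("worker failed"); });
        try
        {
            call.End();
        }
        catch (InvalidOperationException ex)
        {
            Console.WriteLine("caught on caller: " + ex.Message);
        }
    }
}
```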

I don't see how it could though. Before calling the delegate, you've
got to make the decision about whether or not you're going to keep the
parameters around. If you are, you've got to keep them until the
delegate has finished executing. You can't make that decision at the
point of time when the work item throws an exception.
I think you are more concerned about this than I am. But wouldn't you have
to save them until the callback had completed running anyway? I think I am
missing something here.
<snip>
I've ended up making it optional. The work item class is now a separate
public class which publishes its parameter array, ID, priority and
whether or not parameters are preserved, allowing those to be set at
construction time. (It also allows the calling code to specify whether
the parameter array should be cloned at construction time or not.)

I'd like to see the code before I comment on this.

An array has to be allocated either way, for the delegate invocation.
I've now added the params modifier so that the creation of the array
doesn't need to occur in client code though.
I was thinking about using the params keyword as well. I don't quite see why
the object context itself needs to be an array - I assume you have a
different purpose in mind than I do.
One thing neither of us noticed when talking about the cancelling and
prioritising was that Queue doesn't allow you to look at anything other
than the start of the queue, or add things anywhere other than the end.
So I've had to write a RandomAccessQueue collection which is most
efficient at normal queue operations, but allows other access too. I've
written it this morning, but given that I'm ill and it's about 500
lines long, I think I'll need to review it carefully before trusting it
too much. Writing code fast like that is energising but error-prone!


I love the smell of fresh code in the morning :-) Best let it simmer for a
day or two, then add a little seasoning before letting anyone have a taste
of it :-)

I think a random access queue is a useful construct.

Dave


Jul 21 '05 #26

David Levine <no****************@wi.rr.com> wrote:
Having rejigged the implementation for most of the stuff we've talked
about, I've now actually just got exceptions from the events
propagating up. That means I don't have to swallow *any* exceptions,
which is quite nice. I still need to work out exactly what to do if an
exception occurs in the work item and there are no handlers available,
but that's a separable piece of work.
I'm not sure I'm following what's happening there, but that's ok for
now...when you get the code to a point you're happy with it make it
available and I'll take another look at it.


Yup. Should be later today.

Basically the only exceptions I specifically catch are those within the
work items themselves. Anything else can go to the AppDomain's
unhandled exception event.
I remembered something that I missed the first time around - you can
catch
the ThreadAbortException and reset it yourself (assuming you have
sufficient
security).


Mmm... Not entirely sure I should be doing that though.


What I'm thinking is that a callback routine may have a ThreadAbort injected
into it, but since it is a threadpool thread it is really owned by the
threadpool class, so it ought to log it and possibly reset it. Also,
depending on the semantics offered by the class, you may want to save a copy
of the exception object and make it available later, either by someone
requesting a copy of it, or rethrowing the same (or a wrapped) exception
object but on a different thread. This is similar to what the async
callbacks do - when an exception occurs on another thread the exception is
saved. Later, when the code calls EndInvoke the exception is rethrown.


Hmm. That's a possibility. I'm not sure about resetting thread aborts
though - that seems to me the kind of thing I shouldn't be doing, at
least not without being told to. Maybe I could make it an option.
I don't see how it could though. Before calling the delegate, you've
got to make the decision about whether or not you're going to keep the
parameters around. If you are, you've got to keep them until the
delegate has finished executing. You can't make that decision at the
point of time when the work item throws an exception.


I think you are more concerned about this than I am. But wouldn't you have
to save them until the callback had completed running anyway? I think I am
missing something here.


No, you don't need to save them. If you put them into just a local
variable, the JIT can easily notice when nothing else is going to read
them.
<snip>
I've ended up making it optional. The work item class is now a separate
public class which publishes its parameter array, ID, priority and
whether or not parameters are preserved, allowing those to be set at
construction time. (It also allows the calling code to specify whether
the parameter array should be cloned at construction time or not.)


I'd like to see the code before I comment on this.


Understandable :)
An array has to be allocated either way, for the delegate invocation.
I've now added the params modifier so that the creation of the array
doesn't need to occur in client code though.


I was thinking about using the params keyword as well. I don't quite see why
the object context itself needs to be an array - I assume you have a
different purpose in mind than I do.


I'm calling Delegate.Invoke, which uses an object array as its way of
passing parameters. I can't get away from that (unless I special case
specific parameter sets, which feels ugly to me) so I think
params object[] is fine. The only time it's a problem is when you want
to pass null or an object[] as the sole parameter, at which point you
need to cast that to object (otherwise it'll be viewed as the parameter
array itself).
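The null/array caveat can be seen in a small standalone example (Describe is an invented method; only the overload-resolution behaviour matters):

```csharp
using System;

class ParamsDemo
{
    static int Describe(params object[] args)
    {
        return args == null ? -1 : args.Length;
    }

    static void Main()
    {
        object[] data = { "a", "b" };

        // The array *is* the parameter list here: two arguments.
        Console.WriteLine(Describe(data));         // 2

        // Cast to object to pass the array as a single argument.
        Console.WriteLine(Describe((object)data)); // 1

        // A bare null converts to the object[] itself...
        Console.WriteLine(Describe(null));         // -1

        // ...while a cast null is wrapped as a single null argument.
        Console.WriteLine(Describe((object)null)); // 1
    }
}
```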
One thing neither of us noticed when talking about the cancelling and
prioritising was that Queue doesn't allow you to look at anything other
than the start of the queue, or add things anywhere other than the end.
So I've had to write a RandomAccessQueue collection which is most
efficient at normal queue operations, but allows other access too. I've
written it this morning, but given that I'm ill and it's about 500
lines long, I think I'll need to review it carefully before trusting it
too much. Writing code fast like that is energising but error-prone!


I love the smell of fresh code in the morning :-) Best let it simmer for a
day or two, then add a little seasoning before letting anyone have a taste
of it :-)


I'll look over it again today and then publish it. The more eyes that
look at it, the better :)
I think a random access queue is a useful construct.


Likewise. Of course, that means someone else has probably already done
it. Currently the code has a few "TODO: This is too hard to think about
right now" bits, but only where efficiency is concerned. Basically I
should be using Array.Copy to do things in a bulky way where I'm
currently not, but that can improve over time (and it only occurs with
the ThreadPool when you cancel an item or insert one with a different
priority so it ends up in the middle of the queue).

Jul 21 '05 #27

>
I'll look over it again today and then publish it. The more eyes that
look at it, the better :)


Let me know when you have it ready; I'll save all my comments until then.

Dave

Jul 21 '05 #28

David Levine <no****************@wi.rr.com> wrote:
I'll look over it again today and then publish it. The more eyes that
look at it, the better :)


Let me know when you have it ready; I'll save all my comments until then.


It's up now, in fact. I've also added the ability to make worker
threads foreground or background in the same way as thread priorities.
(So you can have threads that are background for most of the time, but
set them to foreground in the BeforeWorkItem if you want, to make sure
the CLR doesn't decide to quit half way through an item.)

Jul 21 '05 #29

It'll take me a few days to get to it...things are crazy at work right now.
I'll post back on this thread when I've had a chance to look at it.

PS: I took a quick glance and there have been some significant changes.


Jul 21 '05 #30

David Levine <no****************@wi.rr.com> wrote:
It'll take me a few days to get to it...things are crazy at work right now.
I'll post back on this thread when I've had a chance to look at it.
Yup, no problem. I appreciate you looking at it at all, whenever that
happens to be :)
PS: I took a quick glance and there have been some significant changes.


Oh yes :) I don't think there's anything significant that we haven't
discussed though, apart from the background thread bit. The main loop
is refactored quite a bit, which should help readability.

Jul 21 '05 #31

I started a new thread on this.


Jul 21 '05 #32
