Alvin,
No, I don't. Major IHVs (and universities) and large software companies
(MS, Oracle, SAP, etc.) have established all sorts of 'joint' programs. A
'joint competence' program is one where both companies share their
knowledge; a 'joint R&D' program is one where they share research and
development efforts.
Willy.
"Alvin Bruney - ASP.NET MVP" <www.lulu.com/owc> wrote in message
news:u0**************@TK2MSFTNGP14.phx.gbl...
| >People in our joint MS competence center are further investigating this.
| You work for MS?
|
| --
| Regards,
| Alvin Bruney [MVP ASP.NET]
|
| [Shameless Author plug]
| The Microsoft Office Web Components Black Book with .NET
| Now Available @
www.lulu.com/owc
| Forthcoming VSTO.NET - Wrox/Wiley 2006
| -------------------------------------------------------
|
|
|
| "Willy Denoyette [MVP]" <wi*************@telenet.be> wrote in message
| news:ed**************@TK2MSFTNGP09.phx.gbl...
| >
| > "Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
| > news:11*****************@g47g2000cwa.googlegroups.com...
| > | Willy Denoyette [MVP] wrote:
| > |
| > | <snip>
| > |
| > | > Ok, we finished a code analysis on different HW, and it turns out (in
| > | > short) that the issue relates to the size of the L2 cache, the
| > | > memory-system hierarchy, and the memory model.
| > | > Processors with 512KB or less do have the issue I mentioned above;
| > | > processors with 1MB and 2MB caches do not have the issue with the
| > | > code as posted, but they show the same behavior with larger Lists,
| > | > and in real-world applications.
| > | > If you change the code line
| > | > List<string> list = new List<string>();
| > | > into:
| > | > List<string> list = new List<string>(Size);
| > | > and run the code, you'll see consistent behavior. The reason is that
| > | > the former line creates a List whose backing array grows to 131072
| > | > entries (null initialized), while the latter creates a List with
| > | > 100000 entries, that is, 131072 * 4 and 100000 * 4 bytes
| > | > respectively. This means that in the first case the L2 cache has no
| > | > room left after the List is initialized and 'filled'; in the latter
| > | > case the cache has ~100000 bytes 'free', which is enough to load the
| > | > delegate's IL and compile it when the second test starts executing.
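As a side note, the growth arithmetic behind those two constructor choices can be sketched in a few lines. This is a Python sketch, not the BCL source; it assumes the default List<T> growth policy (backing array starts at 4 elements and doubles until the element count fits) and 4-byte object references, as on a 32-bit CLR:

```python
def grown_capacity(count, start=4):
    """Backing-array length a default-constructed .NET List<T> ends up
    with after `count` Add() calls: starts at `start` and doubles."""
    cap = start
    while cap < count:
        cap *= 2
    return cap

REF_BYTES = 4  # one object reference on a 32-bit CLR

default_ctor_bytes = grown_capacity(100_000) * REF_BYTES  # new List<string>()
presized_bytes = 100_000 * REF_BYTES                      # new List<string>(Size)
print(grown_capacity(100_000), default_ctor_bytes, presized_bytes)
# -> 131072 524288 400000
```

So the default constructor ends up touching ~512 KB of references, which matches the L2 size of the affected processors, while pre-sizing keeps it to ~390 KB and leaves cache headroom.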
| > |
| > | Wow. Interesting stuff. Doesn't explain the profiler dying, of course
| > | :)
| > |
| > | Of course, in the real world the cost of the iteration is usually not
| > | going to be the limiting factor, but it's fascinating to see the
| > | difference this can make.
| > |
| > | Jon
| > |
| >
| > The VS profiler dying is still under investigation (it's intermittent);
| > all we know is that it's timing-related. People in our joint MS
| > competence center are further investigating this.
| > The in-house profiler issue was a bug in the profiler code which
| > incorrectly handled the anonymous delegate target creation, something
| > that gets done when the List.ForEach method runs for the first time (so
| > not at method JIT time).
| > And that's also the biggest difference between the two:
| >
| > list.ForEach (delegate(string s) { sum += s.Length; });
| > and:
| > list.ForEach (action);
| >
| > In the former case the delegate target is loaded from the IL and JIT
| > compiled when ForEach executes for the first time (quite interesting to
| > follow the code path, really). In the latter case the target is built
| > before 'ForEach' runs (but also after the method gets JIT compiled). The
| > resulting code (after the first iteration) is exactly the same in both
| > cases, BUT the former disturbs the cache (L1 and L2) quite badly in this
| > particular test (where the cache already contains most of the data to be
| > iterated over).
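The structural difference between the two ForEach call shapes can be mimicked outside .NET. This Python sketch only shows when the callable object comes into existence (created inline at the call site vs. built beforehand); the IL-loading and JIT-timing effect described above is CLR-specific and not reproduced here:

```python
words = ["alpha", "beta", "gamma"]

def for_each(items, action):
    # Minimal stand-in for List<T>.ForEach.
    for item in items:
        action(item)

# Former shape: the callable is created inline, at the call site
# (like `list.ForEach(delegate(string s) { sum += s.Length; })`).
lengths = []
for_each(words, lambda s: lengths.append(len(s)))
inline_sum = sum(lengths)

# Latter shape: the callable exists before for_each runs
# (like building an Action<string> first, then `list.ForEach(action)`).
total = 0
def add_len(s):
    global total
    total += len(s)

for_each(words, add_len)
print(inline_sum, total)
# -> 14 14
```

Both shapes compute the same result; the difference Willy measured is purely about when the target comes to life relative to the iteration, not what it does.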
| >
| > Willy.
| >
|