There's been increasing talk about how much better GPUs are at
math-intensive apps. GPUs today rival supercomputers of 7 years ago.
Example: http://hardware.slashdot.org/article.../11/09/2056233
When the people over at Folding@Home (a protein-folding distributed
app like SETI@Home) moved its processing to the GPU, it increased a
computer's ability to crunch the numbers by 20-40 TIMES.
ATI is releasing C compilers for their GPUs, but that's the last thing
I'd want to have to use.
I don't know enough about how it would work best, but either have the
Framework automatically make use of GPUs (when present) for some of
its math, or maybe have specific assemblies with specific types of
threads you could create that would take advantage of the GPU when
possible. If no GPU is available, the work would still run (slowly) on the CPU.
Processing matrices would be very fast, and a number of libraries and
technologies would be built on that.
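The "use the GPU if present, otherwise fall back to the CPU" idea could look roughly like this. This is only a sketch in Python for illustration; `gpu_backend` is a made-up module name standing in for whatever GPU library the framework would wrap, and the fallback is a plain CPU matrix multiply:

```python
# Sketch of GPU-if-available dispatch with a CPU fallback.
# NOTE: gpu_backend is hypothetical; only the dispatch pattern matters.

def matmul_cpu(a, b):
    """Plain CPU matrix multiply on lists of lists."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

try:
    import gpu_backend  # hypothetical GPU library, not a real package
    def matmul(a, b):
        return gpu_backend.matmul(a, b)  # fast path when a GPU is present
except ImportError:
    matmul = matmul_cpu  # no GPU support found: still works, just slower

if __name__ == "__main__":
    a = [[1, 2], [3, 4]]
    b = [[5, 6], [7, 8]]
    print(matmul(a, b))  # [[19, 22], [43, 50]]
```

The caller never knows or cares which backend ran, which is exactly the kind of abstraction the Framework could provide.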
There are some math-intensive distributed apps I have in mind but
really don't want to be compiling separate libraries intended for
specific GPUs... I want to have that abstracted for me.
With Vista being so graphics intensive, more computers will be shipping
with better GPUs, and programmers should have an easy way to leverage them.
For MS, it could also help give a very serious image to .NET
development in the number crunching arena. Some of the benchmark graphs
could be pretty amazing. :) If it were able to automatically make use
of the GPU where applicable without needing any special coding, then
the whole framework could get a performance boost.