"James" <sn****@jamison.snort> wrote in message
news:3f********@clarion.carno.net.au...
> What's the point of calculating MFLOPS? What does it show us when we
> normally can't reach the theoretical maximum number of floating-point
> operations anyway? What can we really tell from such a measure? I'm sure
> two different compilers (doing different things) would give different
> results... so can we then predict how good or bad one is from the other?
The MFLOPS rating (millions of floating-point operations per second) is an
old measure of supercomputer performance, dating from the days when
supercomputers ran at 50 MHz.
Many operating systems let you measure the CPU time of an individual thread.
That's how it was normally done, although I occasionally saw MFLOPS ratings
based on "wall clock time" instead. In that case you'd do your best to keep
other threads from running during the test.