In article <th*********************************@4ax.com>,
Mark McIntyre <ma**********@spamcop.net> wrote:
>On Tue, 25 Jul 2006 15:37:21 +0000 (UTC), in comp.lang.c ,
>ro******@ibd.nrc-cnrc.gc.ca (Walter Roberson) wrote:
>>It so happens that your statement is incorrect on some operating
>>systems, including SGI IRIX and including Linux.
>... and Windows. However typically unless you have a UPS or other
>battery supply, you get about a hundredth of a second to do anything
>in any OS...
Tsk, overly specific on the timelines ;-)
Some OSes only run on systems which are designed to provide longer
usable emergency power-shutdown times.
Besides, 1/100th of a second might well be long -enough- on
systems with battery-backed (or capacitor-backed) hard disk controllers,
or systems with battery-backed SRAM or other writable non-volatile memory.
At (say) 1 Gops (10^9 core operations per second), 1/100th of a second
is enough for 10 million core operations. Even with the usual
"drop by a factor of 10 for each level of cache", one can expect
several tens of thousands of real memory operations, and the speed
of your permanent storage (e.g., disk drives) typically is
the limiting factor.
It's been some time since I had a disk soft-corrupted by a
power failure, and much longer still since the last hard corruption
due to power failure: systems these days typically last long enough for
a drive cache flush. But brownouts (not bad enough to be
detected as power failures) have caused me the occasional trouble
within the last year.
--
"law -- it's a commodity"
-- Andrew Ryan (The Globe and Mail, 2005/11/26)