Lady WeaknessForCats,
Thank you for getting back to me. I appreciate your deep and thoughtful response. If I came across as rude, please accept my apologies; that was not my intent.
I had looked at some articles of that type, and couldn't decide whether they would give me useful information, or a false (or useless) statistic for what I am trying to measure. I was also trying to avoid reinventing the wheel by asking whether other folks had solid information or knowledge.
TOO EXPENSIVE: I estimate a 10,000 write-cycle life-span for each bit of flash; the computed aggregate write rate should not cause failure over an operational lifetime of ten years. If 10,000 cycles is exceeded, what is the estimated write count, and what is the expected system life-span? So, I do not have an exact number, but I have a trivially expressed goal (10,000 write-cycles). My baseline threshold is likely a factor of ten too low (100,000 write-cycles), but I don't know the exact chip being used (yet). As the flash is accessed through a JFFS2 file-system, its wear-levelling (and any error-control mechanisms underneath) may significantly extend the lifespan.
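As a sanity check, the 10,000-cycle budget reduces to a simple rate calculation. This is only a back-of-envelope sketch; the block count and the assumption of ideal wear-levelling are mine, not measured values:

```python
# Back-of-envelope flash endurance budget (illustrative numbers only).
ENDURANCE_CYCLES = 10_000      # assumed per-block write-cycle rating
LIFETIME_YEARS = 10
DAYS = LIFETIME_YEARS * 365

# Maximum full rewrites of any one erase block per day before the
# endurance budget is exhausted within the operational lifetime.
rewrites_per_day = ENDURANCE_CYCLES / DAYS
print(f"max rewrites/block/day: {rewrites_per_day:.2f}")  # ~2.74

# With ideal wear-levelling across N blocks, the whole-device budget
# scales roughly linearly (N is a hypothetical figure here).
blocks = 1024
device_rewrites_per_day = rewrites_per_day * blocks
print(f"max levelled rewrites/day: {device_rewrites_per_day:.0f}")
```

So even the pessimistic 10,000-cycle figure allows a few full rewrites of every block per day, once wear-levelling is spreading the load.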
Seeing how people are pulling that from
is quite helpful, as I got lost reading through code and reports without labelled information.
Obviously I need to perform some more interesting and detailed measurements, and see if there is useful information which I can capture.
Your observations are germane:
- Bug-free is never guaranteed, but I do try my best. Instrumentation that reports incorrect values, or analysis of the wrong sort, can cause incredible long-term problems.
- Virtual machines cannot be faster than the host, period.
- Yes, I do minimise calls, although there are other processes that I do not have control over while performing experiments, so I will have to control for them.
- I am currently stuck with SQLite, unless I determine that there is a disaster waiting to happen (e.g. destruction of flash).
The current model is that a specific group of transactions will cause an implicit database backup. Other transactions, which do not write critical information, do not cause the backup.
As to running the database from the flash instead of taking periodic backups: it may actually write less data than copying the (approx.) 50 KiB file on every critical transaction. This is a solution I am considering, as it removes a great deal of (unnecessary) system complexity.
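The intuition is easy to put numbers on. Assuming SQLite's default 4 KiB page size and a hypothetical two dirty pages per critical transaction (both assumptions, and journal/WAL overhead is ignored here), the full-file copy is the heavier write:

```python
# Rough write-volume comparison per critical transaction (illustrative).
FILE_SIZE_KIB = 50     # size of the database file being copied
PAGE_SIZE_KIB = 4      # SQLite's default page size (assumption)
pages_touched = 2      # hypothetical pages dirtied by one transaction

backup_bytes = FILE_SIZE_KIB * 1024
inplace_bytes = pages_touched * PAGE_SIZE_KIB * 1024
ratio = backup_bytes // inplace_bytes
print(f"backup writes ~{ratio}x more per transaction")
```

The ratio obviously shifts once journalling is accounted for, but the full-file copy scales with file size while in-place writes scale with pages touched, so the gap widens as the database grows.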
I have also considered a backup process that monitors a flag bit for backup-required, and watches the file to determine whether it is active or not. There are some ugly complexity issues involved with this sort of solution, and I will discuss them with you if I am unable to figure out the necessary IOCTL and locking.
As to the database use: I'm on an embedded system with millions of customer units, so power-offs, unintended reboots, and other failures are a fact of life. I do not leave customers in an unrecoverable state.
If you'd like, we can discuss the data collection and retention in detail, as I am not averse to good suggestions.
I will look into your suggested statistics collection and get back to you tomorrow, once I have a chance to work with them.
Many Thanks,
Oralloy