Karl Heinz Buchegger wrote:
> I am not sure if I understand correctly what you wrote.
> Portability with reading on a binary level is a hard thing.
> That is: you write the file on one platform, transport it to
> another platform, and read it there.
That's right. I have more or less concluded that binary files are a dead
loss for everyday use. I once inherited a program with a binary file
format and had to add some fields. The result: entirely new struct
definitions, then converter programs, and so on. Then I ported it to Linux
as well as DOS. Big trouble: the struct padding differed even on the same
processor. But after I converted to a text format, all these troubles
went away (once I had written, once only, bullet-proof CR/LF-recognising
input code).
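The original code isn't shown here, but a minimal sketch of such a line reader in C (function name invented for illustration) might look like this — it accepts LF, CR LF, or a lone CR as the line terminator, so the same input code works on files written under Unix, DOS, or old Mac conventions:

```c
#include <stdio.h>

/* Hypothetical sketch: read one line from fp into buf (capacity n),
 * treating LF, CR LF, or a lone CR as the end of line.
 * Returns buf on success, or NULL at end of file with nothing read. */
char *read_line_any_eol(FILE *fp, char *buf, size_t n)
{
    size_t i = 0;
    int c = EOF;

    while (i + 1 < n && (c = getc(fp)) != EOF) {
        if (c == '\n')               /* Unix line ending */
            break;
        if (c == '\r') {             /* DOS (CR LF) or old Mac (CR) */
            c = getc(fp);
            if (c != '\n' && c != EOF)
                ungetc(c, fp);       /* lone CR: push the next char back */
            break;
        }
        buf[i++] = (char)c;
    }
    if (i == 0 && c == EOF)
        return NULL;                 /* nothing left to read */
    buf[i] = '\0';
    return buf;
}
```

The key point is that the terminator handling lives in exactly one place, so the rest of the program never sees a line-ending character at all.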
I gave the file format a version number. Then, whenever a new version
added fields to the data, I bumped the number. Output always wrote the
latest version, but the input code had this form:
    read original fields
    if (version > 1) { read some more fields } else set sensible defaults
    if (version > 2) { read yet more... } else set defaults...
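Sketched out in C, with hypothetical field names (the original fields aren't given), the pattern above might look like this — defaults are assigned first, then overwritten only for files new enough to carry the field:

```c
#include <stdio.h>

/* Hypothetical record; field names invented for illustration. */
struct record {
    int width, height;   /* present since version 1 */
    int depth;           /* added in version 2 */
    int colour;          /* added in version 3 */
};

/* Read one record from a text file whose header gave `version`.
 * Fields missing from older files get sensible defaults.
 * Returns 1 on success, 0 on a malformed file. */
int read_record(FILE *fp, int version, struct record *r)
{
    if (fscanf(fp, "%d %d", &r->width, &r->height) != 2)
        return 0;                    /* original (version 1) fields */

    r->depth  = 1;                   /* sensible defaults first */
    r->colour = 0;

    if (version > 1 && fscanf(fp, "%d", &r->depth) != 1)
        return 0;                    /* field added in version 2 */
    if (version > 2 && fscanf(fp, "%d", &r->colour) != 1)
        return 0;                    /* field added in version 3 */
    return 1;
}
```

Because the version checks are cumulative, one reader loads every format ever written, and each release only appends one more `if`.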
I am now up to about version 20 and have an app that is almost
unrecognisable from the original, but it still reads and correctly
loads the very first file format (not counting the binary ones, that is!).
I had to delete some fields in one version. Even that can be sensibly
handled in an obvious way.
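One obvious way to handle a deleted field (this is my illustration, not necessarily the author's exact approach) is to keep consuming it for the versions that contained it and simply discard the value:

```c
#include <stdio.h>

/* Hypothetical: a `weight` field existed from version 2 up to
 * version 4 and was dropped in version 5. Files from those old
 * versions still contain it, so the reader must consume it, then
 * throw the value away. Returns 1 on success, 0 on a malformed file. */
int skip_dropped_fields(FILE *fp, int version)
{
    int dummy;
    if (version > 1 && version < 5) {    /* weight: added v2, removed v5 */
        if (fscanf(fp, "%d", &dummy) != 1)
            return 0;
    }
    return 1;
}
```

The reader thus stays in step with every historical layout without any per-file conversion step.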
--
Ron House
ho***@usq.edu.au http://www.sci.usq.edu.au/staff/house