
File read error win9x winNT

I have been having a very peculiar issue for a long time.

I have an application where multiple clients read from a shared set of
files. When a record is changed, sometimes the Win9x clients still read
the old data (if it was read earlier), and this is causing data
corruption. WinNT clients, including Windows 2000 and XP, do not have this
issue. The program is compiled in VC++, console mode.

I am unable to understand the cause. I flush the files before the read
and still have this issue. The problem is aggravated if the write was
from one Win9x client and the subsequent read is from another Win9x
client: this results in a dirty read.

Sep 22 '06 #1
8 Replies


dosworldguy wrote:
I have been having a very peculiar issue for a long time.

I have an application where multiple clients read from a shared set of
files. When a record is changed, sometimes the Win9x clients still read
the old data (if it was read earlier), and this is causing data
corruption. WinNT clients, including Windows 2000 and XP, do not have this
issue. The program is compiled in VC++, console mode.

I am unable to understand the cause. I flush the files before the read
and still have this issue. The problem is aggravated if the write was
from one Win9x client and the subsequent read is from another Win9x
client: this results in a dirty read.
fread can cache data. That means if you fread one record, the system
might actually read three and a half records, caching the remaining 2.5
records.

Flushing input streams is undefined behavior.

Your question sounds Windows-specific anyway; ask in a Windows-related
programming group.
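
A minimal sketch of one mitigation, assuming stdio and an invented
record layout (RECSIZE and the function names here are illustrative):
turning the stream buffer off makes each fread() go to the OS instead
of returning read-ahead data, though it does nothing about caches
below stdio.

#include <stdio.h>

#define RECSIZE 128  /* illustrative record size */

/* Open with stdio buffering disabled so each fread() asks the OS
   for fresh data instead of returning bytes stdio read ahead
   earlier.  setvbuf() must precede any other I/O on the stream.
   This does not defeat OS- or redirector-level caching. */
FILE *open_unbuffered(const char *path)
{
    FILE *fp = fopen(path, "rb");
    if (fp != NULL)
        setvbuf(fp, NULL, _IONBF, 0);
    return fp;
}

int read_record(FILE *fp, long recno, char *buf)
{
    if (fseek(fp, recno * RECSIZE, SEEK_SET) != 0)
        return -1;
    return fread(buf, 1, RECSIZE, fp) == RECSIZE ? 0 : -1;
}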

Sep 22 '06 #2


dosworldguy wrote:
I have been having a very peculiar issue for a long time.

I have an application where multiple clients read from a shared set of
files. When a record is changed, sometimes the Win9x clients still read
the old data (if it was read earlier), and this is causing data
corruption.

This will never work reliably. What if the app writes a record, but
the reader happens to read that area in the middle of the write? No
OS that I know of guarantees that disk I/O is "atomic". The OS is free
to reorder disk writes as it sees fit. For example, if a file is
spread out over disparate areas of the disk, many OSes have an
"elevator" algorithm: they reorder read/write operations to minimize
the distance the disk heads have to travel. It's called the "elevator"
algorithm because the intent is to reorder the operations so the disk
head sweeps back and forth with a minimum number of direction changes.

Also, if you're accessing the file across the network, the disk blocks
may arrive out of order due to typical network protocols.

You need to implement some kind of record or file locking. For a C
program, see the "flock" or "lockf" library routines. If you're
doing this just for Windows, there are some non-portable Windows
file-locking APIs that will probably give better granularity and
performance than the C library functions.
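
For the VC++ setup described, a rough sketch of per-record locking
with the CRT's _locking() routine (record size and layout are
assumptions, and error handling is minimal):

#include <stdio.h>        /* SEEK_SET */
#include <io.h>           /* _locking, _lseek */
#include <sys/locking.h>  /* _LK_LOCK, _LK_UNLCK */

#define RECSIZE 128L      /* illustrative record size */

/* _locking() locks RECSIZE bytes starting at the current file
   position, so seek to the record first.  _LK_LOCK retries for a
   while and then fails, so the return value must be checked. */
int lock_record(int fd, long recno)
{
    if (_lseek(fd, recno * RECSIZE, SEEK_SET) == -1L)
        return -1;
    return _locking(fd, _LK_LOCK, RECSIZE);
}

int unlock_record(int fd, long recno)
{
    if (_lseek(fd, recno * RECSIZE, SEEK_SET) == -1L)
        return -1;
    return _locking(fd, _LK_UNLCK, RECSIZE);
}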

Sep 22 '06 #3

Ancient_Hacker wrote:
dosworldguy wrote:
>I have been having a very peculiar issue for a long time.
>
>I have an application where multiple clients read from a shared set of
>files. When a record is changed, sometimes the Win9x clients still read
>the old data (if it was read earlier), and this is causing data
>corruption.


This will never work reliably. What if the app writes a record, but
the reader happens to read that area in the middle of the write? No
OS that I know of guarantees that disk I/O is "atomic". The OS is free
to reorder disk writes as it sees fit. For example, if a file is
spread out over disparate areas of the disk, many OSes have an
"elevator" algorithm: they reorder read/write operations to minimize
the distance the disk heads have to travel. It's called the "elevator"
algorithm because the intent is to reorder the operations so the disk
head sweeps back and forth with a minimum number of direction changes.
An OS can have full knowledge of all reads/writes by all
applications, and often has a cache that helps keep things
consistent even though the actual disk I/O might be reordered.
Thus it can synchronize reads/writes to the same file.

You do of course need to synchronize readers and writers anyway, but
usually not for reasons that have to do with the actual I/O gymnastics
of the platform.
Sep 22 '06 #4


Nils O. Selåsdal wrote:
An OS can have full knowledge of all reads/writes by all
applications, and often has a cache that helps keep things
consistent even though the actual disk I/O might be reordered.
Thus it can synchronize reads/writes to the same file.
I realize this would be a Good Thing, but do we know for sure that all
the major OSes do in fact guarantee this? Your use of the word "can"
leaves a lot of wiggle room. I'm pretty sure this isn't guaranteed by
some very popular remote-disk protocols, like NFS. The poor OP is
probably looking for something dependable and standard.

Sep 22 '06 #5

dosworldguy wrote:
>
I have been having a very peculiar issue for a long time.

I have an application where multiple clients read from a shared set of
files. When a record is changed, sometimes the Win9x clients still read
the old data (if it was read earlier), and this is causing data
corruption. WinNT clients, including Windows 2000 and XP, do not have this
issue. The program is compiled in VC++, console mode.

I am unable to understand the cause. I flush the files before the read
and still have this issue. The problem is aggravated if the write was
from one Win9x client and the subsequent read is from another Win9x
client: this results in a dirty read.
If you're using stream I/O (i.e., fopen/fread/fwrite/fclose, which are
the only ones discussed here), then you have the problem that the other
processes may have already read, and buffered, the old data before you
write the new data. (And flushing input streams is not defined.)

If you're using the POSIX open/read/write/close functions (which are
not discussed here), then you may need to ask a Windows group about
something called "opportunistic locking", which can cause the OS
itself to locally cache data from a file on another server, causing
updates to be missed. (BTDTGTTS)
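
If opportunistic locking does turn out to be the culprit, one
non-standard thing to try (a Win32 sketch, nothing standard C
offers) is opening the file write-through and unbuffered:

#include <windows.h>

/* Open the shared file so writes go through to the server and the
   local system does minimal caching.  FILE_FLAG_NO_BUFFERING makes
   all reads and writes subject to sector-alignment rules, which
   real code would have to honor; this is a sketch, not a fix. */
HANDLE open_write_through(const char *path)
{
    return CreateFileA(path,
                       GENERIC_READ | GENERIC_WRITE,
                       FILE_SHARE_READ | FILE_SHARE_WRITE,
                       NULL,               /* default security */
                       OPEN_EXISTING,
                       FILE_FLAG_WRITE_THROUGH | FILE_FLAG_NO_BUFFERING,
                       NULL);              /* no template file */
}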

--
+-------------------------+--------------------+-----------------------+
| Kenneth J. Brody | www.hvcomputer.com | #include |
| kenbrody/at\spamcop.net | www.fptech.com | <std_disclaimer.h> |
+-------------------------+--------------------+-----------------------+
Don't e-mail me at: <mailto:Th*************@gmail.com>
Sep 22 '06 #6

Thank you all for your thoughts.

To add:

Locking is implemented via "co-operative" means: when a client wants to
write, it asks the server for permission, gets the "go ahead",
performs the write, and hands the flag back to the server. No other
client can write during that time.

After one write, another client asks for the write token. Now when
this client reads a record written earlier, it gets to read 'dirty
data'.

This is not consistent. Also, open, read & write have been used, not
fopen etc.

I am looking for 'C' help. Instead of flushing, will it help to close
the files and re-open them?
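
For concreteness, the close-and-reopen variant being asked about
might look like the sketch below (the path handling and record size
are made up, and whether a fresh handle actually defeats the Win9x
cache is exactly the open question):

#include <stdio.h>   /* SEEK_SET */
#include <fcntl.h>   /* _O_RDONLY, _O_BINARY */
#include <io.h>      /* _open, _lseek, _read, _close */

#define RECSIZE 128  /* illustrative record size */

/* Reopen the file for every read instead of holding it open, on
   the theory that a fresh handle forces a fresh look at the file. */
int read_record_fresh(const char *path, long recno, char *buf)
{
    int n;
    int fd = _open(path, _O_RDONLY | _O_BINARY);
    if (fd == -1)
        return -1;
    if (_lseek(fd, recno * (long)RECSIZE, SEEK_SET) == -1L) {
        _close(fd);
        return -1;
    }
    n = _read(fd, buf, RECSIZE);
    _close(fd);
    return n == RECSIZE ? 0 : -1;
}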
Kenneth Brody wrote:
If you're using stream I/O (i.e., fopen/fread/fwrite/fclose, which are
the only ones discussed here), then you have the problem that the other
processes may have already read, and buffered, the old data before you
write the new data. (And flushing input streams is not defined.)

If you're using the POSIX open/read/write/close functions (which are
not discussed here), then you may need to ask a Windows group about
something called "opportunistic locking", which can cause the OS
itself to locally cache data from a file on another server, causing
updates to be missed. (BTDTGTTS)

Sep 23 '06 #7


dosworldguy wrote:
Thank you all for your thoughts.

To add:

Locking is implemented via "co-operative" means: when a client wants to
write, it asks the server for permission, gets the "go ahead",
performs the write, and hands the flag back to the server.
How do you ensure that the client's write has happened, completed, and
been flushed to the server's disk? OSes, particularly over network
connections, do extensive file buffering. The client program may have
done a write(), but that just puts the data in a kernel buffer. Even
doing an explicit C close() on the file descriptor does not guarantee
the data is at the server and consistent for other readers or writers.
And file packets can arrive out of order, so even an explicit close
might have to wait until all packets have arrived and been acknowledged
and reverse-acknowledged by the client.

There's nothing in standard C to help with this... You're going to have
to pore through the OS's network file server API; there's almost
certainly a
ReallyReallyFlushThisDataToTheNetworkDiskAndEnsureAllTheDiskCachesGotFlushedAndTheDataGotToDiskAndTheWriteSucceededAndTheFileCloseWentOkayAndNeverGiveMeAPrematureAndOverlyOptimisticAllIsOkay(
    "\\\\Server\\Path\\FileName.Ext" );

Sep 23 '06 #8

>dosworldguy wrote:
>I have an application where multiple clients read from a shared set of
>files. When a record is changed, sometimes the Win9x clients still read
>the old data (if it was read earlier), and this is causing data
>corruption.
In article <11*********************@d34g2000cwd.googlegroups.com>
Ancient_Hacker <gr**@comcast.net> wrote:
>This will never work reliably.
Not in general, no.
>What if the app writes a record, but the reader happens to read
>that area in the middle of the write? No OS that I know of
>guarantees that disk I/O is "atomic".
HRFS, in vxWorks 6.x, does. (Well, the on-disk writes are not
atomic, but at the file I/O level, they *appear* to be, as they
are "transactionalized". The file system guarantees that, even if
the power fails in the middle of a write() call, the write() is
either not-started-at-all or completely-done when the file is
examined later. This does depend on certain [common] disk drive
characteristics; in particular the disk itself must not fail due
to the power-off.)
>You need to implement some kind of record or file locking.
Or some other OS-specific method, certainly. So the solution is
off-topic in comp.lang.c, alas. :-)
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Sep 24 '06 #9
