Bytes IT Community

ferror()

Hi,

If I attempt to read past the end of a file, feof() will return a non-zero
value. But can I guarantee that ferror() is 0? In short, will the error
indicator be set in some implementations just because the end-of-file
indicator is set? A search of the standard does not reveal whether eof is
regarded as an "error" (if it does say something, please quote the heading
numbers).

Thanks

Stephen Howe

Nov 14 '05 #1
36 Replies


"Stephen Howe" <NO**********@dial.pipex.com> wrote:
Hi,

If I attempt to read past the end of a file, feof() will return a non-zero
value. But can I guarantee that ferror() is 0? In short, will the error
indicator be set in some implementations just because the end-of-file
indicator is set? A search of the standard does not reveal whether eof is
regarded as an "error" (if it does say something, please quote the heading
numbers).


There is not much point in using feof(). Whatever function you
use to read data will indicate when an end of file or an i/o
error occurs, for example:

while (fread(..., fp) != 0) {
...
}

When the loop exits, you know positively that one or the other
condition has occurred, but generally the only exception you
want to take is for an error:

if (ferror(fp)) {
... /* handle the error */
}

The program simply continues if no error is indicated, because
the end of file condition is expected.

If an error did occur, it makes no difference if feof() is true
or not.
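Floyd's loop-then-ferror() pattern might be fleshed out like this (a minimal sketch; the function name and buffer size are arbitrary choices, not anything from the thread):

```c
#include <stdio.h>

/* Drain a stream to end of file.  Returns 0 on a clean EOF and -1 if
 * a read error occurred.  EOF alone is the expected, non-error case. */
int drain_stream(FILE *fp)
{
    char buf[4096];
    while (fread(buf, 1, sizeof buf, fp) != 0) {
        /* process buf here */
    }
    return ferror(fp) ? -1 : 0;
}
```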

--
Floyd L. Davidson <http://web.newsguy.com/floyd_davidson>
Ukpeagvik (Barrow, Alaska) fl***@barrow.com
Nov 14 '05 #2

In <3f**********************@reading.news.pipex.net > "Stephen Howe" <NO**********@dial.pipex.com> writes:
If I attempt to read past the end of a file, feof() will return a non-zero
value. But can I guarantee that ferror() is 0? In short, will the error
indicator be set in some implementations just because the end-of-file
indicator is set? A search of the standard does not reveal whether eof is
regarded as an "error" (if it does say something, please quote the heading
numbers).


Having reached the end of file is not considered an error.

OTOH, I can't figure out the practical side of your question. Why do you
need any guarantees about the ferror return value once you have reached
the end of the file?

In practice, reaching the end of file is the most common reason for the
failure of an input function. So, if the input function call returns
a failure indication (EOF or a null pointer), you simply call feof() to
figure out whether it was an eof condition or an I/O error. You don't
need to call ferror() at all in this case.

ferror() is usually useful for output streams, when you don't want to
bother checking each and every output call. If, before calling fclose(),
ferror() returns zero, you can assume that everything was fine up to that
point (if you have performed no actions on that stream that would reset
the stream's error indicator).
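Dan's input-side recipe, with feof() called only after the input function has already failed, can be sketched as follows (the function name is illustrative):

```c
#include <stdio.h>

/* Count the characters in a stream.  fgetc() returns EOF both at end
 * of file and on a read error, so feof() disambiguates after the loop:
 * returns the count on a clean EOF, -1 on a genuine I/O error. */
long count_chars(FILE *fp)
{
    long n = 0;
    while (fgetc(fp) != EOF)
        n++;
    return feof(fp) ? n : -1;
}
```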

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #3

Dan Pop wrote:

(snip regarding feof() and ferror())
In practice, reaching the end of file is the most common reason for the
failure of an input function. So, if the input function call returns
a failure indication (EOF or a null pointer), you simply call feof() to
figure out whether it was an eof condition or an I/O error. You don't
need to call ferror() at all in this case.

ferror() is usually useful for output streams, when you don't want to
bother checking each and every output call. If, before calling fclose(),
ferror() returns zero, you can assume that everything was fine up to that
point (if you have performed no actions on that stream that would reset
the stream's error indicator).


Shouldn't you also check the return value of fclose()?

I believe that in the case of buffering external to C, all the data
won't necessarily be pushed all the way to the disk, and a disk full
condition could still occur.

I do believe that only a small fraction of programs correctly check
the return status on output files.
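A write path that does check the return status, including fclose(), might look like this (a sketch; the name and signature are invented for illustration):

```c
#include <stdio.h>

/* Write a buffer to a named file, checking every step including the
 * final fclose(), whose flush can still fail (e.g. on a full disk).
 * Returns 0 on success, -1 on any failure. */
int write_file(const char *name, const char *data, size_t len)
{
    FILE *fp = fopen(name, "wb");
    if (fp == NULL)
        return -1;
    if (fwrite(data, 1, len, fp) != len) {
        fclose(fp);          /* best effort: the write already failed */
        return -1;
    }
    return fclose(fp) == EOF ? -1 : 0;
}
```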

-- glen

Nov 14 '05 #4

> OTOH, I can't figure out the practical side of your question. Why do you
need any guarantees about the ferror return value once you have reached
the end of the file?
Colleague's code.

He has a function which calls various combinations of fread(), fgetc(),
fgets() and does not bother to inspect the return values.
At the end, he calls ferror() to see if an error occurred while reading the
file, and returns a value indicating whether the function was "successful" or
not. I am wondering, if the end-of-file is reached, whether ferror() returns
0 or not. I have to say, it intrinsically does not seem robust. I would be
testing every call to fread(), fgetc(), fgets() in case you have an
unexpectedly truncated file.
ferror() is usually useful for output streams, when you don't want to
bother checking each and every output call. If, before calling fclose(),
ferror() returns zero, you can assume that everything was fine up to that
point (if you have performed no actions on that stream that would reset
the stream's error indicator).


Is that enough? It could be that disk space is tight: ferror() indicates no
error, yet calling fclose() flushes any buffers in effect, and at that point
the C file system suddenly detects there is a problem. You want to flush
first and then see what ferror() returns, or alternatively take note of what
fclose() returns.
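The flush-first ordering Stephen describes could be sketched as (illustrative name; this is one possible arrangement, not a canonical one):

```c
#include <stdio.h>

/* Finish an output stream: flush first, check the error indicator
 * while the stream is still open, then still check fclose(), which
 * can fail independently of the flush.  Returns 0 on success. */
int finish_output(FILE *fp)
{
    int failed = 0;
    if (fflush(fp) == EOF || ferror(fp))
        failed = 1;          /* data never made it out of stdio */
    if (fclose(fp) == EOF)
        failed = 1;          /* the close itself reported an error */
    return failed ? -1 : 0;
}
```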

Stephen Howe
Nov 14 '05 #5

in comp.lang.c i read:
Dan Pop wrote:

ferror() is usually useful for output streams, when you don't want to
bother checking each and every output call. If, before calling
fclose(), ferror() returns zero, you can assume that everything was fine
up to that point (if you have performed no actions on that stream that
would reset the stream's error indicator).


Shouldn't you also check the return value of fclose()?


yes, because there may have been buffering of the stream (by default a
stream referencing a file would be block buffered) and the fclose will
flush that data before it closes the underlying interface, and either of
those actions (fflush or underlying_close()) may encounter an error.

--
a signature
Nov 14 '05 #6

"glen herrmannsfeldt" <ga*@ugcs.caltech.edu> wrote:
Shouldn't you also check the return value of fclose()?

I believe that in the case of buffering external to C, all the
data won't necessarily be pushed all the way to the disk, and
a disk full condition could still occur.

I do believe that only a small fraction of programs correctly
check the return status on output files.


If the fclose function returns an error, which could be due to
disk full, is it reasonable to ask the user to rectify that
condition and then retry the fclose?

ie. something like:

    while (fclose(fp))
    {
        printf("File close failed, press 'r' to retry\n");
        if (getchar() != 'r') break;
    }

Or is the file pointer invalid after the unsuccessful call?

--
Simon.
Nov 14 '05 #7

"Simon Biber" <ne**@ralminNOSPAM.cc> writes:
If the fclose function returns an error, which could be due to
disk full, is it reasonable to ask the user to rectify that
condition and then retry the fclose?


No, you must not do that. See the definition in the Standard:

7.19.5.1 The fclose function
Synopsis
1 #include <stdio.h>
int fclose(FILE *stream);
Description

2 A successful call to the fclose function causes the stream
pointed to by stream to be flushed and the associated file
to be closed. Any unwritten buffered data for the stream are
delivered to the host environment to be written to the file;
any unread buffered data are discarded. Whether or not the
^^^^^^^^^^^^^^^^^^
call succeeds, the stream is disassociated from the file and
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
any buffer set by the setbuf or setvbuf function is
disassociated from the stream (and deallocated if it was
automatically allocated).
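Given that wording, the retry loop above is not an option: the stream is dissociated on the first call, successful or not, so a conforming caller gets exactly one attempt (sketch):

```c
#include <stdio.h>

/* fclose() may be called at most once per stream: whether it succeeds
 * or fails, the FILE pointer must not be used again afterwards. */
int close_once(FILE *fp)
{
    if (fclose(fp) == EOF) {
        /* report the failure; do NOT retry and do NOT touch fp again */
        return -1;
    }
    return 0;
}
```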

--
int main(void){char p[]="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuv wxyz.\
\n",*q="kl BIcNBFr.NKEzjwCIxNJC";int i=sizeof p/2;char *strchr();int putchar(\
);while(*q){i+=strchr(p,*q++)-p;if(i>=(int)sizeof p)i-=sizeof p-1;putchar(p[i]\
);}return 0;}
Nov 14 '05 #8

In <GLLBb.15182$8y1.59614@attbi_s52> glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
Dan Pop wrote:

(snip regarding feof() and ferror())
In practice, reaching the end of file is the most common reason for the
failure of an input function. So, if the input function call returns
a failure indication (EOF or a null pointer), you simply call feof() to
figure out whether it was an eof condition or an I/O error. You don't
need to call ferror() at all in this case.

ferror() is usually useful for output streams, when you don't want to
bother checking each and every output call. If, before calling fclose(),
ferror() returns zero, you can assume that everything was fine up to that
point (if you have performed no actions on that stream that would reset
the stream's error indicator).


Shouldn't you also check the return value of fclose()?


Have I said or implied otherwise?

Of course you *have* to check it, but this has nothing to do with ferror()
which can no longer be used after the stream has been closed.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #9

Dan Pop wrote:
(snip)
ferror() is usually useful for output streams, when you don't want to
bother checking each and every output call. If, before calling fclose(),
ferror() returns zero, you can assume that everything was fine up to that
point (if you have performed no actions on that stream that would reset
the stream's error indicator).
Shouldn't you also check the return value of fclose()?
Have I said or implied otherwise?
Maybe not, but since you didn't mention it, and since the return value
of fclose() is so rarely checked, I thought it was worth adding to
the discussion.
Of course you *have* to check it, but this has nothing to do with ferror()
which can no longer be used after the stream has been closed.


-- glen

Nov 14 '05 #10

On Wed, 10 Dec 2003 20:56:06 GMT, glen herrmannsfeldt
<ga*@ugcs.caltech.edu> wrote:
Dan Pop wrote:

(snip regarding feof() and ferror())
In practice, reaching the end of file is the most common reason for the
failure of an input function. So, if the input function call returns
a failure indication (EOF or a null pointer), you simply call feof() to
figure out whether it was an eof condition or an I/O error. You don't
need to call ferror() at all in this case.
ferror() is usually useful for output streams, when you don't want to
bother checking each and every output call. If, before calling fclose(),
ferror() returns zero, you can assume that everything was fine up to that
point (if you have performed no actions on that stream that would reset
the stream's error indicator).


Shouldn't you also check the return value of fclose()?

Maybe, but I rarely do. I don't know of any error that would be
recoverable at that point. If the fwrite's have been checked, about
the only thing left is inability to flush the buffers. That's better
detected with an fflush, when you still have ways of correcting the
problem.
I believe that in the case of buffering external to C, all the data
won't necessarily be pushed all the way to the disk, and a disk full
condition could still occur.

I do believe that only a small fraction of programs correctly check
the return status on output files.

-- glen


--
Al Balmer
Balmer Consulting
re************************@att.net
Nov 14 '05 #11

Alan Balmer wrote:

On Wed, 10 Dec 2003 20:56:06 GMT, glen herrmannsfeldt
<ga*@ugcs.caltech.edu> wrote:

Shouldn't you also check the return value of fclose()?

Maybe, but I rarely do. I don't know of any error that would be
recoverable at that point. If the fwrite's have been checked, about
the only thing left is inability to flush the buffers. That's better
detected with an fflush, when you still have ways of correcting the
problem.


Even if fflush() succeeds, fclose() can fail.

Whether the fclose() failure is recoverable or not is only
part of the story. You may not be able to do anything about
the error that doomed the fclose(), but you can at least refrain
from making things worse:

stream = fopen("datafile.tmp", "w");
...
fclose (stream);
remove ("datafile.dat");
rename ("datafile.tmp", "datafile.dat");

If the fclose() fails (so the integrity of "datafile.tmp" is
suspect at best), this code merrily clobbers the old and
presumably valid file with the new and possibly broken one.
Better, I think, to detect the fclose() failure, leave both
files intact, and give the user the maximum opportunity to
sort things out.

And yes, this has happened. I've told the tale before
and won't repeat it (go to Google and hunt up the thread
called "reading an Int Array from a Binary file?" if
interested). As for me: Once burned, forever shy.
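A version of the replace-on-success pattern that commits the rename only after fclose() succeeds might be sketched as (invented function name; remove() is the portable standard-C deletion call):

```c
#include <stdio.h>

/* Commit a freshly written temporary file over the real one, but only
 * if fclose() succeeded; on failure both files are left intact so the
 * user can sort things out. */
int commit_file(FILE *stream, const char *tmpname, const char *realname)
{
    if (fclose(stream) == EOF)
        return -1;            /* tmp file is suspect: keep the old one */
    remove(realname);         /* ignore failure: it may not exist yet */
    return rename(tmpname, realname) == 0 ? 0 : -1;
}
```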

--
Er*********@sun.com
Nov 14 '05 #12

On Thu, 11 Dec 2003 15:07:23 -0500, Eric Sosman <Er*********@sun.com>
wrote:
Alan Balmer wrote:

On Wed, 10 Dec 2003 20:56:06 GMT, glen herrmannsfeldt
<ga*@ugcs.caltech.edu> wrote:
>
>Shouldn't you also check the return value of fclose()?
> Maybe, but I rarely do. I don't know of any error that would be
recoverable at that point. If the fwrite's have been checked, about
the only thing left is inability to flush the buffers. That's better
detected with an fflush, when you still have ways of correcting the
problem.


Even if fflush() succeeds, fclose() can fail.

I don't doubt it, especially in c.l.c. :-) However, I can't think of
many such failure modes, and those involve catastrophic hardware
failures. On the implementation I have readily available, any error
condition which is documented for fclose would also be reported on the
fopen, and I suspect that's true of many actual implementations (which
is OT here, of course.)
Whether the fclose() failure is recoverable or not is only
part of the story. You may not be able to do anything about
the error that doomed the fclose(), but you can at least refrain
from making things worse:
The action taken depends on the requirements of the job, the
implementation, the exact error reported, and probably other things,
none of which are standardized. I doubt that there is any standard C
approach to such error recovery which would be very useful.
stream = fopen("datafile.tmp", "w");
...
fclose (stream);
remove ("datafile.dat");
rename ("datafile.tmp", "datafile.dat");

If the fclose() fails (so the integrity of "datafile.tmp" is
suspect at best), this code merrily clobbers the old and
presumably valid file with the new and possibly broken one.
Better, I think, to detect the fclose() failure, leave both
files intact, and give the user the maximum opportunity to
sort things out.

And yes, this has happened. I've told the tale before
and won't repeat it (go to Google and hunt up the thread
called "reading an Int Array from a Binary file?" if
interested). As for me: Once burned, forever shy.


--
Al Balmer
Balmer Consulting
re************************@att.net
Nov 14 '05 #13

On Thu, 11 Dec 2003 16:45:47 -0700, Alan Balmer <al******@att.net>
wrote:
any error
condition which is documented for fclose would also be reported on the
fopen,


(Reference post above) Ouch! Didn't catch that bit of nonsense until
after downloading the post. Should read "any error condition which is
documented for fclose would also be reported on the fflush, " of
course.

--
Al Balmer
Balmer Consulting
re************************@att.net
Nov 14 '05 #14

>On Thu, 11 Dec 2003 15:07:23 -0500, Eric Sosman <Er*********@sun.com>
wrote:
Even if fflush() succeeds, fclose() can fail.

In article <news:bs********************************@4ax.com>
Alan Balmer <al******@spamcop.net> writes:
I don't doubt it, especially in c.l.c. :-) However, I can't think of
many such failure modes, and those involve catastrophic hardware
failures.


It happened in perfectly ordinary "Unixy" code on Sun workstations
using NFS servers, even with all the hardware working perfectly.

Files written to the server would be write-behind cached on the
workstations. On the final fflush()-before-close, the last data
would be transferred from the user process to the client workstation
kernel. The kernel continued to cache the data, not sending any
of it to the NFS server yet.

On the close(), the workstation would realize that it was now
time to send the cached data to the server, which would reject
the write with EDQUOT, "user is over quota".

The close() would return the EDQUOT error to the user process,
alerting the user that his file was incomplete because he was
now out of administrator-assigned disk space.

(This kind of failure generally came as a total shock to the users,
whose programs completely ignored the close() failure and often
followed the failed close() by a rename() operation that wiped
out the original backup file. Now they had plenty of disk space
for the data, but no data to go in it.)
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Nov 14 '05 #15

Right. In general, pay attention to the return value of fclose()
particularly if the file is opened for writing.

Stephen Howe
Nov 14 '05 #16

In <ma3Cb.364935$ao4.1227051@attbi_s51> glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
Dan Pop wrote:
(snip)
ferror() is usually useful for output streams, when you don't want to
bother checking each and every output call. If, before calling fclose(),
ferror() returns zero, you can assume that everything was fine up to that
point (if you have performed no actions on that stream that would reset
the stream's error indicator).

Shouldn't you also check the return value of fclose()?
Have I said or implied otherwise?


Maybe not, but since you didn't mention it, and since the return value
of fclose() is so rarely checked,
               ^^^^^^^^^^^^^^^^^
How do you know?
I thought it was worth adding to the discussion.


Huh? Has it anything whatsoever to do with ferror, which is the topic
of this discussion?

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #17

In <s5********************************@4ax.com> Alan Balmer <al******@att.net> writes:
On Wed, 10 Dec 2003 20:56:06 GMT, glen herrmannsfeldt
<ga*@ugcs.caltech.edu> wrote:
Dan Pop wrote:

(snip regarding feof() and ferror())
In practice, reaching the end of file is the most common reason for the
failure of an input function. So, if the input function call returns
a failure indication (EOF or a null pointer), you simply call feof() to
figure out whether it was an eof condition or an I/O error. You don't
need to call ferror() at all in this case.
ferror() is usually useful for output streams, when you don't want to
bother checking each and every output call. If, before calling fclose(),
ferror() returns zero, you can assume that everything was fine up to that
point (if you have performed no actions on that stream that would reset
the stream's error indicator).


Shouldn't you also check the return value of fclose()?

Maybe, but I rarely do. I don't know of any error that would be
recoverable at that point. If the fwrite's have been checked, about
the only thing left is inability to flush the buffers. That's better
detected with an fflush,


All fflush can tell you is that the data has successfully left the
stdio buffers. It may still be bufferred by the OS. Only fclose can
confirm that it successfully reached its final destination.
when you still have ways of correcting the problem.


Even if it's not recoverable, the user still needs to be informed about
the problem. As it is impossible to predict the consequences of a
failed fclose, it is unacceptable to ignore this possibility.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #18

In <af********************************@4ax.com> Alan Balmer <al******@att.net> writes:
On Thu, 11 Dec 2003 16:45:47 -0700, Alan Balmer <al******@att.net>
wrote:
any error
condition which is documented for fclose would also be reported on the
fopen,


(Reference post above) Ouch! Didn't catch that bit of nonsense until
after downloading the post. Should read "any error condition which is
documented for fclose would also be reported on the fflush, " of
course.


Chapter and verse, please.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #19

In <3f***********************@reading.news.pipex.ne t> "Stephen Howe" <NO**********@dial.pipex.com> writes:
Right. In general, pay attention to the return value of fclose()
particularly if the file is opened for writing.


If it's opened for input only, you couldn't/shouldn't care less.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #20

On 12 Dec 2003 13:35:39 GMT, Da*****@cern.ch (Dan Pop) wrote:
In <af********************************@4ax.com> Alan Balmer <al******@att.net> writes:
On Thu, 11 Dec 2003 16:45:47 -0700, Alan Balmer <al******@att.net>
wrote:
any error
condition which is documented for fclose would also be reported on the
fopen,


(Reference post above) Ouch! Didn't catch that bit of nonsense until
after downloading the post. Should read "any error condition which is
documented for fclose would also be reported on the fflush, " of
course.


Chapter and verse, please.

Dan

Sorry for not repeating the entire post. It was in reference to the
preceding post, which was implementation specific, and clearly
indicated as such.

--
Al Balmer
Balmer Consulting
re************************@att.net
Nov 14 '05 #21

On 12 Dec 2003 05:59:37 GMT, Chris Torek <no****@torek.net> wrote:
On Thu, 11 Dec 2003 15:07:23 -0500, Eric Sosman <Er*********@sun.com>
wrote:
Even if fflush() succeeds, fclose() can fail.

In article <news:bs********************************@4ax.com >
Alan Balmer <al******@spamcop.net> writes:
I don't doubt it, especially in c.l.c. :-) However, I can't think of
many such failure modes, and those involve catastrophic hardware
failures.


It happened in perfectly ordinary "Unixy" code on Sun workstations
using NFS servers, even with all the hardware working perfectly.

Files written to the server would be write-behind cached on the
workstations. On the final fflush()-before-close, the last data
would be transferred from the user process to the client workstation
kernel. The kernel continued to cache the data, not sending any
of it to the NFS server yet.

On the close(), the workstation would realize that it was now
time to send the cached data to the server, which would reject
the write with EDQUOT, "user is over quota".

The close() would return the EDQUOT error to the user process,
alerting the user that his file was incomplete because he was
now out of administrator-assigned disk space.


That must have been fun to track down the first time ;-) I see your
point, though I would be inclined to call this a case of needing the
user process to cover a system design quirk. It means that you can't
trust fflush to actually force data to be written. Could be awkward if
you're relying on it for synchronization with another system and don't
really want to close and reopen the stream every time. But such things
are off-topic here anyway :-)
(This kind of failure generally came as a total shock to the users,
whose programs completely ignored the close() failure and often
followed the failed close() by a rename() operation that wiped
out the original backup file. Now they had plenty of disk space
for the data, but no data to go in it.)


--
Al Balmer
Balmer Consulting
re************************@att.net
Nov 14 '05 #22

On 12 Dec 2003 13:33:58 GMT, Da*****@cern.ch (Dan Pop) wrote:
In <s5********************************@4ax.com> Alan Balmer <al******@att.net> writes:
On Wed, 10 Dec 2003 20:56:06 GMT, glen herrmannsfeldt
<ga*@ugcs.caltech.edu> wrote:
Dan Pop wrote:

(snip regarding feof() and ferror())
<snip>
All fflush can tell you is that the data has successfully left the
stdio buffers. It may still be bufferred by the OS. Only fclose can
confirm that it successfully reached its final destination.
How does fclose confirm that? The description of fclose in this
respect is identical to that of fflush: "Any unwritten buffered data
for the stream are delivered to the host environment to be written to
the file;"
when you still have ways of correcting the problem.
Even if it's not recoverable, the user still needs to be informed about
the problem. As it is impossible to predict the consequences of a
failed fclose, it is unacceptable to ignore this possibility.


That doesn't mean that catching the problem *before* closing the file
is a bad thing, does it?
Dan


--
Al Balmer
Balmer Consulting
re************************@att.net
Nov 14 '05 #23

In <ii********************************@4ax.com> Alan Balmer <al******@att.net> writes:
On 12 Dec 2003 13:33:58 GMT, Da*****@cern.ch (Dan Pop) wrote:
In <s5********************************@4ax.com> Alan Balmer <al******@att.net> writes:
On Wed, 10 Dec 2003 20:56:06 GMT, glen herrmannsfeldt
<ga*@ugcs.caltech.edu> wrote:

Dan Pop wrote:

(snip regarding feof() and ferror())
<snip>

All fflush can tell you is that the data has successfully left the
stdio buffers. It may still be bufferred by the OS. Only fclose can
confirm that it successfully reached its final destination.


How does fclose confirm that? The description of fclose in this
respect is identical to that of fflush: "Any unwritten buffered data
for the stream are delivered to the host environment to be written to
the file;"


fclose() does more than fflush().

2 A successful call to the fclose function causes the stream
pointed to by stream to be flushed and the associated file to be
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
closed.
^^^^^^
This is what allows fclose to detect what fflush may not be able to
detect. Closing the associated file implies flushing all the buffers
associated to that file, even those stdio (and, implicitly, fflush) has
no control upon.
when you still have ways of correcting the problem.


Even if it's not recoverable, the user still needs to be informed about
the problem. As it is impossible to predict the consequences of a
failed fclose, it is unacceptable to ignore this possibility.


That doesn't mean that catching the problem *before* closing the file
is a bad thing, does it?


Have I said or implied otherwise? My point was that this check does NOT
make the fclose check superfluous, not that checking fflush is
superfluous. Checking fflush is still needed, but only its failure has
any relevance, its success does not guarantee that the data has been
properly written to its final destination. Only the success of fclose
provides such a guarantee. I thought that was clear enough, but it
seems that I was overoptimistic.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #24

On 12 Dec 2003 16:20:05 GMT, Da*****@cern.ch (Dan Pop) wrote:
fclose() does more than fflush().

2 A successful call to the fclose function causes the stream
pointed to by stream to be flushed and the associated file to be
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
closed.
^^^^^^
This is what allows fclose to detect what fflush may not be able to
detect. Closing the associated file implies flushing all the buffers
associated to that file, even those stdio (and, implicitly, fflush) has
no control upon.


Sorry, I don't see that. It certainly implies that the file is no
longer associated with the calling program, but I don't know what
prevents the implementation from caching the actual final writes,
directory updates, etc. until it finds a propitious moment. There may
even be more than one system involved, as in the case of a
network-connected file.

--
Al Balmer
Balmer Consulting
re************************@att.net
Nov 14 '05 #25

Chris Torek wrote:
On Thu, 11 Dec 2003 15:07:23 -0500, Eric Sosman wrote:
Even if fflush() succeeds, fclose() can fail.

(snip)
It happened in perfectly ordinary "Unixy" code on Sun workstations
using NFS servers, even with all the hardware working perfectly.

Files written to the server would be write-behind cached on the
workstations. On the final fflush()-before-close, the last data
would be transferred from the user process to the client workstation
kernel. The kernel continued to cache the data, not sending any
of it to the NFS server yet.
The Sun tradition was to get everything to disk before notifying
the program that it was written. I am not sure now about the
cache on the workstation. There were big questions when
disk drives with write behind cache came out. One couldn't be
sure that the data actually made it to disk in the case of
a power failure.
On the close(), the workstation would realize that it was now
time to send the cached data to the server, which would reject
the write with EDQUOT, "user is over quota".
Systems I used didn't run quota, but disk full was always
possible. I did once lose a 10 line file editing it in vi
(when I was new to vi) when the disk was full. It was
apparently a very important 10 line file.
The close() would return the EDQUOT error to the user process,
alerting the user that his file was incomplete because he was
now out of administrator-assigned disk space.

(This kind of failure generally came as a total shock to the users,
whose programs completely ignored the close() failure and often
followed the failed close() by a rename() operation that wiped
out the original backup file. Now they had plenty of disk space
for the data, but no data to go in it.)


Another effect that I saw once was a program that was writing out
a series of numbers that were supposed to be within a certain range.
It seems that the disk got full while writing, but the error was
not noticed. Later, more space became available and writing
continued. Digits from one number were concatenated with digits
from another, resulting in an out of range number.

After that, much more checking was done on writes.

-- glen

Nov 14 '05 #26

In <3l********************************@4ax.com> Alan Balmer <al******@att.net> writes:
On 12 Dec 2003 16:20:05 GMT, Da*****@cern.ch (Dan Pop) wrote:
fclose() does more than fflush().

2 A successful call to the fclose function causes the stream
pointed to by stream to be flushed and the associated file to be
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
closed.
^^^^^^
This is what allows fclose to detect what fflush may not be able to
detect. Closing the associated file implies flushing all the buffers
associated to that file, even those stdio (and, implicitly, fflush) has
no control upon.
Sorry, I don't see that. It certainly implies that the file is no
longer associated with the calling program, but I don't know what
prevents the implementation from caching the actual final writes,
directory updates, etc. until it finds a propitious moment.


If it's delayed, an error may happen when the closing is actually
attempted and there is no way to report it to the fclose() caller.
While the standard says that a successful fclose call causes the file
to be closed.
There may
even be more than one system involved, as in the case of a
network-connected file.


I'm sorry, but I can't find an alternate interpretation for "closing the
associated file", no matter where it is physically located and how many
systems are involved in actually performing this action.

I'm not claiming that each and every implementation actually does what
the standard requires, merely that the requirement is written in
unambiguous terms.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #27

Dan Pop wrote:

(snip)
(someone wrote)
Sorry, I don't see that. It certainly implies that the file is no
longer associated with the calling program, but I don't know what
prevents the implementation from caching the actual final writes,
directory updates, etc. until it finds a propitious moment.
If it's delayed, an error may happen when the closing is actually
attempted and there is no way to report it to the fclose() caller.
While the standard says that a successful fclose call cause the file
to be closed.
(snip)
I'm not claiming that each and every implementation actually does what
the standard requires, merely that the requirement is written in
unambiguous terms.


Traditional NFS was pretty strict on doing things right, though possibly
not fast. In the name of speed, some have added options like
asynchronous writes and soft mounts. Also, some disk drives now buffer
writes internally, without a guarantee that the data actually makes it
to the disk.

In a traditional NFS hard mount the client will wait forever for the
server to reply. Once we had to move a server for some diskless
machines, and it was down an entire weekend. The clients waited
patiently for it to come back, and continued on just fine when it
came back up three days later. Some people are too impatient, though.

-- glen

Nov 14 '05 #28

On 15 Dec 2003 18:37:34 GMT, Da*****@cern.ch (Dan Pop) wrote:

I'm sorry, but I can't find an alternate interpretation for "closing the
associated file", no matter where it is physically located and how many
systems are involved in actually performing this action.


That's the point of my concerns - I can't find (in the standard) *any*
interpretation of "closing the associated file." I don't see that the
standard can require any particular action by the system, any more
than it can guarantee that another process doesn't have the same file
open.

If there is such a guarantee for conforming implementations, I would
be interested, since it would be useful.

--
Al Balmer
Balmer Consulting
re************************@att.net
Nov 14 '05 #29

In <p6********************************@4ax.com> Alan Balmer <al******@att.net> writes:
On 15 Dec 2003 18:37:34 GMT, Da*****@cern.ch (Dan Pop) wrote:

I'm sorry, but I can't find an alternate interpretation for "closing the
associated file", no matter where it is physically located and how many
systems are involved in actually performing this action.
That's the point of my concerns - I can't find (in the standard) *any*
interpretation of "closing the associated file." I don't see that the


Most likely because the semantics of closing a file are not specific to
the C language.
standard can require any particular action by the system, any more
than it can guarantee that another process doesn't have the same file
open.


That's orthogonal to the issue. From the C standard POV there is no
other process. But the same program may (or may not, it's implementation
specific) have more than one stream connected to the same file. Yet,
there is no ambiguity WRT the meaning of closing the file: all the
changes created through that stream that have not yet been physically
applied to the file, must be. There is no point in inventing a set of
semantics for "closing a file" that are specific to the C language.

The important bit for this discussion is that the failure of the file
closing operation needs to be reported to the fclose caller. If fclose
reports success, the changes have been successfully applied to the
physical file (mainly because a later failure in the process can no longer
be reported).

Again, I'm not claiming that all implementations are behaving as
specified. Anyone familiar with the umount command on "slow"
output devices under Linux knows what I'm talking about. Some OS's do
trade the semantics of the file closing operation for increased I/O
speed, rendering the I/O system faster, but less reliable. There is also
the issue of the write caching performed by certain disks, behind the
back of the OS.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #30

> >Right. In general, pay attention to the return value of fclose()
particularly if the file is opened for writing.


If it's opened for input only, you couldn't/shouldn't care less.


That says nothing.
Being pedantic, you should check the return value of every call
to every function in <stdio.h>, even if the file is opened for input only.

For all I know, the OS filing system may be up the creek and the return
value of fclose() might pick this up.
One can never tell.
Better err on the side of caution.

Stephen Howe
Nov 14 '05 #31

Stephen Howe <NO**********@dial.pipex.com> spoke thus:
Being pedantic, you should check the return value of every call
to every function in <stdio.h>, even if the file is opened for input only.


I doubt you check the return value of printf - if you do, I'm glad I
don't have to read your code...

--
Christopher Benson-Manica | I *should* know what I'm talking about - if I
ataru(at)cyberspace.org | don't, I need to know. Flames welcome.
Nov 14 '05 #32

Christopher Benson-Manica wrote:

Stephen Howe <NO**********@dial.pipex.com> spoke thus:
Being pedantic, you should check the return value of every call
to every function in <stdio.h>, even if the file is opened for input only.


I doubt you check the return value of printf - if you do, I'm glad I
don't have to read your code...


#include <stdio.h>
int main(void) {
if (printf("Hello, world!\n") != 14) {
if (fprintf(stderr, "printf failed!\n") != 15) {
if (fprintf(stderr, "fprintf failed!\n") != 16) {
...

--
Er*********@sun.com
Nov 14 '05 #33

Eric Sosman <Er*********@sun.com> writes:
Christopher Benson-Manica wrote:

Stephen Howe <NO**********@dial.pipex.com> spoke thus:
Being pedantic, you should check the return value of every call
to every function in <stdio.h>, even if the file is opened for input only.


I doubt you check the return value of printf - if you do, I'm glad I
don't have to read your code...


#include <stdio.h>
int main(void) {
if (printf("Hello, world!\n") != 14) {
if (fprintf(stderr, "printf failed!\n") != 15) {
if (fprintf(stderr, "fprintf failed!\n") != 16) {
...


Harumph. You call that error checking?

#include <stdio.h>
#include <stdlib.h>

int main(void) {
if (printf("Hello, world!\n") != 14) {
if (fprintf(stderr, "printf failed!\n") != 15) {
if (fprintf(stderr, "fprintf failed!\n") != 16) {
exit(EXIT_FAILURE);
if (fprintf(stderr,
"exit(EXIT_FAILURE) failed!!\n") != 28) {
abort();
if (fprintf(stderr, "abort() failed!!\n") != 17) {
/* ... */
}
}
}
}
}
exit(EXIT_SUCCESS);
if (fprintf(stderr, "exit(EXIT_SUCCESS) failed!!\n") != 28) {
abort();
if (fprintf(stderr, "abort() failed!!\n") != 17) {
/* ... */
}
}
}

(Filling in the "/* ... */"s is left as an exercise.)

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://www.sdsc.edu/~kst>
Schroedinger does Shakespeare: "To be *and* not to be"
(Note new e-mail address)
Nov 14 '05 #34

Christopher Benson-Manica wrote:
Stephen Howe <NO**********@dial.pipex.com> spoke thus:
Being pedantic, you should check the return value of every call
to every function in <stdio.h>, even if the file is opened for input only.


I doubt you check the return value of printf - if you do, I'm glad
I don't have to read your code...


Here is an example. There are easier ways to do this, but notice
the use of the return from sprintf.

/* format doubles and align output */
/* Public domain, by C.B. Falconer */

#include <stdio.h>

#define dformat(r, d, f) fdformat(stdout, r, d, f)

/* output r in field with fpart digits after dp */
/* At least 1 blank before and after the output */
/* Returns neg on param error, else field used */
/* Allows for exponents from -999 to +999. */
/* Too small fields are automatically expanded */
int fdformat(FILE *fp, double r, int fpart, int field)
{
#define CPMAX 100
char cp[CPMAX];
int n, spacebefore, spaceafter, minchars;

/* Protect against evil arguments */
if (fpart < 1) fpart = 1;
if (r < 0.0) minchars = 9;
else minchars = 8;
if (field < (fpart + minchars)) field = fpart + minchars;
if (field >= CPMAX) return -1;

/* Try the effect of "%.*g" and "%.*e" below */
n = sprintf(cp, "%.*e", fpart, r);
if (n < 0) return n;
spacebefore = field - minchars - fpart;
spaceafter = field - spacebefore - n;
return fprintf(fp, "%*c%s%*c",
spacebefore, ' ', cp, spaceafter, ' ');
} /* fdformat */

/* --------------- */

void testit(double r, int places, int field)
{
/* Note use of side effect of calling dformat */
printf(", %d (places=%d, field=%d)\n",
dformat(r, places, field), places, field);
} /* testit */

/* --------------- */

int main(void)
{
size_t i;
double arr[] = { 413.12e+092,
257.90e+102,
257.9011e-103,
43.67e+099,
43.667e-99,
1.0, 0.0};

for (i = 0; i < ((sizeof arr) / (sizeof arr[0])); i++)
testit(arr[i], 2, 12);
for (i = 0; i < ((sizeof arr) / (sizeof arr[0])); i++)
testit(-arr[i], 2, 12);
for (i = 0; i < ((sizeof arr) / (sizeof arr[0])); i++)
testit(arr[i], 3, 12);
for (i = 0; i < ((sizeof arr) / (sizeof arr[0])); i++)
testit(arr[i], 3, 2);
for (i = 0; i < ((sizeof arr) / (sizeof arr[0])); i++)
testit(arr[i], 5, 2);
for (i = 0; i < ((sizeof arr) / (sizeof arr[0])); i++)
testit(-arr[i], 5, 2);
return 0;
} /* main */

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #35

> I doubt you check the return value of printf

I do. stdout could be redirected to disk, which means even printf() could
fill a disk.
Even stderr needs checking as that could be redirected as well.

Stephen Howe
Nov 14 '05 #36

> Harumph. You call that error checking?

#include <stdio.h>
#include <stdlib.h>

int main(void) {
if (printf("Hello, world!\n") != 14) {
if (fprintf(stderr, "printf failed!\n") != 15) {
if (fprintf(stderr, "fprintf failed!\n") != 16) {
exit(EXIT_FAILURE);
if (fprintf(stderr,
"exit(EXIT_FAILURE) failed!!\n") != 28) {
abort();
if (fprintf(stderr, "abort() failed!!\n") != 17) {
/* ... */
}
}
}
}
}
exit(EXIT_SUCCESS);
if (fprintf(stderr, "exit(EXIT_SUCCESS) failed!!\n") != 28) {
abort();
if (fprintf(stderr, "abort() failed!!\n") != 17) {
/* ... */
}
}
}


Much much better.

Stephen Howe
Nov 14 '05 #37
