file system full errors, proper time to check

Suppose I'm using stdio calls to write to a disk file.
One possible error condition is no space on file system
or even (in unix environment) a ulimit of 0 bytes.

Which calls would be expected to return error codes for
such conditions?

Would it happen on the writes (fwrite, fprintf, fputs ...),
or as they actually write to io buffers, might the errors
not occur until the data is flushed or the file is closed?

I suspect the answer is "any of them" as the io buffers may
be flushed because a write fills them.

--
X the X to email me

Nov 13 '05 #1
The fwrite function is supposed to return an error immediately if no space is available for the
write.
All OS caching (buffers, etc.) should happen afterwards. Just test whether fwrite returns what it
should (the number of elements written).

Of course all fopen calls should be tested, but I suppose you do this already.
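
For illustration, a minimal sketch of that kind of checking (the file name and buffer are
placeholders; note that standard C does not require fwrite to set errno, so the perror
messages are only guaranteed to be meaningful on systems, such as unix, where it does):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    static char buf[4096];                 /* placeholder data */
    FILE *fp = fopen("out.dat", "wb");     /* placeholder file name */

    if (fp == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    if (fwrite(buf, 1, sizeof buf, fp) != sizeof buf) {
        perror("fwrite");                  /* short count means an error */
        fclose(fp);
        return EXIT_FAILURE;
    }
    if (fclose(fp) == EOF) {               /* errors may also show up here */
        perror("fclose");
        return EXIT_FAILURE;
    }
    return 0;
}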
"Jon LaBadie" <jg*****@comcast.net> wrote in message news:3F************@comcast.net...
Suppose I'm using stdio calls to write to a disk file.
One possible error condition is no space on file system
or even (in unix environment) a ulimit of 0 bytes.

Which calls would be expected to return error codes for
such conditions?

Would it happen on the writes (fwrite, fprintf, fputs ...),
or as they actually write to io buffers, might the errors
not occur until the data is flushed or the file is closed?

I suspect the answer is "any of them" as the io buffers may
be flushed because a write fills them.

--
X the X to email me

Nov 13 '05 #2
In <3F************@comcast.net> Jon LaBadie <jg*****@comcast.net> writes:
Suppose I'm using stdio calls to write to a disk file.
One possible error condition is no space on file system
or even (in unix environment) a ulimit of 0 bytes.

Which calls would be expected to return error codes for
such conditions?

Would it happen on the writes (fwrite, fprintf, fputs ...),
or as they actually write to io buffers, might the errors
not occur until the data is flushed or the file is closed?

I suspect the answer is "any of them" as the io buffers may
be flushed because a write fills them.


Correct. If you want to check it before writing too much to the file,
call fflush() after your first output call and see if it succeeds.

In theory, the test is not 100% relevant, because fflush is only required
to deliver the bytes in the buffer to the OS, not to force the OS to
actually write them to a file (they may simply be moved from the stdio
buffer to the OS buffer and still not reach the disk). But that's the
best you can do, short of also trying to close the file.
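
A small sketch of that approach (the message written and the function name are only
examples; fp is assumed to be a stream already opened for output):

#include <stdio.h>

/* Write one line and flush immediately, so a full file system (or a
   ulimit of 0) is detected up front rather than only at fclose().
   Returns 0 on success, -1 on error. */
int first_write_check(FILE *fp)
{
    if (fprintf(fp, "first record\n") < 0 || fflush(fp) == EOF) {
        perror("initial write/flush");
        return -1;
    }
    return 0;
}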

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 13 '05 #3
Dan Pop wrote:
In <3F************@comcast.net> Jon LaBadie <jg*****@comcast.net> writes:
Suppose I'm using stdio calls to write to a disk file.
One possible error condition is no space on file system
or even (in unix environment) a ulimit of 0 bytes.

Which calls would be expected to return error codes for
such conditions?

Would it happen on the writes (fwrite, fprintf, fputs ...),
or as they actually write to io buffers, might the errors
not occur until the data is flushed or the file is closed?

I suspect the answer is "any of them" as the io buffers may
be flushed because a write fills them.


Correct. If you want to check it before writing too much to the file,
call fflush() after your first output call and see if it succeeds.

In theory, the test is not 100% relevant, because fflush is only required
to deliver the bytes in the buffer to the OS, not to force the OS to
actually write them to a file (they may simply be moved from the stdio
buffer to the OS buffer and still not reach the disk). But that's the
best you can do, short of also trying to close the file.


How can closing the file give you any more guarantees than flushing the
stdio buffers? I think even after a successful fclose, there is no
guarantee that the data has made it to disk (or tape, filing cabinet or
whatever).
Tobias.

--
unix http://www.faqs.org/faqs/by-newsgrou...rogrammer.html
clc http://www.eskimo.com/~scs/C-faq/top.html
fclc (french): http://www.isty-info.uvsq.fr/~rumeau/fclc/
Nov 13 '05 #4

"Tobias Oed" <to****@physics.odu.edu> wrote in message
news:bf************@ID-97389.news.uni-berlin.de...
Dan Pop wrote:
In <3F************@comcast.net> Jon LaBadie <jg*****@comcast.net> writes:
Suppose I'm using stdio calls to write to a disk file.
One possible error condition is no space on file system
or even (in unix environment) a ulimit of 0 bytes.

Which calls would be expected to return error codes for
such conditions?

Would it happen on the writes (fwrite, fprintf, fputs ...),
or as they actually write to io buffers, might the errors
not occur until the data is flushed or the file is closed?

I suspect the answer is "any of them" as the io buffers may
be flushed because a write fills them.


Correct. If you want to check it before writing too much to the file,
call fflush() after your first output call and see if it succeeds.

In theory, the test is not 100% relevant, because fflush is only required
to deliver the bytes in the buffer to the OS, not to force the OS to
actually write them to a file (they may simply be moved from the stdio
buffer to the OS buffer and still not reach the disk). But that's the
best you can do, short of also trying to close the file.


How can closing the file give you any more guarantees than flushing the
stdio buffers? I think even after a successful fclose, there is no
guarantee that the data has made it to disk (or tape, filing cabinet or
whatever).


First fclose() will fflush() data that may still be in stdio buffers. Then
it should do whatever the OS needs done to close the file and verify that
the data was written out.

Yes, if the OS doesn't report it on close you are stuck. Most do, though,
just for this reason.
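
Something along these lines, assuming fp is an open output stream (sketch only, helper
name made up):

#include <stdio.h>

/* fclose() performs the final flush, so its return value must be
   checked as well -- otherwise a "no space" error on that last flush
   goes unnoticed.  Returns 0 on success, -1 on error. */
int close_checked(FILE *fp)
{
    if (fclose(fp) == EOF) {
        perror("fclose");
        return -1;      /* the data may not have made it out completely */
    }
    return 0;
}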

Lately there have been some attempts at disk drives that cache internally
and report the write as succeeding before actually writing it. There have been
questions about doing that under OSes that believe the data really has been written.

-- glen
Nov 13 '05 #5
>>Suppose I'm using stdio calls to write to a disk file.
>>One possible error condition is no space on file system
>>or even (in unix environment) a ulimit of 0 bytes.
[...]
How can closing the file give you any more guarantees than flushing the
stdio buffers? I think even after a successful fclose, there is no
guarantee that the data has made it to disk (or tape, filing cabinet or
whatever).


First fclose() will fflush() data that may still be in stdio buffers. Then
it should do whatever the OS needs done to close the file and verify that
the data was written out.

Yes, if the OS doesn't report it on close you are stuck. Most do, though,
just for this reason.

Lately there have been some attempts at disk drives that cache internally
and report the write as succeeding before actually writing it. There have been
questions about doing that under OSes that believe the data really has been written.


I doubt very much that a disk drive will on its own defer *ALLOCATION*
of disk space for a file write. I'm not so sure an OS will either.
It has to keep track of what to do with this data that's going to be
written, and in many file systems it's not necessary to go to the disk
to find a place on the disk to put it. I don't see the advantage of
deferring *ALLOCATION* of a disk block. (Deferring WRITING to the
disk block, yes, there's an advantage, it may be quickly updated
again).

I have heard claims by some disk manufacturers that their data-buffering
drives will, in case of a power failure, manage to stay alive long
enough to get all the data onto the disk, even if they generated
power from the spindle slowing down, and even if they had to map
in alternate sectors for ones that are discovered bad. (The problem
here is you could possibly run out of alternate sectors, at which
point, it's time to get a new drive.) I do not know if this is
true. The bigger the buffer, the more unlikely the claim is accurate.
In any case, a good UPS is a worthwhile investment if you're really
worried about the data getting on to the disk. Also, consider hardware
RAID sets.

Gordon L. Burditt
Nov 13 '05 #6
In <bf**********@news-reader6.wanadoo.fr> "jacob navia" <ja*********@jacob.remcomp.fr> writes:
The fwrite function is supposed to return an error immediately if no space is available for the
write.
All OS caching (buffers, etc) should happen afterwards.


Chapter and verse, please.

The fwrite function is supposed to behave as if all the characters have
been written using the fputc() function, which is subject to the usual
stdio buffering.

Furthermore, the stdio functions are merely required to deliver the data
to the OS (aka the execution environment in C standardese), no guarantee
is made about what the OS is going to do with the data. If it chooses to
cache it, the C standard does nothing to prevent it.

Only if the file has been successfully closed do you have some (relatively
vague) guarantees from the standard.
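
A small sketch of what that means in practice (fp is assumed to be an open output
stream; the function name is made up). ferror() reports only errors the stdio layer
has already detected, so data still sitting in the buffer can still fail later, at
fflush() or fclose():

#include <stdio.h>

/* Check the stream's error indicator after some buffered output. */
void report_stream_state(FILE *fp)
{
    fprintf(fp, "%d\n", 42);
    fputs("more text\n", fp);

    if (ferror(fp)) {
        fputs("write error detected on stream\n", stderr);
        clearerr(fp);   /* reset the indicator if you intend to retry */
    }
}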

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 13 '05 #7

"Gordon Burditt" <go***********@sneaky.lerctr.org> wrote in message
news:bf********@library1.airnews.net...

(snip)
(someone wrote)
First fclose() will fflush() data that may still be in stdio buffers. Then
it should do whatever the OS needs done to close the file and verify that
the data was written out.

Yes, if the OS doesn't report it on close you are stuck. Most do, though,
just for this reason.

Lately there have been some attempts at disk drives that cache internally
and report the write as succeeding before actually writing it. There have been
questions about doing that under OSes that believe the data really has been written.
I doubt very much that a disk drive will on its own defer *ALLOCATION*
of disk space for a file write. I'm not so sure an OS will either.
It has to keep track of what to do with this data that's going to be
written, and in many file systems it's not necessary to go to the disk
to find a place on the disk to put it. I don't see the advantage of
deferring *ALLOCATION* of a disk block. (Deferring WRITING to the
disk block, yes, there's an advantage, it may be quickly updated
again).


Allocation is a strange word to apply to disk drives. Disk drives usually
contain a collection of data blocks which may be overwritten by data
supplied by the OS. The OS is responsible for allocating blocks to files,
and storing the appropriate data on the disk, also in disk blocks, to keep
track of the data on the disk.
I have heard claims by some disk manufacturers that their data-buffering
drives will, in case of a power failure, manage to stay alive long
enough to get all the data onto the disk, even if they generated
power from the spindle slowing down, and even if they had to map
in alternate sectors for ones that are discovered bad. (The problem
here is you could possibly run out of alternate sectors, at which
point, it's time to get a new drive.) I do not know if this is
true. The bigger the buffer, the more unlikely the claim is accurate.
In any case, a good UPS is a worthwhile investment if you're really
worried about the data getting on to the disk. Also, consider hardware
RAID sets.


I suppose one could describe the finding of alternate blocks to replace bad
blocks as allocation.

I don't know how many disk drives do write buffering and then report the
data as written. If they can guarantee to write the data when the disk is
powered down, it should be fine.

-- glen

Nov 13 '05 #8
>First fclose() will fflush() data that may still be in stdio buffers. Then
>it should do whatever the OS needs done to close the file and verify that
>the data was written out.
>
>Yes, if the OS doesn't report it on close you are stuck. Most do, though,
>just for this reason.
>
>Lately there have been some attempts at disk drives that cache internally
>and report the write as succeeding before actually writing it. There have been
>questions about doing that under OSes that believe the data really has been written.
I doubt very much that a disk drive will on its own defer *ALLOCATION*
of disk space for a file write. I'm not so sure an OS will either.
It has to keep track of what to do with this data that's going to be
written, and in many file systems it's not necessary to go to the disk
to find a place on the disk to put it. I don't see the advantage of
deferring *ALLOCATION* of a disk block. (Deferring WRITING to the
disk block, yes, there's an advantage, it may be quickly updated
again).


Allocation is a strange word to apply to disk drives.


You're right. The subject line refers to checking for an out of
disk space condition. The question is whether the error reporting
(or discovery) will be delayed by the OS or the disk drive until
after fclose(). I claim it is unlikely to be an issue, as deferring
*ALLOCATION* doesn't buy you much - on the drive *OR IN THE OS
EITHER*. On the other hand, deferring reporting of *WRITE ERRORS*
is an issue.

(I do, however, know of an implementation with deferred *DE*allocation,
so you can delete a 1-gigabyte file, then try to write an 8-kilobyte
file, and run out of space. The space may not come back until a
couple of minutes after the program that did the deletion terminates.
FreeBSD with the "softupdates" option on the particular filesystem
does this. "softupdates" seems to do an excellent job of leaving
things in reasonable shape after a crash (usually power failure or
tripped-over cable))
Disk drives usually
contain a collection of data blocks which may be overwritten by data
supplied by the OS. The OS is responsible for allocating blocks to files,
and storing the appropriate data on the disk, also in disk blocks, to keep
track of the data on the disk.
I have heard claims by some disk manufacturers that their data-buffering
drives will, in case of a power failure, manage to stay alive long
enough to get all the data onto the disk, even if they generated
power from the spindle slowing down, and even if they had to map
in alternate sectors for ones that are discovered bad. (The problem
here is you could possibly run out of alternate sectors, at which
point, it's time to get a new drive.) I do not know if this is
true. The bigger the buffer, the more unlikely the claim is accurate.
In any case, a good UPS is a worthwhile investment if you're really
worried about the data getting on to the disk. Also, consider hardware
RAID sets.
I suppose one could describe the finding of alternate blocks to replace bad
blocks as allocation.


I would describe the failure to find alternate blocks (because they've
been used up) AFTER the power fails and the drive is trying to save
everything it can as a "write error with deferred (or nonexistent)
reporting".
I don't know how many disk drives do write buffering and then report the
data as written. If they can guarantee to write the data when the disk is
powered down, it should be fine.


No drive can guarantee to write the data. It can try really hard
(including RAID setups with multiple copies of the data), and try
to give advance warnings if there might be problems. There are a
number of failure modes where a disk drive can suddenly be unable
to ever write (and perhaps never read either) anything again. One
of them is commonly called a "head crash". Another one is where
the drive is physically destroyed by, say, military weapons, nuclear
or not. EMP from nuclear weapons is one scenario where a lot of
electronics may burn out all at once.

Gordon L. Burditt
Nov 13 '05 #9
In article <bf********@library2.airnews.net>
Gordon Burditt <go***********@sneaky.lerctr.org> writes:
You're right. The subject line refers to checking for an out of
disk space condition. The question is whether the error reporting
(or discovery) will be delayed by the OS or the disk drive until
after fclose(). I claim it is unlikely to be an issue, as deferring
*ALLOCATION* doesn't buy you much - on the drive *OR IN THE OS
EITHER*.
Perhaps not; but it does (or did) occur in practice, and surprised
quite a few people in the process, when Sun first began using NFS.
It turns out that if you have a file quota on the file server, and
you go over it, stdio generally reports the EDQUOT error on fclose().

You can get the error slightly earlier by fflush()ing output files,
then fsync()ing the underlying file descriptor, before fclose()ing
them; but this is of course not portable to "non-Unixy" systems,
where there is no fsync().
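
A unix-specific sketch of that sequence (fsync() and fileno() are POSIX, not standard C,
and the helper name here is made up):

#include <stdio.h>
#include <unistd.h>     /* fsync() -- POSIX, not standard C */

/* Flush stdio's buffer to the OS, ask the OS to push it to disk,
   then close.  Returns 0 on success, -1 on any failure. */
int flush_sync_close(FILE *fp)
{
    if (fflush(fp) == EOF)
        return -1;
    if (fsync(fileno(fp)) == -1)
        return -1;
    return (fclose(fp) == EOF) ? -1 : 0;
}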
(I do, however, know of an implementation with deferred *DE*allocation,
so you can delete a 1-gigabyte file, then try to write an 8-kilobyte
file, and run out of space. The space may not come back until a
couple of minutes after the program that did the deletion terminates.
FreeBSD with the "softupdates" option on the particular filesystem
does this.


The soft-update code is not supposed to do this any more. In
particular, if block allocation is about to fail, the file system
is supposed to call into an "execute deferred deallocation" routine
that will restore some free space in such cases.

(I have no idea which versions of FreeBSD have the newer code, but
it is not all *that* "newer". I believe I got the code from Kirk
sometime last year. The UFS2 changes were mostly done by then as
well.)
--
In-Real-Life: Chris Torek, Wind River Systems (BSD engineering)
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://67.40.109.61/torek/index.html (for the moment)
Reading email is like searching for food in the garbage, thanks to spammers.
Nov 13 '05 #10
