Suppose I'm using stdio calls to write to a disk file.
One possible error condition is no space left on the file system,
or even (in a unix environment) a ulimit of 0 bytes.
Which calls would be expected to return error codes for
such conditions?
Would it happen on the writes (fwrite, fprintf, fputs, ...),
or, since they actually write to I/O buffers, might the errors
not occur until the data is flushed or the file is closed?
I suspect the answer is "any of them", as the I/O buffers may
be flushed because a write fills them.
The fwrite function is supposed to return an error immediately if no space is available for the write.
All OS caching (buffers, etc.) should happen afterwards. Just test whether fwrite returns what it should
(the number of elements written).
Of course all fopen calls should be tested, but I suppose you do this anyway already.
In <3F************@comcast.net> Jon LaBadie <jg*****@comcast.net> writes:
> I suspect the answer is "any of them" as the io buffers may be flushed because a write fills them.
Correct. If you want to check it before writing too much to the file,
call fflush() after your first output call and see if it succeeds.
In theory, the test is not 100% relevant, because fflush is only required
to deliver the bytes in the buffer to the OS, not to force the OS to
actually write them to a file (they may simply be moved from the stdio
buffer to the OS buffer and still not reach the disk). But that's the
best you can do, short of also trying to close the file.
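The check suggested above can be sketched in a few lines. The helper name write_and_flush is my own invention, not anything from the standard:

```c
#include <stdio.h>

/* Write a block and immediately deliver it to the OS, so that an
 * out-of-space condition surfaces now rather than at fclose() time.
 * Returns 0 on success, -1 on error.  A sketch only: fflush() still
 * guarantees delivery to the OS, not that the bytes reached the disk. */
int write_and_flush(FILE *fp, const void *buf, size_t len)
{
    if (fwrite(buf, 1, len, fp) != len)
        return -1;   /* short write: stdio itself already saw an error */
    if (fflush(fp) != 0)
        return -1;   /* delivery to the OS failed, e.g. out of space */
    return 0;
}
```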
Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Dan Pop wrote:
> [...] But that's the best you can do, short of also trying to close the file.
How can closing the file give you any more guarantees than flushing the
stdio buffers? I think even after a successful fclose, there is no
guarantee that the data has made it to disk (or tape, filing cabinet or
whatever).
Tobias.
--
unix http://www.faqs.org/faqs/by-newsgrou...rogrammer.html
clc http://www.eskimo.com/~scs/C-faq/top.html
fclc (french): http://www.isty-info.uvsq.fr/~rumeau/fclc/
"Tobias Oed" <to****@physics.odu.edu> wrote in message news:bf************@ID-97389.news.uni-berlin.de...
> How can closing the file give you any more guarantees than flushing the stdio buffers? I think even after a successful fclose, there is no guarantee that the data has made it to disk (or tape, filing cabinet or whatever).
First, fclose() will fflush() any data that may still be in stdio buffers. Then
it should do whatever the OS needs done to close the file and verify that
the data was written out.
Yes, if the OS doesn't report it on close you are stuck. Most do, though,
just for this reason.
Lately there have been some attempts at disk drives that cache internally
and report the write as succeeding before actually writing it. There have been
questions about doing that under OSes that believe the data is really written.
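To make the point about fclose() concrete, here is a minimal sketch of saving a string with every step checked; the function name save_message and its 0/-1 error convention are mine, not from the thread:

```c
#include <stdio.h>

/* Save a string to a file, checking every stdio call.  fclose()
 * flushes any data still in the stdio buffer and reports the OS's
 * close-time status, so its return value must be tested too. */
int save_message(const char *path, const char *msg)
{
    FILE *fp = fopen(path, "w");
    if (fp == NULL)
        return -1;
    if (fputs(msg, fp) == EOF) {
        fclose(fp);          /* best effort: we already have an error */
        return -1;
    }
    if (fclose(fp) == EOF)   /* the flush-on-close can still fail */
        return -1;
    return 0;
}
```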
-- glen
glen wrote:
> Lately there have been some attempts at disk drives that cache internally and report the write succeeding before actually writing it. There have been questions about doing that under OS that believe it is really written.
I doubt very much that a disk drive will on its own defer *ALLOCATION*
of disk space for a file write. I'm not so sure an OS will either.
It has to keep track of what to do with this data that's going to be
written, and in many file systems it's not necessary to go to the disk
to find a place on the disk to put it. I don't see the advantage of
deferring *ALLOCATION* of a disk block. (Deferring WRITING to the
disk block, yes, there's an advantage, it may be quickly updated
again).
I have heard claims by some disk manufacturers that their data-buffering
drives will, in case of a power failure, manage to stay alive long
enough to get all the data onto the disk, even if they generated
power from the spindle slowing down, and even if they had to map
in alternate sectors for ones that are discovered bad. (The problem
here is you could possibly run out of alternate sectors, at which
point, it's time to get a new drive.) I do not know if this is
true. The bigger the buffer, the more unlikely the claim is accurate.
In any case, a good UPS is a worthwhile investment if you're really
worried about the data getting on to the disk. Also, consider hardware
RAID sets.
Gordon L. Burditt
In <bf**********@news-reader6.wanadoo.fr> "jacob navia" <ja*********@jacob.remcomp.fr> writes:
> The fwrite function is supposed to return an error immediately if no space is available for the write. All OS caching (buffers, etc) should happen afterwards.
Chapter and verse, please.
The fwrite function is supposed to behave as if all the characters have
been written using the fputc() function, which is subject to the usual
stdio buffering.
Furthermore, the stdio functions are merely required to deliver the data
to the OS (aka the execution environment in C standardese), no guarantee
is made about what the OS is going to do with the data. If it chooses to
cache it, the C standard does nothing to prevent it.
Only after the file has been successfully closed do you have some
(relatively vague) guarantees from the standard.
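The buffering described above is easy to observe: with a fully buffered stream, a second reader does not see the bytes until fflush() delivers them to the OS. The helper below is a sketch of mine; exactly when delivery happens is implementation-defined, though disk files are normally fully buffered:

```c
#include <stdio.h>

/* Report the file's current size as seen by a separate reader,
 * i.e. how many bytes have actually been delivered to the OS. */
long visible_size(const char *path)
{
    FILE *fp = fopen(path, "rb");
    long n;
    if (fp == NULL)
        return -1;
    fseek(fp, 0, SEEK_END);
    n = ftell(fp);
    fclose(fp);
    return n;
}
```

fwrite()ing five bytes to a freshly opened, fully buffered stream typically leaves visible_size() at 0 until fflush() runs, which is exactly why an out-of-space error can be deferred past the fwrite() call.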
Dan
"Gordon Burditt" <go***********@sneaky.lerctr.org> wrote in message news:bf********@library1.airnews.net...
> I doubt very much that a disk drive will on its own defer *ALLOCATION* of disk space for a file write. I'm not so sure an OS will either. It has to keep track of what to do with this data that's going to be written, and in many file systems it's not necessary to go to the disk to find a place on the disk to put it. I don't see the advantage of deferring *ALLOCATION* of a disk block. (Deferring WRITING to the disk block, yes, there's an advantage, it may be quickly updated again).
Allocation is a strange word to apply to disk drives. Disk drives usually
contain a collection of data blocks which may be overwritten by data
supplied by the OS. The OS is responsible for allocating blocks to files,
and for storing the appropriate data on the disk, also in disk blocks, to keep
track of the data on the disk.
> I have heard claims by some disk manufacturers that their data-buffering drives will, in case of a power failure, manage to stay alive long enough to get all the data onto the disk, even if they generated power from the spindle slowing down, and even if they had to map in alternate sectors for ones that are discovered bad. [...] The bigger the buffer, the more unlikely the claim is accurate. In any case, a good UPS is a worthwhile investment if you're really worried about the data getting on to the disk. Also, consider hardware RAID sets.
I suppose one could describe the finding of alternate blocks to replace bad
blocks as allocation.
I don't know how many disk drives do write buffering and then report the
data as written. If they can guarantee to write the data when the disk is
powered down, it should be fine.
-- glen
glen wrote:
> Allocation is a strange word to apply to disk drives.
You're right. The subject line refers to checking for an out of
disk space condition. The question is whether the error reporting
(or discovery) will be delayed by the OS or the disk drive until
after fclose(). I claim it is unlikely to be an issue, as deferring
*ALLOCATION* doesn't buy you much - on the drive *OR IN THE OS
EITHER*. On the other hand, deferring reporting of *WRITE ERRORS*
is an issue.
(I do, however, know of an implementation with deferred *DE*allocation,
so you can delete a 1-gigabyte file, then try to write an 8-kilobyte
file, and run out of space. The space may not come back until a
couple of minutes after the program that did the deletion terminates.
FreeBSD with the "softupdates" option on the particular filesystem
does this. "softupdates" seems to do an excellent job of leaving
things in reasonable shape after a crash (usually power failure or
tripped-over cable))
> I suppose one could describe the finding of alternate blocks to replace bad blocks as allocation.
I would describe the failure to find alternate blocks (because they've
been used up) AFTER the power fails and the drive is trying to save
everything it can as a "write error with deferred (or nonexistent)
reporting".
> I don't know how many disk drives do write buffering and then report the data as written. If they can guarantee to write the data when the disk is powered down, it should be fine.
No drive can guarantee to write the data. It can try really hard
(including RAID setups with multiple copies of the data), and try
to give advance warnings if there might be problems. There are a
number of failure modes where a disk drive can suddenly be unable
to ever write (and perhaps never read either) anything again. One
of them is commonly called a "head crash". Another one is where
the drive is physically destroyed by, say, military weapons, nuclear
or not. EMP from nuclear weapons is one scenario where a lot of
electronics may burn out all at once.
Gordon L. Burditt
In article <bf********@library2.airnews.net>
Gordon Burditt <go***********@sneaky.lerctr.org> writes:
> You're right. The subject line refers to checking for an out of disk space condition. The question is whether the error reporting (or discovery) will be delayed by the OS or the disk drive until after fclose(). I claim it is unlikely to be an issue, as deferring *ALLOCATION* doesn't buy you much - on the drive *OR IN THE OS EITHER*.
Perhaps not; but it does (or did) occur in practice, and surprised
quite a few people in the process, when Sun first began using NFS.
It turns out that if you have a file quota on the file server, and
you go over it, stdio generally reports the EDQUOT error on fclose().
You can get the error slightly earlier by fflush()ing output files,
then fsync()ing the underlying file descriptor, before fclose()ing
them; but this is of course not portable to "non-Unixy" systems,
where there is no fsync().
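The fflush()-then-fsync() sequence might look like this; note that fsync() and fileno() are POSIX, not standard C, so this sketch is Unix-only:

```c
#include <stdio.h>
#include <unistd.h>   /* fsync(), fileno(): POSIX, not standard C */

/* Push the stdio buffer to the OS, then ask the OS to push its own
 * cache toward the device.  Errors such as EDQUOT or ENOSPC that
 * would otherwise wait for fclose() can show up here instead. */
int flush_to_disk(FILE *fp)
{
    if (fflush(fp) != 0)           /* stdio buffer -> OS */
        return -1;
    if (fsync(fileno(fp)) != 0)    /* OS cache -> device */
        return -1;
    return 0;
}
```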
> (I do, however, know of an implementation with deferred *DE*allocation, so you can delete a 1-gigabyte file, then try to write an 8-kilobyte file, and run out of space. The space may not come back until a couple of minutes after the program that did the deletion terminates. FreeBSD with the "softupdates" option on the particular filesystem does this.
The soft-update code is not supposed to do this any more. In
particular, if block allocation is about to fail, the file system
is supposed to call into an "execute deferred deallocation" routine
that will restore some free space in such cases.
(I have no idea which versions of FreeBSD have the newer code, but
it is not all *that* "newer". I believe I got the code from Kirk
sometime last year. The UFS2 changes were mostly done by then as
well.)
--
In-Real-Life: Chris Torek, Wind River Systems (BSD engineering)
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://67.40.109.61/torek/index.html (for the moment)
Reading email is like searching for food in the garbage, thanks to spammers.