51MB size limit with fopen in append mode

I have a program that writes to a log file. It's compiled on RH Linux 7.3
(Kernel 2.4.18-18.7). It's using fopen in append mode. When the file reaches
51200000 bytes in size, the program will no longer write to the file. When
this happens, fopen and fputs do not return an error. I've been researching
large file support for Linux, and it all has to do with the regular 2-gig file
size limit. If it's something obvious, sorry -- I'm a C newbie. Here's the
code snippet:

/* open logfile */
if ((lop = fopen(log, "a")) == NULL) {
    fprintf(op, "\n%s: can't open %s for writing\n", prog, log);
    fclose(op);
    exit(1);
}

/* write To: line to logfile */
fputs(outputline, lop);
fclose(lop);
if (ferror(lop)) {
    fprintf(op, "\n%s: error writing to %s\n", prog, log);
}

I'm all ears if anyone has any suggestions. Thanks.
-Aaron

--
To contact me via email, substitute
'aaron' for 'spam' in my address.
http://www.towerdata.com
Nov 14 '05 #1
14 replies, 4704 views
Aaron Couts wrote:

I have a program that writes to a log file. It's compiled on RH Linux 7.3
(Kernel 2.4.18-18.7). It's using fopen in append mode. When the file reaches
51200000 bytes in size, the program will no longer write to the file. When
this happens, fopen and fputs do not return an error. I've been researching
large file support for Linux, and it all has to do with the regular 2-gig file
size limit. If it's something obvious, sorry -- I'm a C newbie. Here's the
code snippet:

[...]

This is unrelated to the C language; it's a matter of the O/S itself.

Try "man ulimit", and if that doesn't help, you could probably get a
better answer in one of the *nix newsgroups, such as within the
"comp.unix.*" or "comp.os.linux.*" trees.

--

+---------+----------------------------------+-----------------------------+
| Kenneth | kenbrody at spamcop.net | "The opinions expressed |
| J. | http://www.hvcomputer.com | herein are not necessarily |
| Brody | http://www.fptech.com | those of fP Technologies." |
+---------+----------------------------------+-----------------------------+

Nov 14 '05 #2

"Aaron Couts" <sp**@couts.org> wrote in message
It's using fopen in append mode. When the file reaches
51200000 bytes in size, the program will no longer write to the file.
When this happens, fopen and fputs do not return an error.

This is the sort of thing that happens when you start straining the limits
of the platform. fputs() should return an error if the text is not
successfully written.
What you need to do, if you really do need 51MB files, is use lower-level
platform-specific calls to manage your IO.
Nov 14 '05 #3
Aaron Couts wrote:

I have a program that writes to a log file. It's compiled on RH
Linux 7.3 (Kernel 2.4.18-18.7). It's using fopen in append mode.
When the file reaches 51200000 bytes in size, the program will no
longer write to the file. When this happens, fopen and fputs do
not return an error. I've been researching large file support
for Linux, and it all has to do with the regular 2-gig file size
limit. If it's something obvious, sorry -- I'm a C newbie.
Here's the code snippet:

/* open logfile */
if ((lop = fopen(log, "a")) == NULL) {
    fprintf(op, "\n%s: can't open %s for writing\n", prog, log);
    fclose(op);
    exit(1);
}

/* write To: line to logfile */
fputs(outputline, lop);
fclose(lop);
if (ferror(lop)) {
    fprintf(op, "\n%s: error writing to %s\n", prog, log);
}

I'm all ears if anyone has any suggestions. Thanks.


I see nothing outstandingly wrong, apart from the exit that should
be "exit(EXIT_FAILURE)" and the failure to check the returns from
fputs and fclose before the ferror check. I believe there is no
guarantee that ferror gets set on those errors.
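
Something along these lines, reusing the OP's names (a sketch, untested):

/* write To: line to logfile -- check every return, and test
   ferror() before fclose(), not after */
if (fputs(outputline, lop) == EOF)
    fprintf(op, "\n%s: error writing to %s\n", prog, log);
if (ferror(lop))
    fprintf(op, "\n%s: stream error on %s\n", prog, log);
if (fclose(lop) == EOF)
    fprintf(op, "\n%s: error closing %s\n", prog, log);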

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!

Nov 14 '05 #4


Aaron Couts wrote:
I have a program that writes to a log file. It's compiled on RH Linux 7.3
(Kernel 2.4.18-18.7). It's using fopen in append mode. When the file reaches
51200000 bytes in size, the program will no longer write to the file. When
this happens, fopen and fputs do not return an error. I've been researching
large file support for Linux, and it all has to do with the regular 2-gig file
size limit. If it's something obvious, sorry -- I'm a C newbie. Here's the
code snippet:

/* open logfile */
if ((lop = fopen(log, "a")) == NULL) {
    fprintf(op, "\n%s: can't open %s for writing\n", prog, log);
    fclose(op);
    exit(1);
}

/* write To: line to logfile */
fputs(outputline, lop);
fclose(lop);
if (ferror(lop)) {
    fprintf(op, "\n%s: error writing to %s\n", prog, log);
}

I'm all ears if anyone has any suggestions. Thanks.
-Aaron


The fopen() call, if the file exists, did not fail. If the
file exceeded some internal limit and you specified "a", then
one could argue that it should have failed, but I doubt
that any implementation of fopen() does that.

More importantly, you are checking ferror() AFTER
the fclose() call. Fclose() succeeded.

From my man page for fclose() (your mileage may vary)

"Upon successful completion 0 is returned. Otherwise,
EOF is returned and the global variable errno is set
to indicate the error. In either case
no further access to the stream is possible."

You should check ferror() before closing the stream.

(Then you should check errno to see why.)
--
Ñ
"It is impossible to make anything foolproof because fools are so
ingenious" - A. Bloch

Nov 14 '05 #5
Aaron Couts wrote:

I have a program that writes to a log file. It's compiled on RH Linux 7.3
(Kernel 2.4.18-18.7). It's using fopen in append mode. When the file reaches
51200000 bytes in size, the program will no longer write to the file. When
this happens, fopen and fputs do not return an error. I've been researching
large file support for Linux, and it all has to do with the regular 2-gig file
size limit. If it's something obvious, sorry -- I'm a C newbie. Here's the
code snippet:

/* open logfile */
if ((lop = fopen(log, "a")) == NULL) {
    fprintf(op, "\n%s: can't open %s for writing\n", prog, log);
    fclose(op);
    exit(1);
}

/* write To: line to logfile */
fputs(outputline, lop);
fclose(lop);
if (ferror(lop)) {
    fprintf(op, "\n%s: error writing to %s\n", prog, log);
}


The problem/limitation/whatever doesn't appear to have
anything to do with C per se, and you'll probably need to
look elsewhere for the solution. However, there's a bug in
the C snippet: You're calling ferror() on a FILE* that's
already been fclose()d -- which means, more or less, that
anything can happen. It's at least possible that "anything"
is a completely spurious error indication, and that you may
have no actual problem at all.

There are at least two ways to check for success of the
output functions: Either check the value returned by each
output-generating call (e.g., `if (fputs(...) == EOF)') or
just ignore the returned values and call ferror() at suitable
moments (prior to fclose()). Also, you should check the
value returned by fclose() itself. Yes, a close operation can
fail, and such a failure once caused me to make a plane trip
to an irate customer's site so I could grovel on his office
floor and pray forgiveness for my then employer.

--
Er*********@sun.com
Nov 14 '05 #6
Malcolm wrote:
"Aaron Couts" <sp**@couts.org> wrote in message
It's using fopen in append mode. When the file reaches
51200000 bytes in size, the program will no longer write to the file.
When this happens, fopen and fputs do not return an error.

This is the sort of thing that happens when you start straining the limits
of the platform.


Writing a file in excess of 51 MB is not "straining the limits" by a long shot.
fputs() should return an error if the text is not successfully written.
In particular, it should return EOF if the write was unsuccessful.
What you need to do, if you really do need 51MB files, is use lower-level
platform-specific calls to manage your IO.


Pardon my French, but this is nonsense. Standard C handles big files
like a charm, even more so on the operating system in question. And
these files aren't even big, just not small.

What the OP should do is run "quota" and "df", and see if this turns up
anything useful.

Best regards,

Sidney
Nov 14 '05 #7
What the OP should do is run "quota" and "df", and see if this turns up
anything useful.


Somewhere in there he should be able to examine errno and
look for EDQUOT or ENOSPC.
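
For example, a sketch (the helper is hypothetical, and the errno
values are POSIX, not standard C):

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Write one line, flush it, and report why it failed if it did. */
static int log_line(FILE *lop, const char *line)
{
    errno = 0;
    if (fputs(line, lop) == EOF || fflush(lop) == EOF) {
        /* Likely candidates: EFBIG (process file-size limit),
           EDQUOT (quota exceeded), ENOSPC (disk full). */
        fprintf(stderr, "write failed: %s\n", strerror(errno));
        return -1;
    }
    return 0;
}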
Nov 14 '05 #8
I should have specified that there are no disk quotas on the system, and drive
space is not an issue. I've seen the same issue on two different machines.

Thanks for the suggestions. I'll check ferror after each f... call and see if
I can find out more info.

Also, this program is running as the user 'nobody'; perhaps the OS has some
special deal with that user. I'll look into that as well.

--
To contact me via email, substitute
'aaron' for 'spam' in my address.
http://www.towerdata.com
Nov 14 '05 #9


Eric Sosman wrote:

[SNIP]
Also, you should check the
value returned by fclose() itself. Yes, a close operation can
fail, and such a failure once caused me to make a plane trip
to an irate customer's site so I could grovel on his office
floor and pray forgiveness for my then employer.


Somehow, I'm heartened to know that I'm not the only one
who has had to go through this ordeal, Eric!
(And 'tweren't my code either!)

--
Ñ
"It is impossible to make anything foolproof because fools are so
ingenious" - A. Bloch

Nov 14 '05 #10
Aaron Couts wrote:
[much snippage]

fputs(outputline, lop);
fclose(lop);
if (ferror(lop)) {
    fprintf(op, "\n%s: error writing to %s\n", prog, log);
}

In article <news:40**************@sun.com>
Eric Sosman <Er*********@Sun.COM> writes (in part):
[more snippage] There are at least two ways to check for success of the
output functions: Either check the value returned by each
output-generating call (e.g., `if (fputs(...) == EOF)') or
just ignore the returned values and call ferror() at suitable
moments (prior to fclose()). Also, you should check the
value returned by fclose() itself. Yes, a close operation can
fail, and ...


Indeed.

For the lazy (which often includes me :-) ) I recommend something
like the following sequence for "closing a file to which output has
been written":

const char *s;
...
s = NULL;
if (fflush(fp)) {
    s = strerror(errno);
    fprintf(stderr, "%s: error writing %s: %s\n", progname, filename, s);
}
if (s == NULL && ferror(fp))
    fprintf(stderr, "%s: error writing %s (but I can't remember why)\n",
        progname, filename);
if (fclose(fp)) {
    s = strerror(errno);
    fprintf(stderr, "%s: error closing %s: %s\n", progname, filename, s);
}

Although there is no guarantee that errno is set on a failing
fflush() or fclose() (or indeed any output function, if you are
not lazy and check for errors immediately), there is a significant
chance that it *was* set, and including the result from strerror()
can help figure out what went wrong. If you want to be especially
explicit and/or not trust the system to set errno, you might do
something like:

fprintf(stderr, "%s: error writing %s --\n"
    "\tmost recent system error recorded was \"%s\",\n"
    "\talthough this may have nothing to do with the problem.\n",
    progname, filename, strerror(errno));

If you are "non-lazy" and check every output function for success,
you have a better chance of capturing a useful errno value. (Then
you do not need the "ferror(fp)" test either.)
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Nov 14 '05 #11
Eric Sosman <Er*********@sun.com> spoke thus:
Yes, a close operation can
fail, and such a failure once caused me to make a plane trip
to an irate customer's site so I could grovel on his office
floor and pray forgiveness for my then employer.


Care to elaborate? Or is the memory too painful? ;)

--
Christopher Benson-Manica | I *should* know what I'm talking about - if I
ataru(at)cyberspace.org | don't, I need to know. Flames welcome.
Nov 14 '05 #12


Christopher Benson-Manica wrote:
Eric Sosman <Er*********@sun.com> spoke thus:

Yes, a close operation can
fail, and such a failure once caused me to make a plane trip
to an irate customer's site so I could grovel on his office
floor and pray forgiveness for my then employer.



Care to elaborate? Or is the memory too painful? ;)


Maybe Eric doesn't care to elaborate, but I will.

What happens when you close() an invalid file
descriptor (ANS: EBADF) and never check
the return value from close()?

It probably indicates that somewhere in your code
the FD was inadvertently modified, probably due to
poor naming conventions or some such. The FD
that was really open never gets closed. You have
a lot of files open. If you have to run for months at
a time without a reboot, sooner or later you run
out of O/S structures for all those open files
of yours. The result is left as a thought
experiment for the reader :) .

In essence this is equivalent to a "memory leak": you
"free()" the wrong resource and wind up tying up
resources unnecessarily until there are none left.
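
The cheap insurance is a wrapper like this (POSIX close(), a sketch):

#include <stdio.h>
#include <unistd.h>

/* Close fd and complain if the descriptor was bad. */
static void checked_close(int fd)
{
    if (close(fd) == -1)
        perror("close");    /* EBADF usually means fd was clobbered */
}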

--
Ñ
"It is impossible to make anything foolproof because fools are so
ingenious" - A. Bloch

Nov 14 '05 #13
Christopher Benson-Manica wrote:

Eric Sosman <Er*********@sun.com> spoke thus:
Yes, a close operation can
fail, and such a failure once caused me to make a plane trip
to an irate customer's site so I could grovel on his office
floor and pray forgiveness for my then employer.


Care to elaborate? Or is the memory too painful? ;)


I've told the tale before, so here's an abbreviated synopsis.
The program was a document-processing system, and when the user
saved the new version of an edited document the system did

stm = fopen("doc.tmp", "w");
if (stm == NULL) ...
while (more to write)
    if (fwrite(buf, siz, cnt, stm) != cnt) ...
if (fflush(stm) != 0) ...
fclose (stm);
/* Document has been written successfully. */
remove ("doc.bak");
if (rename ("doc.doc", "doc.bak") != 0) ...
if (rename ("doc.tmp", "doc.doc") != 0) ...

Customer loaded his document, did a bunch of editing, and
then saved it. Then he did a little more editing -- possibly
a spell-check or something -- saved again, and went home. When
he came in the next morning the document file was truncated and
unusable, and so was the backup file. Since everything was
checked for errors, how did this happen?

Well, "everything" omitted the fclose(), which the author
apparently thought unnecessary to check -- after all, all the
"actual" output operations were checked, and fclose() is just
sort of a disconnector, right? Wrong. On the system at hand
(as on many systems) there were multiple layers in the I/O
system, and the fflush() did no more than push the data from
one layer to the next. Apparently some buffers still remained
in that further layer, and they weren't actually drained to
disk until fclose() tore down the entire connection to the file.
And when the further layer tried to drain its buffers to the
disk, it found that the user had exhausted his disk space
quota, and could allocate no more space to hold the data. The
further layer returned an error indication back to fclose(), but
the application ignored it.

Result from the first save: doc.bak holds a perfectly good
document without any edits, and doc.doc is a damaged file. After
the second save, both doc.bak and doc.doc are damaged files, and
no intact document -- with or without the user's edits -- remains
on the disk. As Murphy would have it, the user was in fact the
boss of the entire department, the guy who was negotiating with
us to buy upgrades and additional licenses ...

Heed the Sixth Commandment!

http://www.lysator.liu.se/c/ten-commandments.html
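
In the same pseudocode style as above, the missing check was simply:

    if (fclose(stm) != 0) ...  /* stop here; leave doc.bak and doc.doc alone */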

--
Er*********@sun.com
Nov 14 '05 #14
Aaron Couts wrote:

I should have specified that there are no disk quotas on the system, and drive
space is not an issue. I've seen the same issue on two different machines.

[...]

Did you check the ulimit?

--

+---------+----------------------------------+-----------------------------+
| Kenneth | kenbrody at spamcop.net | "The opinions expressed |
| J. | http://www.hvcomputer.com | herein are not necessarily |
| Brody | http://www.fptech.com | those of fP Technologies." |
+---------+----------------------------------+-----------------------------+

Nov 14 '05 #15
