fwrite output is not efficient (fast)?

Hi,

Recently a C programmer told me that using the fwrite/fopen functions
is not efficient, because the output they write to the file is
actually buffered and gets written late. Is that true?

regards,

...ab
Oct 24 '08 #1
Abubakar <ab*******@gmail.com> wrote:
Recently a C programmer told me that using the fwrite/fopen functions
is not efficient, because the output they write to the file is
actually buffered and gets written late.
That most probably wasn't a C programmer, but a GNU-almost-POSIX-
with-a-bit-of-C-thrown-in-for-appearance's-sake programmer.
Is that true?
As phrased, it is not true. It may be true for some systems, under some
circumstances; but you should never worry about optimisation until you
_know_ that you have to, and don't just think you might.

Richard
Oct 24 '08 #2
>Recently a C programmer told me that using the fwrite/fopen functions
>is not efficient, because the output they write to the file is
>actually buffered and gets written late. Is that true?
For some programs I have benchmarked (which were essentially doing
disk-to-disk file copies), using single-character read() and write()
calls to copy files chews up a lot of CPU time. Switching to
getc()/putc()/fopen() calls reduced the CPU used to copy a given
file by a factor of 10 and the wall clock time to finish copying
the file by about 10%. In this particular situation, buffering
helped speed things up a lot.
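To illustrate, here is a minimal sketch only (not the program that was
actually benchmarked); it just contrasts one system call per byte with
stdio's buffered getc()/putc():

#include <stdio.h>
#include <unistd.h>

/* Unbuffered copy: one read() and one write() system call per byte. */
static void copy_syscall(int in, int out)
{
    char c;
    while (read(in, &c, 1) == 1)
        if (write(out, &c, 1) != 1)
            break;
}

/* Buffered copy: getc()/putc() go through stdio's buffer, so the
   underlying system calls move a whole block at a time. */
static void copy_stdio(FILE *in, FILE *out)
{
    int c;
    while ((c = getc(in)) != EOF)
        putc(c, out);
}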
Oct 24 '08 #3
Abubakar said:
Hi,

Recently a C programmer told me that using the fwrite/fopen functions
is not efficient, because the output they write to the file is
actually buffered and gets written late. Is that true?
I suggest you ask your friend:

1) why he thinks fopen outputs anything at all to a file;
2) why he thinks the buffering status of a file depends entirely on fopen
or fwrite;
3) what he thinks setbuf and setvbuf are for;
4) what he means by "efficient".

You may well find the answers to those questions illuminating. On the other
hand, you might not.

1), 2), and 3) should all send him back-pedalling, and 4) should at least
make him stop and think about the real issues here.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Oct 24 '08 #4
Abubakar <ab*******@gmail.com> writes:
Hi,

Recently a C programmer told me that using the fwrite/fopen functions
is not efficient, because the output they write to the file is
actually buffered and gets written late. Is that true?
It's true that the output is buffered (usually). Depending on your
situation, that may be good or bad. If you are calling fwrite with
large chunks of data (many kbytes), the buffering will not gain you
much, but a sensible implementation will bypass it and you'll only have
a small amount of overhead. If you are writing small chunks or single
characters at a time (for instance, with fprintf() or putc()), the
buffering can speed things up drastically, since it cuts down on calls
to the operating system, which can be expensive.

It is true in this case that some data may not be written as soon as you
call fwrite(), but this doesn't slow down the data transfer in the long
run. If it's important for some reason that a small bit of data go out
immediately, you can call fflush() or use setvbuf() to change the
buffering mode, but at a cost to overall efficiency.
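For instance, a minimal sketch of those two options (the file name is
made up; this is not code from the original post):

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("log.txt", "w");   /* hypothetical output file */
    if (fp == NULL)
        return 1;

    /* Optional: switch to line buffering (or _IONBF for none);
       setvbuf() must be called before any other operation on the stream. */
    setvbuf(fp, NULL, _IOLBF, BUFSIZ);

    fwrite("ready\n", 1, 6, fp);
    fflush(fp);    /* push anything still buffered out to the OS now */

    fclose(fp);
    return 0;
}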
Oct 24 '08 #5
On Oct 24, 2:03 am, Abubakar <ab*******@gmail.com> wrote:
Hi,

Recently a C programmer told me that using the fwrite/fopen functions
is not efficient, because the output they write to the file is
actually buffered and gets written late. Is that true?
The purpose of buffering is actually efficiency; transferring a large
block of data to disk at once is faster than transferring it byte by
byte (and by this I don't mean that single-byte functions such as
getc() and putc() are inefficient; these functions also read/write to
or from a buffer in memory).

This is usually true not only for disk files, but also for I/O devices
in general such as network sockets. In some applications, however, it
can be convenient to develop a more specialized I/O layer model that
can provide more functionality and improve efficiency in specific
situations.

(If you're just worried that the data will remain in the buffer
indefinitely, just call fflush() or close the file.)

Sebastian

Oct 24 '08 #6
Well, he is saying that using the *native* read and write calls on
file descriptors to do the same task is much faster than fwrite etc.
He says the FILE * stream used with fopen/fwrite buffers data and is
not as fast or predictable as read/write are.

I have given him the link to this discussion, and hopefully he will
be posting his replies soon.

On Oct 24, 2:00 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
Abubakar said:
Hi,
Recently a C programmer told me that using the fwrite/fopen functions
is not efficient, because the output they write to the file is
actually buffered and gets written late. Is that true?

I suggest you ask your friend:

1) why he thinks fopen outputs anything at all to a file;
2) why he thinks the buffering status of a file depends entirely on fopen
or fwrite;
3) what he thinks setbuf and setvbuf are for;
4) what he means by "efficient".

You may well find the answers to those questions illuminating. On the other
hand, you might not.

1), 2), and 3) should all send him back-pedalling, and 4) should at least
make him stop and think about the real issues here.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Oct 24 '08 #7
Abubakar said:
Well, he is saying that using the *native* read and write calls on
file descriptors to do the same task is much faster than fwrite etc.
I suggest you ask your friend the questions that I asked you to ask him.
He says the FILE * stream used with fopen/fwrite buffers data and is
not as fast or predictable as read/write are.
FILE * is a pointer type. It doesn't have a speed.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Oct 24 '08 #8
Well, he read the link, and now he says that the replies by Nate
Eldredge and the guy whose email starts with "s0s" prove that what he
says is right, so there is no need for him to reply to anything.
Ummm, I don't know what's up. I am going to be posting more questions
to clear up the things he has told me, because he has much more
experience in C than I do. Thanks for the replies so far; if you guys
have more comments, please continue posting.

On Oct 24, 1:49 pm, gordonb.l2...@burditt.org (Gordon Burditt) wrote:
Recently a C programmer told me that using the fwrite/fopen functions
is not efficient, because the output they write to the file is
actually buffered and gets written late. Is that true?

For some programs I have benchmarked (which were essentially doing
disk-to-disk file copies), using single-character read() and write()
calls to copy files chews up a lot of CPU time. Switching to
getc()/putc()/fopen() calls reduced the CPU used to copy a given
file by a factor of 10 and the wall clock time to finish copying
the file by about 10%. In this particular situation, buffering
helped speed things up a lot.
Oct 24 '08 #9
Abubakar wrote:
Hi,

Recently a C programmer told me that using the fwrite/fopen functions
is not efficient, because the output they write to the file is
actually buffered and gets written late. Is that true?
It's actually backwards, in general. It would be closer to being
accurate to say that they are more efficient because they are buffered
(but even that statement isn't quite true). The purpose of buffering
the data is to take advantage of the fact that, in most cases, it's more
efficient to transfer many bytes at a time. When transferring a large
number of bytes, using unbuffered writes gets the first byte out earlier
than using buffered writes; but gets the last byte out later.

Which approach is better depends upon your application, but in many
contexts the earlier bytes can't (or at least won't) even be used until
after the later bytes have been written, which makes buffered I/O the
clear winner.

Furthermore, C gives you the option of turning off buffering with
setvbuf(). If you do, then the behavior should be quite similar to what
you would get from using system-specific unbuffered I/O functions (such
as write() on Unix-like systems).
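As a rough sketch of that (the file name is made up; setvbuf() has to
be called before any other operation on the stream):

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("data.bin", "wb");   /* hypothetical file name */
    if (fp == NULL)
        return 1;

    /* Turn stdio buffering off: each fwrite() now goes to the OS
       right away, much like a raw write() on a file descriptor. */
    setvbuf(fp, NULL, _IONBF, 0);

    const char msg[] = "unbuffered record\n";
    fwrite(msg, 1, sizeof msg - 1, fp);

    fclose(fp);
    return 0;
}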
Oct 24 '08 #10
