
fwrite output is not efficient (fast) ?

Hi,

recently some C programmer told me that using fwrite/fopen functions
are not efficient because the output that they do to the file is
actually buffered and gets late in writing. Is that true?

regards,

...ab
Oct 24 '08 #1
25 Replies


Abubakar <ab*******@gmail.com> wrote:
recently some C programmer told me that using fwrite/fopen functions
are not efficient because the output that they do to the file is
actually buffered and gets late in writing.
That most probably wasn't a C programmer, but a GNU-almost-POSIX-
with-a-bit-of-C-thrown-in-for-appearance's-sake programmer.
Is that true?
As phrased, it is not true. It may be true for some systems, under some
circumstances; but you should never worry about optimisation until you
_know_ that you have to, and don't just think you might.

Richard
Oct 24 '08 #2

>recently some C programmer told me that using fwrite/fopen functions
>are not efficient because the output that they do to the file is
>actually buffered and gets late in writing. Is that true?
For some programs I have benchmarked (which were essentially doing
disk-to-disk file copies), using single-character read() and write()
calls to copy files chews up a lot of CPU time. Switching to
getc()/putc()/fopen() calls reduced the CPU used to copy a given
file by a factor of 10 and the wall clock time to finish copying
the file by about 10%. In this particular situation, buffering
helped speed things up a lot.
Oct 24 '08 #3

Abubakar said:
Hi,

recently some C programmer told me that using fwrite/fopen functions
are not efficient because the output that they do to the file is
actually buffered and gets late in writing. Is that true?
I suggest you ask your friend:

1) why he thinks fopen outputs anything at all to a file;
2) why he thinks the buffering status of a file depends entirely on fopen
or fwrite;
3) what he thinks setbuf and setvbuf are for;
4) what he means by "efficient".

You may well find the answers to those questions illuminating. On the other
hand, you might not.

1), 2), and 3) should all send him back-pedalling, and 4) should at least
make him stop and think about the real issues here.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Oct 24 '08 #4

Abubakar <ab*******@gmail.com> writes:
Hi,

recently some C programmer told me that using fwrite/fopen functions
are not efficient because the output that they do to the file is
actually buffered and gets late in writing. Is that true?
It's true that the output is buffered (usually). Depending on your
situation, that may be good or bad. If you are calling fwrite with
large chunks of data (many kbytes), the buffering will not gain you
much, but a sensible implementation will bypass it and you'll only have
a small amount of overhead. If you are writing small chunks or single
characters at a time (for instance, with fprintf() or putc()), the
buffering can speed things up drastically, since it cuts down on calls
to the operating system, which can be expensive.

It is true in this case that some data may not be written as soon as you
call fwrite(), but this doesn't slow down the data transfer in the long
run. If it's important for some reason that a small bit of data go out
immediately, you can call fflush() or use setvbuf() to change the
buffering mode, but at a cost to overall efficiency.
Oct 24 '08 #5

On Oct 24, 2:03 am, Abubakar <ab*******@gmail.com> wrote:
Hi,

recently some C programmer told me that using fwrite/fopen functions
are not efficient because the output that they do to the file is
actually buffered and gets late in writing. Is that true?
The purpose of buffering is actually efficiency; transferring a large
block of data to disk at once is faster than transferring it byte by
byte (and by this I don't mean that single-byte functions such as
getc() and putc() are inefficient; these functions also read/write to
or from a buffer in memory).

This is usually true not only for disk files, but also for I/O devices
in general such as network sockets. In some applications, however, it
can be convenient to develop a more specialized I/O layer model that
can provide more functionality and improve efficiency in specific
situations.

(If you're just worried that the data will remain in the buffer
indefinitely, just call fflush() or close the file.)

Sebastian

Oct 24 '08 #6

well he is saying that using the *native* read and write to do the
same task using file descriptors is much faster than the fwrite etc.
He says using the FILE * that is used in case of the fopen/fwrite
buffers data and is not as fast and predictable as read write are.

I have given him the link to this discussion and he is going to be
posting his replies soon hopefully.

On Oct 24, 2:00 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
Abubakar said:
Hi,
recently some C programmer told me that using fwrite/fopen functions
are not efficient because the output that they do to the file is
actually buffered and gets late in writing. Is that true?

I suggest you ask your friend:

1) why he thinks fopen outputs anything at all to a file;
2) why he thinks the buffering status of a file depends entirely on fopen
or fwrite;
3) what he thinks setbuf and setvbuf are for;
4) what he means by "efficient".

You may well find the answers to those questions illuminating. On the other
hand, you might not.

1), 2), and 3) should all send him back-pedalling, and 4) should at least
make him stop and think about the real issues here.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Oct 24 '08 #7

Abubakar said:
well he is saying that using the *native* read and write to do the
same task using file descriptors is much faster than the fwrite etc.
I suggest you ask your friend the questions that I asked you to ask him.
He says using the FILE * that is used in case of the fopen/fwrite
buffers data and is not as fast and predictable as read write are.
FILE * is a pointer type. It doesn't have a speed.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Oct 24 '08 #8

Well he read the link and now he says that the replies by Nate
Eldredge and the guy whose email starts with "s0s" have said what
proves what he says is right, so he says there is no need to reply to
anything. Hmm, I don't know what's up. I am going to be posting more
questions to clear up the things that he has told me, because he has
much more experience in C than me. Thanks for the replies so far; if
you guys have more comments please continue posting.

On Oct 24, 1:49 pm, gordonb.l2...@burditt.org (Gordon Burditt) wrote:
recently some C programmer told me that using fwrite/fopen functions
are not efficient because the output that they do to the file is
actually buffered and gets late in writing. Is that true?

For some programs I have benchmarked (which were essentially doing
disk-to-disk file copies), using single-character read() and write()
calls to copy files chews up a lot of CPU time. Switching to
getc()/putc()/fopen() calls reduced the CPU used to copy a given
file by a factor of 10 and the wall clock time to finish copying
the file by about 10%. In this particular situation, buffering
helped speed things up a lot.
Oct 24 '08 #9

Abubakar wrote:
Hi,

recently some C programmer told me that using fwrite/fopen functions
are not efficient because the output that they do to the file is
actually buffered and gets late in writing. Is that true?
It's actually backwards, in general. It would be closer to being
accurate to say that they are more efficient because they are buffered
(but even that statement isn't quite true). The purpose of buffering
the data is to take advantage of the fact that, in most cases, it's more
efficient to transfer many bytes at a time. When transferring a large
number of bytes, using unbuffered writes gets the first byte out earlier
than using buffered writes; but gets the last byte out later.

Which approach is better depends upon your application, but in many
contexts the earlier bytes can't (or at least won't) even be used until
after the later bytes have been written, which makes buffered I/O the
clear winner.

Furthermore, C gives you the option of turning off buffering with
setvbuf(). If you do, then the behavior should be quite similar to that
you would get from using system-specific unbuffered I/O functions (such
as write() on Unix-like systems).
Oct 24 '08 #10

Abubakar wrote:
well he is saying that using the *native* read and write to do the
same task using file descriptors is much faster than the fwrite etc.
He says using the FILE * that is used in case of the fopen/fwrite
buffers data and is not as fast and predictable as read write are.
fwrite() buffers data only if you don't tell it not to, by calling
setvbuf(). When talking about buffered I/O, he's right about the
predictability, if he's talking about predicting when data gets written
to the file. That depends upon the details of the buffering scheme,
which are in general unknown to the user.

However, what he's saying about speed is true only if he's mainly
concerned with the speed with which the first byte reaches a file. If
he's concerned with the speed with which the last byte reaches the file,
buffering is generally faster, at least with sufficiently large files.
Oct 24 '08 #11

James Kuyper <ja*********@verizon.net> wrote:
Abubakar wrote:
well he is saying that using the *native* read and write to do the
same task using file descriptors is much faster than the fwrite etc.
He says using the FILE * that is used in case of the fopen/fwrite
buffers data and is not as fast and predictable as read write are.

fwrite() buffers data only if you don't tell it not to, by calling
setvbuf(). When talking about buffered I/O, he's right about the
predictability, if he's talking about predicting when data gets written
to the file.
Not even that. The OS may have its own buffers, and so may the drive
firmware. Bottom line, if you want absolute file security, you have to
nail it down to the hardware level. If you don't need that, 99+% of the
time, ISO C <stdio.h> functions are good enough.

Richard
Oct 24 '08 #12

Abubakar said:
Well he read the link and now he says that the replies by Nate
Eldredge and the guy whose email starts with "s0s" have said what
proves what he says is right
They have both argued the opposite point of view. They may or may not have
proved /their/ case, but they certainly haven't proved /his/.
so he says there is no need to reply to anything.
Of course not. If he wishes to continue in blissful ignorance, that's his
choice! :-)

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Oct 24 '08 #13

On 24 Oct, 08:03, Abubakar <abubak...@gmail.com> wrote:
recently some C programmer told me that using fwrite/fopen functions
are not efficient because the output that they do to the file is
actually buffered and gets late in writing. Is that true?
No. Sometimes using fwrite() is less efficient than using
(the non-standard function) write(), sometimes it is
more efficient. For example:

fwrite( buf, 1, 8192, fp );

is likely to be less efficient than

write( fd, buf, 8192 );

but

for( i = 0; i < 8192; i++ )
fwrite( buf + i, 1, 1, fp );

is likely to be more efficient than

for( i = 0; i < 8192; i++ )
write( fd, buf + i, 1 );

In the loop, fwrite() is likely to be more efficient
**because of** the buffering. In the first example, fwrite()
is likely to be less efficient because there is (usually)
an extra data move that doesn't occur with write().

However, the portability of fwrite() is a significant
feature to be considered, as is its ease of use. Unless
you have a verified, measured need to replace fwrite()
with write(), don't bother.
Oct 24 '08 #14

Abubakar wrote:
>
well he is saying that using the *native* read and write to do the
same task using file descriptors is much faster than the fwrite etc.
He says using the FILE * that is used in case of the fopen/fwrite
buffers data and is not as fast and predictable as read write are.

I have given him the link to this discussion and he is going to be
posting his replies soon hopefully.
Please do not top-post. Your answer belongs after (or intermixed
with) the quoted material to which you reply, after snipping all
irrelevant material. Your top-posting has lost all continuity from
the thread. See the following links:

<http://www.catb.org/~esr/faqs/smart-questions.html>
<http://www.caliburn.nl/topposting.html>
<http://www.netmeister.org/news/learn2quote.html>
<http://cfaj.freeshell.org/google/ (taming google)
<http://members.fortunecity.com/nnqweb/ (newusers)

--
[mail]: Chuck F (cbfalconer at maineline dot net)
[page]: <http://cbfalconer.home.att.net>
Try the download section.
Oct 24 '08 #15

"Abubakar" <ab*******@gmail.com> wrote in message news:
>well he is saying that using the *native* read and write to do the
same task using file descriptors is much faster than the fwrite etc.
He says using the FILE * that is used in case of the fopen/fwrite
buffers data and is not as fast and predictable as read write are.
He's right. fwrite() will call the native write functions, which will call
device drivers and the like. As you go down through the levels closer to the
actual hardware, there is more potential for cutting out unnecessary
operations and making things faster. However, that holds only if you know
what you are doing (or maybe if the implementation is very slack).
When you move to a new machine it is likely that the optimisations will
become sub-optimal, or the code might not work at all.

Nowadays it is very seldom worth the extra effort and loss of portability in
not calling fwrite(). However that wasn't always true. General-purpose PCs
used to be slower by a factor of 1000 than modern machines, and then you had
to squeeze every last drop of performance out of the machine to make your
games run fast enough.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm
Oct 25 '08 #16

"Malcolm McLean" <re*******@btinternet.com> wrote in message
news:aI******************************@bt.com...
>
"Abubakar" <ab*******@gmail.com> wrote in message news:
>>well he is saying that using the *native* read and write to do the
same task using file descriptors is much faster than the fwrite etc.
He says using the FILE * that is used in case of the fopen/fwrite
buffers data and is not as fast and predictable as read write are.
Nowadays it is very seldom worth the extra effort and loss of portability
in not calling fwrite(). However that wasn't always true. General-purpose
PCs used to be slower by a factor of 1000 than modern machines, and then
you had to squeeze every last drop of performance out of the machine to
make your games run fast enough.
But sometimes modern machines are being asked to do 1000 times more work.
Performance can still be an issue.

--
Bartc

Oct 25 '08 #17

"Bartc" <bc@freeuk.com> wrote in message
But sometimes modern machines are being asked to do 1000 times more work.
Performance can still be an issue.
But disk IO is less likely to be the bottleneck. My programs take up to two
weeks to run on as many processors as I can lay my hands on, however they
only read and write a few kilobytes of data.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Oct 25 '08 #18

Malcolm McLean wrote, On 25/10/08 16:46:
>
"Bartc" <bc@freeuk.com> wrote in message
>But sometimes modern machines are being asked to do 1000 times more
work. Performance can still be an issue.
But disk IO is less likely to be the bottleneck.
Except when it is. You know absolutely nothing about the type of program
this might be used for and some programs are definitely disk bound even
with the fastest disk sub-systems.
My programs take up to
two weeks to run on as many processors as I can lay my hands on, however
they only read and write a few kilobytes of data.
So yours are not IO bound, that says nothing about the situations the OP
is concerned with.
--
Flash Gordon
If spamming me sent it to sm**@spam.causeway.com
If emailing me use my reply-to address
See the comp.lang.c Wiki hosted by me at http://clc-wiki.net/
Oct 25 '08 #19

"Malcolm McLean" <re*******@btinternet.com> writes:
"Bartc" <bc@freeuk.com> wrote in message
>But sometimes modern machines are being asked to do 1000 times more
work. Performance can still be an issue.

But disk IO is less likely to be the bottleneck. My programs take up
to two weeks to run on as many processors as I can lay my hands on,
however they only read and write a few kilobytes of data.
If processors increase in speed at a faster rate than
I/O bandwidth does, which happens to be the case, then
I/O can do nothing apart from become more of a bottleneck.
If you, like me, run /embarrassingly parallel/ code,
then more times nothing is nothing, but we're in a very
fortunate minority.

Phil
--
The fact that a believer is happier than a sceptic is no more to the
point than the fact that a drunken man is happier than a sober one.
The happiness of credulity is a cheap and dangerous quality.
-- George Bernard Shaw (1856-1950), Preface to Androcles and the Lion
Oct 25 '08 #20

"Phil Carmody" <th*****************@yahoo.co.uk> wrote in message
"Malcolm McLean" <re*******@btinternet.com> writes:
>"Bartc" <bc@freeuk.com> wrote in message
>>But sometimes modern machines are being asked to do 1000 times more
work. Performance can still be an issue.

But disk IO is less likely to be the bottleneck. My programs take up
to two weeks to run on as many processors as I can lay my hands on,
however they only read and write a few kilobytes of data.

If processors increase in speed at a faster rate than
I/O bandwidth does, which happens to be the case, then
I/O can do nothing apart from become more of a bottleneck.
If you, like me, run /embarrassingly parallel/ code,
then more times nothing is nothing, but we're in a very
fortunate minority.
However data has got to mean something. 1000 times more processing power
doesn't necessarily mean 1000 times more data. For instance a database with
the address of every taxpayer in the country would comfortably sit on my PC
hard drive. However the cost of collecting and checking that data would be
several million pounds. The limit is the data itself, not the machine power
needed to process it.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Oct 25 '08 #21

"Malcolm McLean" <re*******@btinternet.com> writes:
"Phil Carmody" <th*****************@yahoo.co.uk> wrote in message
>"Malcolm McLean" <re*******@btinternet.com> writes:
>>"Bartc" <bc@freeuk.com> wrote in message
But sometimes modern machines are being asked to do 1000 times more
work. Performance can still be an issue.

But disk IO is less likely to be the bottleneck. My programs take up
to two weeks to run on as many processors as I can lay my hands on,
however they only read and write a few kilobytes of data.

If processors increase in speed at a faster rate than
I/O bandwidth does, which happens to be the case, then
I/O can do nothing apart from become more of a bottleneck.
If you, like me, run /embarrassingly parallel/ code,
then more times nothing is nothing, but we're in a very
fortunate minority.
However data has got to mean something. 1000 times more processing
power doesn't necessarily mean 1000 times more data. For instance a
database with the address of every taxpayer in the country would
comfortably sit on my PC hard drive. However the cost of collecting
and checking that data would be several million pounds. The limit is
the data itself, not the machine power needed to process it.
So when you said "disk IO is less likely to be the bottleneck"
you were really trying to say "form-filling and paperwork is
the bottleneck"? Has one of a.f.c's droolers escaped?

Phil
--
The fact that a believer is happier than a sceptic is no more to the
point than the fact that a drunken man is happier than a sober one.
The happiness of credulity is a cheap and dangerous quality.
-- George Bernard Shaw (1856-1950), Preface to Androcles and the Lion
Oct 25 '08 #22

Malcolm McLean wrote, On 25/10/08 21:56:
"Phil Carmody" <th*****************@yahoo.co.uk> wrote in message
>"Malcolm McLean" <re*******@btinternet.com> writes:
>>"Bartc" <bc@freeuk.com> wrote in message
But sometimes modern machines are being asked to do 1000 times more
work. Performance can still be an issue.

But disk IO is less likely to be the bottleneck. My programs take up
to two weeks to run on as many processors as I can lay my hands on,
however they only read and write a few kilobytes of data.

If processors increase in speed at a faster rate than
I/O bandwidth does, which happens to be the case, then
I/O can do nothing apart from become more of a bottleneck.
If you, like me, run /embarrassingly parallel/ code,
then more times nothing is nothing, but we're in a very
fortunate minority.
However data has got to mean something. 1000 times more processing power
doesn't necessarily mean 1000 times more data. For instance a database
with the address of every taxpayer in the country would comfortably sit
on my PC hard drive. However the cost of collecting and checking that
data would be several million pounds. The limit is the data itself, not
the machine power needed to process it.
I can tell you with 100% certainty that there *are* applications that
are running against very high end storage devices on high end servers
configured by people who really do know what they are doing where the
applications *are* I/O bound. I know because I have been sitting there
monitoring server performance seeing the processors bone-idle, the
memory mostly being used to cache data, and the I/O subsystems running
flat out. During certain processes (not performed very often) the
servers can be like this for a couple of days during which all users
have to be locked out of the system. Other tasks are scheduled as
overnight jobs so as not to kill the server performance for a few hours
during the day. Oh, and the code is mostly written in C although in
certain areas extensions and/or other languages are used.
--
Flash Gordon
If spamming me sent it to sm**@spam.causeway.com
If emailing me use my reply-to address
See the comp.lang.c Wiki hosted by me at http://clc-wiki.net/
Oct 25 '08 #23

On 25 Oct, 10:51, "Malcolm McLean" <regniz...@btinternet.com> wrote:
"Abubakar" <abubak...@gmail.com> wrote in message news:
well he is saying that using the *native* read and write to do the
same task using file descriptors is much faster than the fwrite etc.
He says using the FILE * that is used in case of the fopen/fwrite
buffers data and is not as fast and predictable as read write are.

He's right.
No, he's not. *Sometimes* it is faster to use write() instead
of fwrite(). Other times it's not. Here's the result of a
quick test:

$ time ./a.out 500000 > /dev/null # Use putc
real 0m0.015s
user 0m0.012s
sys 0m0.002s

$ time ./a.out 500000 USE_WRITE > /dev/null
real 0m0.891s
user 0m0.329s
sys 0m0.558s
(The code used to generate this is below.)

The interpretation of this is simple: using putc
improves performance precisely because of the buffering.
There is no doubt that using write() will often improve
the performance, but NOT if you are making lots of
small writes. When in doubt (and if there is an
actual, observed performance problem) you must
profile the process to determine the bottleneck.
If the problem is IO performance due to fwrite(),
it might be worthwhile to use write() instead.
Might be. Repeated again for emphasis. *Might* be.
If you invest the time re-writing the code to
use write(), you must verify that the performance
gain (or loss) is what you want. You may find
that you have improved throughput by .00001%. Or
maybe you have reduced it by 20%. There are many
managers who will take the 20% performance hit and
call it an improvement and give you a bonus. Take
the money, but find a more competent manager.

Here's the code used to generate the above timings:

/* Unix specific code */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
use_write( int c )
{
int status = c;
char C = c;
if( write( STDOUT_FILENO, &C, 1 ) != 1 )
status = EOF;

return status;
}

int
use_putc( int c )
{
return putchar( c );
}

int
main( int argc, char **argv )
{
unsigned count;
int (*f)(int);
f = ( argc > 2 ) ? use_write : use_putc;
count = ( argc > 1 ) ? strtoul( argv[ 1 ], NULL, 10 ) : BUFSIZ;

for( ; count; count-- )
if( f( 'y' ) != 'y' )
break;

return count ? EXIT_FAILURE : EXIT_SUCCESS;
}
Oct 26 '08 #24

On Oct 24, 9:46 am, Abubakar <abubak...@gmail.com> wrote:
well he is saying that using the *native* read and write to do the
same task using file descriptors is much faster than the fwrite etc.
He says using the FILE * that is used in case of the fopen/fwrite
buffers data and is not as fast and predictable as read write are.
Until you have a customer who saves files on a network share and
curses the idiot programmer who used read and write instead of using
the buffered file i/o provided by the Standard C library.

Oct 27 '08 #25

On Oct 26, 9:49 am, William Pursell <bill.purs...@gmail.com> wrote:
[William's post #24 above, including the timing code, quoted in full - snipped]
Hey, thanks for the code. And thanks to all the guys for discussing; it
was a lot of good information.
Nov 5 '08 #26
