Malcolm wrote:
"Mantorok Redgormor" <ne*****@tokyo.com> wrote
What is fully buffered? I was looking at some archived
posts and some experts were using fully buffered as
though it was different from line based buffering.
Isn't line based buffering synonymous with fully buffered?
You don't need to worry about this. In the olden days there were low-level
(unbuffered) and high-level (buffered) IO functions. You could tweak the
low-level functions to squeeze out an extra bit of performance. In these
modern times it is no longer easy to do this, and you may assume that the
stdio routines manage data as efficiently as humanly possible.
There is probably a difference between "line based buffering" and "full
buffering", but I have no motivation to find out what it is.
Since you don't know what the difference is, it
seems a little rash to tell someone else they needn't
worry about it ... "Here's my advice on something I
don't comprehend" is not a preamble that inspires a
lot of confidence.
For the record, there is a difference and it is
sometimes important. The C Standard describes three
kinds of buffering (in 7.19.3, paragraph 3). You can
read the section for yourself (strongly advised, if
you're going to dish out advice about it), or you can
accept this paraphrase:
An *unbuffered* stream deals in single characters.
Output characters are delivered to the destination
as soon as possible, and characters generated at
an input device are made available to an input
stream as soon as possible.
A *fully-buffered* stream deals in "large" blocks
of characters. Output characters are accumulated
in a buffer until it fills, and are then delivered
to the destination all at once. Input characters
are similarly accumulated until "enough" are ready,
and are then made available to an input stream all
at once.
A *line-buffered* stream deals in newline-terminated
lines. Output characters are accumulated in a buffer
until a newline is written, when they're all sent to
the destination at once. Input characters are gathered
until an end-of-line occurs, at which point all the
line's characters are made available to the input
stream.
Since the Standard doesn't try to get into the finicky
details of the host environment's I/O capabilities, the
actual language is filled with phrases like "intended to"
and "implementation-defined" and "as soon as possible."
Still, the intent is clear and an implementor will probably
expend some effort to do something that makes sense for the
platform in light of the Standard's stated intentions.
Lastly, because of that hedged wording, it seems that
something like a standard hello-world program is not strictly
conforming unless an explicit call to setvbuf() is made with
_IOFBF. Is my conclusion correct?
No. I/O can fail, so strictly speaking you should check the return value of
printf() to see whether the call succeeded. No one ever does this. I/O can
still fail if you set the buffer with setvbuf(). No one ever does that
unless doing very specialised, time-critical I/O.
Now that you understand C's three buffering modes, you
may be able to see why they might be useful in other than
"specialised" and "time critical" situations. Notice that
the Standard specifies (in paragraph 7 of the same section)
the buffering modes of the three standard streams; I think
we can assume that if the Standard takes the trouble to
specify the modes, the modes are far from "specialised."
--
Er*********@sun.com