
Historical question, why fwrite and not binary specifier for fprintf?

In the beginning (Kernighan & Ritchie 1978) there was fprintf, and unix
write, but no fwrite. That is, no portable C method for writing binary
data, only system calls which were OS specific. At C89 fwrite/fread
were added to the C standard to allow portable binary IO to files. I
wonder though why the choice was made to extend the unix function
write() into a standard C function rather than to extend the existing
standard C function fprintf to allow binary operations?

Consider a bit of code like this (error checking and other details omitted):

int ival;
double dval;
char string[10]="not full\0\0";
FILE *fp;

fp = fopen("file.name","w");
(void) fprintf(fp,"%i%f%s",ival,dval,string);

It always seemed to me that the natural extension, if the data needed to
be written in binary, would have been either this (which would have
allowed type checking):

(void) fprintf(fp,"%bi%bf%bs",ival,dval,string);

or perhaps just this (which would not have allowed type checking):

(void) fprintf(fp,"%b%b%b",ival,dval,string);

(Clearly there are some issues in deciding whether to write only the string
"not full" or the entire buffer; that could have been handled in the %bs
form using a field width, for instance.)
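
(For comparison, the fwrite-based form that was actually standardized would
look roughly like this -- a sketch with error checking still omitted, and
note the "wb" mode for binary output:)

fp = fopen("file.name","wb");
(void) fwrite(&ival, sizeof ival, 1, fp);
(void) fwrite(&dval, sizeof dval, 1, fp);
(void) fwrite(string, sizeof string, 1, fp); /* the whole 10-byte buffer */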

Anyway, in the real world fwrite was chosen. For those of you who were
around for this decision, was extending fprintf considered instead of,
or in addition to fwrite? What was the deciding factor for fwrite?
I'm guessing that it was that everybody had been using write() for years
and it was thought that fwrite was a more natural extension, but that is
just a guess.

Thanks,

David Mathog
Nov 27 '07 #1
11 Replies


David Mathog wrote:
> In the beginning (Kernighan & Ritchie 1978) there was fprintf, and
> unix write, but no fwrite. That is, no portable C method for writing
> binary data, only system calls which were OS specific. At C89
> fwrite/fread were added to the C standard to allow portable binary IO
> to files. I wonder though why the choice was made to extend the unix
> function write() into a standard C function rather than to extend the
> existing standard C function fprintf to allow binary operations?

Perhaps because the UNIX functions were well known? Perhaps for reasons
of efficiency?

> Consider a bit of code like this (error checking and other details
> omitted):
>
> int ival;
> double dval;
> char string[10]="not full\0\0";
> FILE *fp;
>
> fp = fopen("file.name","w");
> (void) fprintf(fp,"%i%f%s",ival,dval,string);
>
> It always seemed to me that the natural extension, if the data needed
> to be written in binary, would have been either this (which would have
> allowed type checking):
>
> (void) fprintf(fp,"%bi%bf%bs",ival,dval,string);
>
> or perhaps just this (which would not have allowed type checking):
>
> (void) fprintf(fp,"%b%b%b",ival,dval,string);
>
> (Clearly there are some issues in deciding whether to write only the
> string "not full" or the entire buffer; that could have been handled
> in the %bs form using a field width, for instance.)
>
> Anyway, in the real world fwrite was chosen. For those of you who
> were around for this decision, was extending fprintf considered
> instead of, or in addition to fwrite? What was the deciding factor
> for fwrite? I'm guessing that it was that everybody had been using
> write() for years and it was thought that fwrite was a more natural
> extension, but that is just a guess.

Personally I'm glad that direct I/O has separate functions for it. The
*printf()/*scanf() interface is already quite a complicated, bloated
one.

It seems to me that their primary use is when conversion is necessary.
Otherwise a more direct interface should be preferable, at least for
efficiency, if not for anything else.
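
(A rough illustration of the contrast -- the two calls below store the same
value in very different ways:)

int n = 1000;
fprintf(fp, "%d", n);        /* converted to text: the four characters "1000" */
fwrite(&n, sizeof n, 1, fp); /* the int's bytes copied as-is, no conversion */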

Nov 27 '07 #2

David Mathog <ma****@caltech.edu> writes:
In the beginning (Kernighan & Ritchie 1978) there was fprintf, and
unix write, but no fwrite. That is, no portable C method for writing
binary data, only system calls which were OS specific. At C89
fwrite/fread
were added to the C standard to allow portable binary IO to files. I
wonder though why the choice was made to extend the unix function
write() into a standard C function rather than to extend the existing
standard C function fprintf to allow binary operations?

Consider a bit of code like this (error checking and other details omitted):

int ival;
double dval;
char string[10]="not full\0\0";
FILE *fp;

fp = fopen("file.name","w");
(void) fprintf(fp,"%i%f%s",ival,dval,string);

It always seemed to me that the natural extension, if the data needed
to be written in binary, would have been either this (which would have
allowed type checking):

(void) fprintf(fp,"%bi%bf%bs",ival,dval,string);

or perhaps just this (which would not have allowed type checking):

(void) fprintf(fp,"%b%b%b",ival,dval,string);
[...]

Neither form really allows type checking, unless the compiler chooses
(as gcc does, for example) to check the arguments against the format
string and issue warnings for mismatches. Such checking is not
possible if the format string is not a string literal.

Your ``string'' argument is passed as a pointer to the first character
of the string (&string[0]). fprintf would have no way to know how
many characters to print -- unless it stops at the first '\0', but
that's likely to be inappropriate for a binary file.
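
(fwrite() avoids that ambiguity by making the caller state the count
explicitly, e.g.:)

fwrite(string, 1, sizeof string, fp); /* all 10 bytes, embedded '\0's included */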

The whole purpose of fprintf is to format data into text (the final
'f' stands for format). Binary output specifically doesn't do any
formatting; it just dumps the raw bytes. Having to invoke fprintf,
with all its internal machinery to parse the format string, when you
merely want to dump raw bytes doesn't seem like a good thing.

fwrite() does just what it needs to do, without all that conceptual
overhead.
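
(Its declaration reflects that simplicity -- just a buffer, an element size,
an element count, and a stream:)

size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);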

Speaking of historical questions, were fread() and fwrite() invented
by the C89 committee, or were they based on existing practice? I
suspect the latter, but I'm not sure.

--
Keith Thompson (The_Other_Keith) <ks***@mib.org>
Looking for software development work in the San Diego area.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Nov 27 '07 #3

In article <87************@kvetch.smov.org>,
Keith Thompson <ks***@mib.org> wrote:
> Speaking of historical questions, were fread() and fwrite() invented
> by the C89 committee, or were they based on existing practice? I
> suspect the latter, but I'm not sure.

They were existing practice.

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Nov 27 '07 #4

In article <fi**********@naig.caltech.edu>,
David Mathog <ma****@caltech.edu> wrote:
> In the beginning (Kernighan & Ritchie 1978) there was fprintf, and unix
> write, but no fwrite. That is, no portable C method for writing binary
> data, only system calls which were OS specific. At C89 fwrite/fread
> were added to the C standard to allow portable binary IO to files.

No. fwrite() and friends were present in the standard i/o library
introduced in 7th edition unix in 1979.

> I wonder though why the choice was made to extend the unix function
> write() into a standard C function rather than to extend the existing
> standard C function fprintf to allow binary operations?

The standard i/o library provides two things: efficient buffering and
formatted i/o. getc(), fwrite(), etc provide buffering. printf() etc
provide formatting on top of that. It makes no sense for you to have
to use the formatting mechanism (and its overhead) just for buffered
i/o, whether text or binary.
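
(To make that layering concrete -- a rough sketch -- both of the calls below
go through stdio's buffer; only the second also pays to parse a format string
first:)

int c = 0x41;
putc(c, fp);          /* buffered output of one byte, no format parsing */
fprintf(fp, "%c", c); /* the same byte reaches the buffer, after parsing "%c" */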

The standard i/o library seems to me to be an excellent balance of
simplicity and functionality, unix and C at their best.

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Nov 27 '07 #5

Keith Thompson wrote:
David Mathog <ma****@caltech.edu> writes:
>In the beginning (Kernighan & Ritchie 1978) there was fprintf, and
unix write, but no fwrite. That is, no portable C method for writing
binary data, only system calls which were OS specific. At C89
fwrite/fread
were added to the C standard to allow portable binary IO to files. I
wonder though why the choice was made to extend the unix function
write() into a standard C function rather than to extend the existing
standard C function fprintf to allow binary operations?
[...]

...

> Speaking of historical questions, were fread() and fwrite() invented
> by the C89 committee, or were they based on existing practice? I
> suspect the latter, but I'm not sure.

I believe they first appeared in public as part of the 7th Edition UNIX
standard library in January 1979.
Nov 28 '07 #6

Jack Klein wrote:
>
.... snip ...
>
> *printf() and *scanf() are primarily designed to convert between
> human readable text and binary format. Using them when you want
> no such conversion does not even seem intuitive.

However, especially in the embedded field, they are often a
monstrous waste, and also an easy way to inject errors. Simple,
non-variadic functions to output a specific type with possibly a
field width specifier (see Pascal) are much more efficient.

The problem is that those functions, if linked, need to include all
the options, whether used or not. They are just too big and
all-encompassing. It makes much less difference on some OS where
the entire function is in one shared library file. But with static
linking the problem reappears everywhere.
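
(A hypothetical sketch of the kind of fixed-purpose, non-variadic routine
meant above: an integer writer with a field width that pulls in none of
printf's conversion machinery. The name put_int and its interface are made
up purely for illustration.)

#include <stdio.h>

static void put_int(FILE *fp, long value, int width)
{
    char buf[32];                /* more than enough for any long in decimal */
    int i = (int) sizeof buf;
    int len;
    unsigned long u;

    u = (value < 0) ? -(unsigned long) value : (unsigned long) value;
    do {                         /* emit digits from the right */
        buf[--i] = (char) ('0' + (int) (u % 10));
        u /= 10;
    } while (u != 0);
    if (value < 0)
        buf[--i] = '-';

    len = (int) sizeof buf - i;
    while (len++ < width)        /* left-pad to the requested field width */
        putc(' ', fp);
    fwrite(buf + i, 1, sizeof buf - i, fp);
}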

--
Chuck F (cbfalconer at maineline dot net)
<http://cbfalconer.home.att.net>
Try the download section.

--
Posted via a free Usenet account from http://www.teranews.com

Nov 28 '07 #7

David Mathog wrote On 11/27/07 12:18,:
> In the beginning (Kernighan & Ritchie 1978) there was fprintf, and unix
> write, but no fwrite. That is, no portable C method for writing binary
> data, only system calls which were OS specific.

There was putc(), which is portable. In fact, the
various Standards for C describe all the output operations
as operating "as if" by repeated putc() calls.

> At C89 fwrite/fread
> were added to the C standard to allow portable binary IO to files.

Although they are not mentioned in K&R, they are both
older than the ANSI Standard. ANSI did not "add" them; it
codified existing practice.

> I wonder though why the choice was made to extend the unix function
> write() into a standard C function rather than to extend the existing
> standard C function fprintf to allow binary operations?

... but fprintf() *can* generate binary output!

FILE *stream = fopen("data.bin", "wb");
double d = 3.14159;
char *p;
for (p = (char *)&d; p < (char *)(&d + 1); ++p)
    fprintf(stream, "%c", *p);

(Error-checking omitted for brevity.) putc() would be
a better choice, but fprintf() *can* do it, if desired.

> Consider a bit of code like this (error checking and other details omitted):
>
> int ival;
> double dval;
> char string[10]="not full\0\0";
> FILE *fp;
>
> fp = fopen("file.name","w");
> (void) fprintf(fp,"%i%f%s",ival,dval,string);
>
> It always seemed to me that the natural extension, if the data needed to
> be written in binary, would have been either this (which would have
> allowed type checking):
>
> (void) fprintf(fp,"%bi%bf%bs",ival,dval,string);

As Charlie Brown said, "Bleah!" Note that this would
offer no way to output a promotable type without performing
the promotion and a subsequent demotion (I'm not worried
about the speed, but about potential changes in the data,
things like a minus zero float losing its minus sign in
the conversion to double and back). Writing out a struct
would be clumsy in the extreme, as you'd need to enumerate
every element, one by one. I can't see any way to write
a bit-field with this scheme, nor any way to write a union
without foreknowledge of which element was current (short
of repeated "%c" as above -- which requires no extensions).
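
(By contrast, fwrite() handles an aggregate in a single call -- for example,
with a hypothetical record type:)

struct rec { int id; double value; };
struct rec r = { 7, 2.5 };
fwrite(&r, sizeof r, 1, fp); /* the whole struct at once, padding bytes and all */
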
> [...] For those of you who were
> around for this decision, was extending fprintf considered instead of,
> or in addition to fwrite? What was the deciding factor for fwrite?

I wasn't there and don't know, and the Rationale offers
no hints. But the idea of trying to use fprintf() for this
strikes me as tightening screws with hammers: It's the wrong
interface, that's all. Besides, fread() and fwrite() already
existed; they were not "creatures of the committee" in the
way that <stdlib.h> was, for example.

--
Er*********@sun.com

Nov 28 '07 #8

In article <1196265404.5351@news1nwk>,
Eric Sosman <Er*********@Sun.COM> wrote:
>> In the beginning (Kernighan & Ritchie 1978) there was fprintf, and unix
>> write, but no fwrite. That is, no portable C method for writing binary
>> data, only system calls which were OS specific.
>
> There was putc(), which is portable.

To be pedantic, in 1978 there was indeed putc(), but it wasn't the
putc() we know today. The then-current sixth edition unix used a
pointer to a 518-byte "struct buf", which the programmer had to
create, instead of the opaque FILE struct introduced in seventh
edition (1979) along with most of the stdio functions we use today.

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Nov 28 '07 #9

Richard Tobin wrote On 11/28/07 13:00,:
> In article <1196265404.5351@news1nwk>,
> Eric Sosman <Er*********@Sun.COM> wrote:

>>> In the beginning (Kernighan & Ritchie 1978) there was fprintf, and unix
>>> write, but no fwrite. That is, no portable C method for writing binary
>>> data, only system calls which were OS specific.
>>
>> There was putc(), which is portable.
>
> To be pedantic, in 1978 there was indeed putc(), but it wasn't the
> putc() we know today. The then-current sixth edition unix used a
> pointer to a 518-byte "struct buf", which the programmer had to
> create, instead of the opaque FILE struct introduced in seventh
> edition (1979) along with most of the stdio functions we use today.

To be pedantic back at'cha: putc() and FILE and fopen()
and so on are described in Chapter 7 of "The C Programming
Language" by Brian W. Kernighan and Dennis M. Ritchie, ISBN
0-13-110163-3. The copyright date is 1978, not 1979 or later,
and the putc() description is on page 152.

Perhaps Unix lagged C by a year or so?

--
Er*********@sun.com
Nov 28 '07 #10

In article <1196277214.437935@news1nwk>,
Eric Sosman <Er*********@Sun.COM> wrote:
> To be pedantic back at'cha: putc() and FILE and fopen()
> and so on are described in Chapter 7 of "The C Programming
> Language" by Brian W. Kernighan and Dennis M. Ritchie, ISBN
> 0-13-110163-3. The copyright date is 1978, not 1979 or later,
> and the putc() description is on page 152.
>
> Perhaps Unix lagged C by a year or so?

I was relying on the date of the unix manuals. I don't think there
was anything except unix to run C on back then. Perhaps the updated
library was available before the new version of unix, or
perhaps the book was written before the corresponding software was
generally available. (I don't seem to have my K&R1 to hand to see if
it says anything about it.)

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Nov 28 '07 #11

Richard Tobin wrote:
> In article <1196277214.437935@news1nwk>,
> Eric Sosman <Er*********@Sun.COM> wrote:
>> To be pedantic back at'cha: putc() and FILE and fopen()
>> and so on are described in Chapter 7 of "The C Programming
>> Language" by Brian W. Kernighan and Dennis M. Ritchie, ISBN
>> 0-13-110163-3. The copyright date is 1978, not 1979 or later,
>> and the putc() description is on page 152.
>>
>> Perhaps Unix lagged C by a year or so?
>
> I was relying on the date of the unix manuals. I don't think there
> was anything except unix to run C on back then. Perhaps the updated
> library was available before the new version of unix, or
> perhaps the book was written before the corresponding software was
> generally available. (I don't seem to have my K&R1 to hand to see if
> it says anything about it.)

UNIX v7 was released in January 1979; I guess that K&R1 was written
based on what was being put together for UNIX v7. Given the difference
between copyright and release dates, it's quite possible that UNIX v7
was finished within Bell Labs before K&R1 was finished.

UNIX v6 had primitive versions of putc() and fopen() which do not match
the definitions in K&R1 or UNIX v7. The Standard I/O library as we know
it today is based on that in UNIX v7 as described in K&R1.
Nov 29 '07 #12
