Bytes IT Community

Files & dirs: historical reasons?

I was having an interesting discussion about the ANSI C and some
``weird inconsistencies'', or at least what at first sight can be seen
as an imbalance. I hope someone can satisfy my curiosity.

The standard provides means to open files associating a /path/ with a
/stream/. The standard does not provide any means to handle
/directories/. There are three streams defined by the standard, stdin
stdout stderr. Am I right?

Now, what is the reason for /not/ defining the directory counterparts
for files (something like fopen, with a directory dopen)?

We know that there exist platforms (e.g. on firmwares) where we can
hardly use ``files'' (with /paths/) while we can play with the three
standard std* /streams/. Paths of files are highly system dependent
just like directories. Files on the other hand are more likely to exist
on many platforms, or at least it seems to us.

Is there any reason? Is the standard going to change? Are directories
treated as normal files?

I know it might be a stupid question, but I'm curious about the history
behind choices :)

Thanks!

--
Sensei <senseiwa@Apple's mail>

Research (n.): a discovery already published by a Chinese guy one month
before you, copying a Russian who did it in the 60s.

Dec 18 '06 #1
46 Replies


Sensei said:
I was having an interesting discussion about the ANSI C and some
``weird inconsistencies'', or at least what at first sight can be seen
as an imbalance. I hope someone can satisfy my curiosity.

The standard provides means to open files associating a /path/ with a
/stream/. The standard does not provide any means to handle
/directories/. There are three streams defined by the standard, stdin
stdout stderr. Am I right?
Almost. The only time the Standard uses the word "path" is in a different
context completely: "Other paths to program termination, such as calling
the abort function, need not close all files properly", which is obviously
not what you are talking about! Otherwise, yes, you are correct.

Now, what is the reason for /not/ defining the directory counterparts
for files (something like fopen, with a directory dopen)?
Not all file systems have the concept of "directory", and this includes some
prominent systems such as VM/CMS and OS390 (aka MVS). CP/M had no concept
of "directory" either. Nor did MS-DOS 1.0. Undoubtedly there are others,
too.

<snip>
[...] Is the standard going to change?
It's very unlikely.
Are directories treated as normal files?
I'm pretty sure Unix thinks of directories as merely a weird kind of file.
For example, IIRC you can fopen them. Windows sees them as being separate
beasts altogether (and you can't fopen them). And, as I said, some systems
don't have the concept at all.
I know it might be a stupid question,
There are such things as stupid questions, but I don't think of this as
being one of them.

<snip>

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Dec 18 '06 #2

In article <Ve******************************@bt.com>,
Richard Heathfield <rj*@see.sig.invalid> wrote:
>I'm pretty sure Unix thinks of directories as merely a weird kind of file.
For example, IIRC you can fopen them.
That depends on the Unix version. It was traditionally true in
System V Unix, with its 14-character filename components and 2-byte inode
numbers, but as filesystems became more advanced and directory files
became more complex, it wasn't uncommon for Unices to prevent users
from fopen'ing directories.
--
Okay, buzzwords only. Two syllables, tops. -- Laurie Anderson
Dec 18 '06 #3


Richard Heathfield wrote:
Sensei said:
I was having an interesting discussion about the ANSI C and some
``weird inconsistencies'', or at least what at first sight can be seen
as an imbalance. I hope someone can satisfy my curiosity.

The standard provides means to open files associating a /path/ with a
/stream/. The standard does not provide any means to handle
/directories/. There are three streams defined by the standard, stdin
stdout stderr. Am I right?

Almost. The only time the Standard uses the word "path" is in a different
context completely: "Other paths to program termination, such as calling
the abort function, need not close all files properly", which is obviously
not what you are talking about! Otherwise, yes, you are correct.

Now, what is the reason for /not/ defining the directory counterparts
for files (something like fopen, with a directory dopen)?

Not all file systems have the concept of "directory", and this includes some
prominent systems such as VM/CMS and OS390 (aka MVS). CP/M had no concept
of "directory" either. Nor did MS-DOS 1.0. Undoubtedly there are others,
too.
Yeah, the PDP-11 I used with an RT-11 OS had no subdirectories.
All the files were in one place, had to be unique names and limited
to 8 characters (I think). It was a real PITA.

OTOH, you never had to recurse subdirectories.
>
<snip>
[...] Is the standard going to change?

It's very unlikely.
Are directories treated as normal files?

I'm pretty sure Unix thinks of directories as merely a weird kind of file.
For example, IIRC you can fopen them. Windows sees them as being separate
beasts altogether (and you can't fopen them). And, as I said, some systems
don't have the concept at all.
I know it might be a stupid question,

There are such things as stupid questions, but I don't think of this as
being one of them.

<snip>

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Dec 18 '06 #4


Richard Heathfield wrote:
Sensei said:
[snip]
>
Now, what is the reason for /not/ defining the directory counterparts
for files (something like fopen, with a directory dopen)?

Not all file systems have the concept of "directory", and this includes some
prominent systems such as VM/CMS and OS390 (aka MVS). CP/M had no concept
of "directory" either. Nor did MS-DOS 1.0. Undoubtedly there are others,
too.
HP MPE and Encore MPX are two that I worked on personally. IIRC, the
file naming syntax was something like group.account.filename, and none
of the three elements could exceed 8 characters.

C is a product of the early '70s and this is one area where it shows.

Dec 18 '06 #5

Sensei wrote:
There are three streams defined by the standard, stdin
stdout stderr. Am I right?
stdin, stdout and stderr are macros
which expand to expressions of type FILE *.
The proper names of the streams are "standard input",
"standard output", and "standard error".
However, the streams are frequently referred to by their
associated macros.

--
pete
Dec 18 '06 #6

John Bode wrote:
>
Richard Heathfield wrote:
Sensei said:
[...]
Now, what is the reason for /not/ defining the directory counterparts
for files (something like fopen, with a directory dopen)?
Not all file systems have the concept of "directory", and this includes some
prominent systems such as VM/CMS and OS390 (aka MVS). CP/M had no concept
of "directory" either. Nor did MS-DOS 1.0. Undoubtedly there are others,
too.

HP MPE and Encore MPX are two that I worked on personally. IIRC, the
file naming syntax was something like group.account.filename, and none
of the three elements could exceed 8 characters.

C is a product of the early '70s and this is one area where it shows.
On the other hand, why should the C standard care about the layout
of the filesystem on which it's running? Sure, I suppose one could
add some standard library functions (which, I suppose, would fail on
systems without "directories" and the like), but why bother? There
is already a POSIX standard which you can stick with if necessary.

Sometimes, O/S-related things don't necessarily belong in the language
specification.

--
+-------------------------+--------------------+-----------------------+
| Kenneth J. Brody | www.hvcomputer.com | #include |
| kenbrody/at\spamcop.net | www.fptech.com | <std_disclaimer.h> |
+-------------------------+--------------------+-----------------------+
Don't e-mail me at: <mailto:Th*************@gmail.com>
Dec 19 '06 #7

pete wrote:
>
Sensei wrote:
There are three streams defined by the standard, stdin
stdout stderr. Am I right?

stdin, stdout and stderr, are macros
which expand to expressions of type FILE *.
The proper names of the streams are "standard input",
"standard output", and "standard error".
However, the streams are frequently referred to by their
associated macros.
I don't believe that the standard requires that they be macros. They
simply need to be of type FILE*.

(I'm going by memory here, as I believe someone else quoted C&V about
this in the not-too-distant past.)

That said, all implementations I've bothered looking at were, as I
recall, implemented as macros.

--
+-------------------------+--------------------+-----------------------+
| Kenneth J. Brody | www.hvcomputer.com | #include |
| kenbrody/at\spamcop.net | www.fptech.com | <std_disclaimer.h> |
+-------------------------+--------------------+-----------------------+
Don't e-mail me at: <mailto:Th*************@gmail.com>
Dec 19 '06 #8

Thanks to everybody! I can now see some historical reasons for this
choice; I knew there were some :)
--
Sensei <senseiwa@Apple's mail>

Research (n.): a discovery already published by a Chinese guy one month
before you, copying a Russian who did it in the 60s.

Dec 19 '06 #9


Sensei wrote:
Thanks to everybody! I can now see some historical reasons for this
choice; I knew there were some :)
--
Sensei <senseiwa@Apple's mail>

Research (n.): a discovery already published by a Chinese guy one month
before you, copying a Russian who did it in the 60s.
Not only historical reasons, but also forward-looking ones. Many OS
vendors have been rumbling for years about eliminating our concepts of
the file system, and replacing them with a real database. Others have
suggested doing things like file type conversions at the OS level, so
you could conceivably see things like this at some point in the future:
(just examples, future syntax would probably look completely
different!)

fopen("!/SELECT NEWEST FILE WHERE CREATOR='bobsmith@local' AND
TYPE='imagefile'");

or

fopen("/home/bobsmith/images/sunflower.jpg"); to open the jpeg file
fopen("/home/bobsmith/images/sunflower.jpg/?convert=PNG"); to have the
system automatically convert the image to PNG, and then open that file.

So, in the first example, there is nothing like a directory. In the
second example, it would be really hard to figure out if sunflower.jpg
is supposed to be treated as a file, or as a directory containing
everything that the system could give you. Some platforms could even
allow http:// addresses right in fopen if they wanted, I think.

So, it would be really handy to have functions for dealing with
directories, but there are potentially so many ways of getting at a
file that if the C standard were to decide on one particular way, it
would probably make it unlikely that anybody would bother to come up
with something better.

At least, that's my personal take on why you will probably never see
directory related functionality in the C standard.

Dec 19 '06 #10

Kenneth Brody wrote On 12/19/06 09:39,:
pete wrote:
>>Sensei wrote:

>>>There are three streams defined by the standard, stdin
stdout stderr. Am I right?

stdin, stdout and stderr, are macros
which expand to expressions of type FILE *.
The proper names of the streams are "standard input",
"standard output", and "standard error".
However, the streams are frequently referred to by their
associated macros.


I don't believe that the standard requires that they be macros.
You believe wrongly, infidel.

--
Er*********@sun.com
Dec 19 '06 #11

2006-12-19 <1166547542.735556@news1nwk>,
Eric Sosman wrote:
Kenneth Brody wrote On 12/19/06 09:39,:
>pete wrote:
>>>Sensei wrote:
There are three streams defined by the standard, stdin
stdout stderr. Am I right?

stdin, stdout and stderr, are macros
which expand to expressions of type FILE *.
The proper names of the streams are "standard input",
"standard output", and "standard error".
However, the streams are frequently referred to by their
associated macros.


I don't believe that the standard requires that they be macros.

You believe wrongly, infidel.
The standard requires some things that are utterly pointless. Requiring
that stdin/stdout/stderr are macros is one of these. Requiring that FILE
be a complete type is another.
Dec 19 '06 #12

Rg
Yep. And, just out of curiosity, there's this amusing piece of code in
the Linux stdio.h header:

/* Standard streams. */
extern struct _IO_FILE *stdin;   /* Standard input stream. */
extern struct _IO_FILE *stdout;  /* Standard output stream. */
extern struct _IO_FILE *stderr;  /* Standard error output stream. */
/* C89/C99 say they're macros. Make them happy. */
#define stdin stdin
#define stdout stdout
#define stderr stderr

On Dec 19, 2:59 pm, Eric Sosman <Eric.Sos...@sun.com> wrote:
Kenneth Brody wrote On 12/19/06 09:39,:
pete wrote:
>Sensei wrote:
>>There are three streams defined by the standard, stdin
stdout stderr. Am I right?
>stdin, stdout and stderr, are macros
which expand to expressions of type FILE *.
The proper names of the streams are "standard input",
"standard output", and "standard error".
However, the streams are frequently referred to by their
associated macros.
I don't believe that the standard requires that they be macros.
You believe wrongly, infidel.

--
Eric.Sos...@sun.com
Dec 19 '06 #13

Random832 wrote On 12/19/06 13:01,:
2006-12-19 <1166547542.735556@news1nwk>,
Eric Sosman wrote:
>>Kenneth Brody wrote On 12/19/06 09:39,:
>>>pete wrote:
Sensei wrote:

>There are three streams defined by the standard, stdin
>stdout stderr. Am I right?

stdin, stdout and stderr, are macros
which expand to expressions of type FILE *.
The proper names of the streams are "standard input",
"standard output", and "standard error".
However, the streams are frequently referred to by their
associated macros.
I don't believe that the standard requires that they be macros.

You believe wrongly, infidel.


The standard requires some things that are utterly pointless. Requiring
that stdin/stdout/stderr are macros is one of these. Requiring that FILE
be a complete type is another.
I'd agree with "pointless" but not with "utterly." For
example, #ifdef stdin is a portable way to test whether <stdio.h>
has or has not been included. I can't imagine much utility in
such a test, but my imagination is less than infinite.

However, the question was not about whether the Standard's
requirements are good or bad, but about what is required.

--
Er*********@sun.com
Dec 19 '06 #14

[comp.std.c added, followups set]

2006-12-19 <1166553308.238997@news1nwk>,
Eric Sosman wrote:
Random832 wrote On 12/19/06 13:01,:
>2006-12-19 <1166547542.735556@news1nwk>,
Eric Sosman wrote:
>>>Kenneth Brody wrote On 12/19/06 09:39,:
pete wrote:
>Sensei wrote:
>>There are three streams defined by the standard, stdin
>>stdout stderr. Am I right?
>
>stdin, stdout and stderr, are macros
>which expand to expressions of type FILE *.
>The proper names of the streams are "standard input",
>"standard output", and "standard error".
>However, the streams are frequently referred to by their
>associated macros.

I don't believe that the standard requires that they be macros.

You believe wrongly, infidel.

The standard requires some things that are utterly pointless. Requiring
that stdin/stdout/stderr are macros is one of these. Requiring that FILE
be a complete type is another.

I'd agree with "pointless" but not with "utterly." For
example, #ifdef stdin is a portable way to test whether <stdio.h>
has or has not been included. I can't imagine much utility in
such a test, but my imagination is less than infinite.
Well, if that was their reason, then they'd have done something like
"defines _STDIO_H" for all the headers. I think it was just a wording
flaw.
Dec 19 '06 #15

Kenneth Brody wrote:
>
pete wrote:

Sensei wrote:
There are three streams defined by the standard, stdin
stdout stderr. Am I right?
stdin, stdout and stderr, are macros
which expand to expressions of type FILE *.
The proper names of the streams are "standard input",
"standard output", and "standard error".
However, the streams are frequently referred to by their
associated macros.

I don't believe that the standard requires that they be macros. They
simply need to be of type FILE*.

(I'm going by memory here, as I believe someone else quoted C&V about
this in the not-too-distant past.)
I did.

Within this part of the ISO/IEC 9899:1990 standard:
7.9 Input/output <stdio.h>
7.9.1 Introduction
you should locate this phrase: "The macros are",
and then you should keep reading until
you come to the period at the end of the sentence.

--
pete
Dec 19 '06 #16

In article <11**********************@a3g2000cwd.googlegroups.com>, "Rg" <rg*****@gmail.com> writes:
Yep. And, just for curiosity, there's this amusing piece of code in
Linux stdio.h header:

/* Standard streams. */
extern struct _IO_FILE *stdin;   /* Standard input stream. */
extern struct _IO_FILE *stdout;  /* Standard output stream. */
extern struct _IO_FILE *stderr;  /* Standard error output stream. */
/* C89/C99 say they're macros. Make them happy. */
#define stdin stdin
#define stdout stdout
#define stderr stderr
And from Solaris (similar to older versions of Unix):
#define stdin (&__iob[0])
#define stdout (&__iob[1])
#define stderr (&__iob[2])

Try to do that without macros. Traditionally many systems did use
macros for them for the above reason. Now I disremember why it is
a requirement; it is probably stated in the Rationale, perhaps to
keep the statements in that part of the standard concise.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Dec 20 '06 #17

On 19 Dec 2006 18:01:12 GMT, Random832 wrote:
>The standard requires some things that are utterly pointless. Requiring
that stdin/stdout/stderr are macros is one of these. Requiring that FILE
be a complete type is another.
FILE needs to be a complete type because otherwise you could not e.g.
implement getc as a macro.

Best regards,
Roland Pibinger
Dec 20 '06 #18

2006-12-20 <45***************@news.utanet.at>,
Roland Pibinger wrote:
On 19 Dec 2006 18:01:12 GMT, Random832 wrote:
>>The standard requires some things that are utterly pointless. Requiring
that stdin/stdout/stderr are macros is one of these. Requiring that FILE
be a complete type is another.

FILE needs to be a complete type because otherwise you could not e.g.
implement getc as a macro.
#define getc(F) fgetc(F) // I refute it thus.
Dec 20 '06 #19

Roland Pibinger wrote:
On 19 Dec 2006 18:01:12 GMT, Random832 wrote:
>The standard requires some things that are utterly pointless. Requiring
that stdin/stdout/stderr are macros is one of these. Requiring that FILE
be a complete type is another.

FILE needs to be a complete type because otherwise you could not e.g.
implement getc as a macro.
Counterexample (non-conforming because of the requirement):

typedef _builtin_compiler_magic_file FILE;
#define getc(fp) _builtin_compiler_magic_getc(fp)

... the point being that a conforming C implementation need not
be written in conforming C, or in C at all.

Like pretty much everyone else I don't see why the Committee
chose to require that FILE be a complete type. However, the
requirement is "harmless" in the sense that it doesn't limit the
implementation's freedom much. For example

typedef struct _file *FILE;

... allows the implementation to hide its treasures in the
incomplete `struct _file' type while revealing the complete
type FILE (pointer to `struct _file') to the program at the
cost of an extra indirection (FILE* is now `struct _file **').
An implementation with a yen for privacy could use even more
densely obfuscated techniques if it wanted.

--
Eric Sosman
es*****@acm-dot-org.invalid
Dec 20 '06 #20

Random832 wrote:
Roland Pibinger wrote:
>Random832 wrote:
>>The standard requires some things that are utterly pointless.
Requiring that stdin/stdout/stderr are macros is one of these.
Requiring that FILE be a complete type is another.

FILE needs to be a complete type because otherwise you could
not e.g. implement getc as a macro.

#define getc(F) fgetc(F) // I refute it thus.
Thus giving up the considerable advantages of implementing getc as
a proper macro. The point is to avoid the overhead of function
calls, and to directly access the stream's internal buffers.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Dec 20 '06 #21

In article <45***************@yahoo.com>,
CBFalconer <cb********@maineline.net> wrote:
>>FILE needs to be a complete type because otherwise you could
not e.g. implement getc as a macro.
>#define getc(F) fgetc(F) // I refute it thus.
>Thus giving up the considerable advantages of implementing getc as
a proper macro. The point is to avoid the overhead of function
calls, and to directly access the stream's internal buffers.
FILE still doesn't need to be a complete type. There could instead
be an implementation-private type:

#define getc(F) (((struct _FILE *)(F))->_count > 0 ? ...)

Of course that has the disadvantage that

char *p;
getc(p);

will not produce a compile-time error. The compile-time error is
likely to be rather obscure, though.

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Dec 20 '06 #22

2006-12-20 <45***************@yahoo.com>,
CBFalconer wrote:
Random832 wrote:
>Roland Pibinger wrote:
>>Random832 wrote:

The standard requires some things that are utterly pointless.
Requiring that stdin/stdout/stderr are macros is one of these.
Requiring that FILE be a complete type is another.

FILE needs to be a complete type because otherwise you could
not e.g. implement getc as a macro.

#define getc(F) fgetc(F) // I refute it thus.

Thus giving up the considerable advantages of implementing getc as
a proper macro. The point is to avoid the overhead of function
calls, and to directly access the stream's internal buffers.
That's no reason to _require_ FILE to be a complete type - if an
implementation wants to do the sort of trickery that requires that,
fine, it can make FILE a complete type. If it doesn't, let it use an
incomplete type, or even void.

There is absolutely no reason for the following to be considered
a conforming program.

#include <stdio.h>
int main() {
FILE x = *stdin;
return 0;
}
Dec 20 '06 #23

pete wrote:
>
Kenneth Brody wrote:
[...]
I don't believe that the standard requires that they be macros. They
simply need to be of type FILE*.

(I'm going by memory here, as I believe someone else quoted C&V about
this in the not-too-distant past.)

I did.

Within this part of the ISO/IEC 9899: 1990 standard:
7.9 Input/output <stdio.h>
7.9.1 Introduction
you should locate this phrase: "The macros are",
and then you should keep reading until
you come to the period at the end of the sentence.
Apparently, my mind isn't as sharp as it once was. (At least, I
think it once was sharp.)

Do you know how hard it is to stick a size 14 foot in one's mouth?

--
+-------------------------+--------------------+-----------------------+
| Kenneth J. Brody | www.hvcomputer.com | #include |
| kenbrody/at\spamcop.net | www.fptech.com | <std_disclaimer.h> |
+-------------------------+--------------------+-----------------------+
Don't e-mail me at: <mailto:Th*************@gmail.com>
Dec 20 '06 #24

Random832 wrote:
>
That's no reason to _require_ FILE to be a complete type
Perfectly true, but what of it? Any practical implementation is going
to declare FILE as a complete type anyway.
- if an
implementation wants to do the sort of trickery that requires that,
fine, it can make FILE a complete type. If it doesn't, let it use an
incomplete type, or even void.
But what implementation would go out of its way to make FILE opaque?

You seem to be under the delusion that the purpose of the standards
is to (re)invent C as some perfect abstract language.

The real purpose of standardisation, particularly C89, was to define
existing common practice. That included leaving in a great many
warts. Despite (in some cases even because of) those warts, C has
become a very successful language.

Of course it's not too late to change it, but to quote Doug Gwyn on the
issue:

"to do anything with an actual structure the type has to be
complete."

"Before trying to 'fix' anything, we need a good demonstration
of what is *broken*. Just because some legacy interface does
not meet your current idea of good style is insufficient
reason to change it - *especially* stdio."

--
Peter

Dec 20 '06 #25

rp*****@yahoo.com (Roland Pibinger) writes:
On 19 Dec 2006 18:01:12 GMT, Random832 wrote:
>>The standard requires some things that are utterly pointless. Requiring
that stdin/stdout/stderr are macros is one of these. Requiring that FILE
be a complete type is another.

FILE needs to be a complete type because otherwise you could not e.g.
implement getc as a macro.
Yes, you could. FILE could be a typedef for void, making FILE*
equivalent to void*. A getc() macro could convert its FILE* argument
to a pointer to some object type. Or it could all be done by compiler
magic.

There are good reasons to make FILE a complete type; it makes getc()
and friends easier to write. There are no good reasons to *require*
FILE to be a complete type in all implementations. On the other hand,
the requirement, though useless, is not burdensome.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Dec 20 '06 #26

"Peter Nilsson" <ai***@acay.com.au> writes:
Random832 wrote:
>>
That's no reason to _require_ FILE to be a complete type

Perfectly true, but what of it? Any practical implementation is
going to declare FILE as a complete type anyway.
>- if an
implementation wants to do the sort of trickery that requires that,
fine, it can make FILE a complete type. If it doesn't, let it use an
incomplete type, or even void.

But what implementation would go out of its way to make FILE opaque?
Poorly written code might attempt to refer to system-specific members
of FILE. Such code is non-portable, but the compiler most likely will
not be able to diagnose it. An implementation might make FILE opaque
to prevent users from making this error.

Note: I am not arguing that an implementation is required to do this,
or even that it should; merely that it's not unreasonable to do so.
You seem to be under the delusion that the purpose of the standards
is to (re)invent C as some perfect abstract language.

The real purpose of standardisation, particularly C89, was to define
existing common practice. That included leaving in a great many number
of warts. Despite (in some cases even because of) those warts, C has
become a very successful language.
[...]

Sure, but this particular wart, though minor, is a completely
unnecessary one. If the standard had removed the requirement for FILE
to be an object type, it needn't have affected any implementation or
any reasonable program.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Dec 21 '06 #27

Keith Thompson wrote:
rp*****@yahoo.com (Roland Pibinger) writes:
>On 19 Dec 2006 18:01:12 GMT, Random832 wrote:
>>The standard requires some things that are utterly pointless.
Requiring that stdin/stdout/stderr are macros is one of these.
Requiring that FILE be a complete type is another.

FILE needs to be a complete type because otherwise you could not
e.g. implement getc as a macro.

Yes, you could. FILE could be a typedef for void, making FILE*
equivalent to void*. A getc() macro could convert its FILE*
argument to a pointer to some object type. Or it could all be
done by compiler magic.
And how does that 'object type' know what, where and how big the
fields are?
>
There are good reasons to make FILE a complete type; it makes
getc() and friends easier to write. There are no good reasons to
*require* FILE to be a complete type in all implementations. On
the other hand, the requirement, though useless, is not burdensome.
s/easier to write/possible to write as macros/

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Dec 21 '06 #28

In article <45***************@yahoo.com>,
CBFalconer <cb********@maineline.net> wrote:
>Yes, you could. FILE could be a typedef for void, making FILE*
equivalent to void*. A getc() macro could convert its FILE*
argument to a pointer to some object type. Or it could all be
done by compiler magic.
>And how does that 'object type' know what, where and how big the
fields are?
By being a struct exactly the same as FILE would have been if it hadn't
been void instead.

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Dec 21 '06 #29

Peter Nilsson wrote:
>
Random832 wrote:

That's no reason to _require_ FILE to be a complete type

Perfectly true, but what of it? Any practical implementation is going
to declare FILE as a complete type anyway.
- if an
implementation wants to do the sort of trickery that requires that,
fine, it can make FILE a complete type. If it doesn't, let it use an
incomplete type, or even void.

But what implementation would go out its way to make FILE opaque?
If, for some obscure reason, an implementation wanted/needed to have
an opaque FILE, it could always do something like:

typedef struct
{
int foo;
char *bar;
struct opaque_FILE *opaque;
}
FILE;

Or is this considered "incomplete" because of the opaque pointer
within it?

[...]
Of course it's not too late to change it, but to quote Doug Gwyn on the
issue:

"to do anything with an actual structure the type has to be
complete."

"Before trying to 'fix' anything, we need a good demonstration
of what is *broken*. Just because some legacy interface does
not meet your current idea of good style is insufficient
reason to change it - *especially* stdio."

--
+-------------------------+--------------------+-----------------------+
| Kenneth J. Brody | www.hvcomputer.com | #include |
| kenbrody/at\spamcop.net | www.fptech.com | <std_disclaimer.h> |
+-------------------------+--------------------+-----------------------+
Don't e-mail me at: <mailto:Th*************@gmail.com>

Dec 21 '06 #30

ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
In article <45***************@yahoo.com>,
CBFalconer <cb********@maineline.net> wrote:
>>Yes, you could. FILE could be a typedef for void, making FILE*
equivalent to void*. A getc() macro could convert its FILE*
argument to a pointer to some object type. Or it could all be
done by compiler magic.
>>And how does that 'object type' know what, where and how big the
fields are?

By being a struct exactly the same as FILE would have been if it hadn't
been void instead.
Exactly.

User code trying to refer to the fields of this structure would have
to make explicit references to some obviously internal name like
"struct __FILE"; programmers doing so would have no grounds for
complaint when their code stops working. The implementation of the
getc() would have to do the same thing, but it's *expected* to be
system-specific.

For that matter, even with the current rules, FILE could be some
opaque type. It could even be a typedef for, say char; since char*
and void* are similar in many ways, this would be just about like
making FILE a typedef for void; a sufficiently perverse programmer
could try to play with the value of, say, *stdin, but would most
likely get the unpredictable results he deserves.

On the other hand, I've rarely seen user code that tries to look at
the innards of type FILE anyway. I don't even know what's in it on
the systems I use. Type FILE is *effectively* opaque as long as
programmers don't try to look at it.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Dec 21 '06 #31

P: n/a
Richard Tobin wrote:
CBFalconer <cb********@maineline.net> wrote:
>>Yes, you could. FILE could be a typedef for void, making FILE*
equivalent to void*. A getc() macro could convert its FILE*
argument to a pointer to some object type. Or it could all be
done by compiler magic.
>And how does that 'object type' know what, where and how big the
fields are?

By being a struct exactly the same as FILE would have been if it
hadn't been void instead.
Please describe exactly how you do that, in a portable manner.

Please don't remove attributions for material you quote.

--
Some informative links:
<news:news.announce.newusers
<http://www.geocities.com/nnqweb/>
<http://www.catb.org/~esr/faqs/smart-questions.html>
<http://www.caliburn.nl/topposting.html>
<http://www.netmeister.org/news/learn2quote.html>
<http://cfaj.freeshell.org/google/>
Dec 22 '06 #32

P: n/a
CBFalconer <cb********@yahoo.com> wrote:
Richard Tobin wrote:
CBFalconer <cb********@maineline.net> wrote:
>Yes, you could. FILE could be a typedef for void, making FILE*
equivalent to void*. A getc() macro could convert its FILE*
argument to a pointer to some object type. Or it could all be
done by compiler magic.
And how does that 'object type' know what, where and how big the
fields are?
By being a struct exactly the same as FILE would have been if it
hadn't been void instead.

Please describe exactly how you do that, in a portable manner.
Who cares about portable? getc() is part of the implementation. A getc()
macro must behave in a portable manner, but it needn't be implemented
portably. It must work with the implementation it is part of, and
exhibit portable behaviour, that's all.
(IOW, it could be as simple as

#define getc(f) ( ((char *)f)[4]? -42: ((unsigned char *)f)[3] )

for an implementation for which those magic numbers incant the correct
spell.)

Richard
Dec 22 '06 #33

P: n/a
Richard Bos wrote:
CBFalconer <cb********@yahoo.com> wrote:
>Richard Tobin wrote:
>>CBFalconer <cb********@maineline.net> wrote:

Yes, you could. FILE could be a typedef for void, making FILE*
equivalent to void*. A getc() macro could convert its FILE*
argument to a pointer to some object type. Or it could all be
done by compiler magic.

And how does that 'object type' know what, where and how big the
fields are?

By being a struct exactly the same as FILE would have been if it
hadn't been void instead.

Please describe exactly how you do that, in a portable manner.

Who cares about portable? getc() is part of the implementation. A
getc() macro must behave in a portable manner, but it needn't be
implemented portably. It must work with the implementation it is
part of, and exhibit portable behaviour, that's all.
(IOW, it could be as simple as

#define getc(f) ( ((char *)f)[4]? -42: ((unsigned char *)f)[3] )

for an implementation for which those magic numbers incant the
correct spell.)
You are right. But it would be a maintenance nightmare.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Dec 22 '06 #34

P: n/a
CBFalconer <cb********@yahoo.com> writes:
Richard Tobin wrote:
>CBFalconer <cb********@maineline.net> wrote:
>>>Yes, you could. FILE could be a typedef for void, making FILE*
equivalent to void*. A getc() macro could convert its FILE*
argument to a pointer to some object type. Or it could all be
done by compiler magic.
>>And how does that 'object type' know what, where and how big the
fields are?

By being a struct exactly the same as FILE would have been if it
hadn't been void instead.

Please describe exactly how you do that, in a portable manner.

Please don't remove attributions for material you quote.
(I wrote the stuff above starting with "Yes, you could.")

The internals of type FILE are inherently system-specific, so I can't
describe *exactly* how to do that "in a portable manner". But I can
give an example.

Here's a definition of the getc() macro (from Solaris):

#define getc(p) (--(p)->_cnt < 0 ? __filbuf(p) : (int)*(p)->_ptr++)

So apparently type FILE has members called "_cnt" and "_ptr"; __filbuf
is a function that takes a FILE* argument.

Suppose that (in violation of the C standard) this implementation had:

struct __FILE { blah blah };
typedef void FILE;

where struct __FILE has the same fields that FILE has in real life.
Assume also that __filbuf takes a struct __FILE* argument.

Then the definition of getc() could be:

#define getc(p) (--((struct __FILE*)p)->_cnt < 0 ? \
__filbuf((struct __FILE*)p) : \
(int)*((struct __FILE*)p)->_ptr++)

(*if* I've gotten it right).

The actual functions that take FILE* arguments would also have to be
modified.

It makes <stdio.h> more complicated, but who cares? And it makes it
slightly more difficult for perverse user code to use internals that
it shouldn't know about.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Dec 22 '06 #35

P: n/a
CBFalconer <cb********@yahoo.com> wrote:
Richard Bos wrote:
CBFalconer <cb********@yahoo.com> wrote:
Richard Tobin wrote:
CBFalconer <cb********@maineline.net> wrote:

Yes, you could. FILE could be a typedef for void, making FILE*
equivalent to void*. A getc() macro could convert its FILE*
argument to a pointer to some object type. Or it could all be
done by compiler magic.

And how does that 'object type' know what, where and how big the
fields are?

By being a struct exactly the same as FILE would have been if it
hadn't been void instead.

Please describe exactly how you do that, in a portable manner.
Who cares about portable? getc() is part of the implementation. A
getc() macro must behave in a portable manner, but it needn't be
implemented portably. It must work with the implementation it is
part of, and exhibit portable behaviour, that's all.
(IOW, it could be as simple as

#define getc(f) ( ((char *)f)[4]? -42: ((unsigned char *)f)[3] )

for an implementation for which those magic numbers incant the
correct spell.)

You are right. But it would be a maintenance nightmare.
True. Then again, reading some of the implementation headers on my
machine, I'm not sure any of their authors would notice; and you could
make it more readable with a few judicious #defines.

Richard
Dec 22 '06 #36

P: n/a
In article <45***************@yahoo.com>,
CBFalconer <cb********@maineline.net> wrote:
>>>Yes, you could. FILE could be a typedef for void, making FILE*
equivalent to void*. A getc() macro could convert its FILE*
argument to a pointer to some object type. Or it could all be
done by compiler magic.
>>And how does that 'object type' know what, where and how big the
fields are?
>By being a struct exactly the same as FILE would have been if it
hadn't been void instead.
>Please describe exactly how you do that, in a portable manner.
I've no idea what you're getting at. We're talking about the
implementation of the standard library, so where does "portable" come
into it?

Anyway, as I said before, instead of defining a complete type FILE,
you would define, say, _FILE (which is in the implementation
namespace), and cast getc()'s FILE * argument to _FILE *. The details
of _FILE would be completely implementation dependent.

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Dec 22 '06 #37

P: n/a
2006-12-22 <45***************@yahoo.com>,
CBFalconer wrote:
Richard Tobin wrote:
>CBFalconer <cb********@maineline.net> wrote:
>>>Yes, you could. FILE could be a typedef for void, making FILE*
equivalent to void*. A getc() macro could convert its FILE*
argument to a pointer to some object type. Or it could all be
done by compiler magic.
>>And how does that 'object type' know what, where and how big the
fields are?

By being a struct exactly the same as FILE would have been if it
hadn't been void instead.

Please describe exactly how you do that, in a portable manner.
"portable" has nothing to do with how an implementation may do things.
We're talking about how a getc() macro can be implemented when FILE *
itself is an opaque pointer (i.e. typedef void FILE;)

There's no reason the getc macro can't cast its argument to a "struct
_FILE *" and then reference things through it that way, without FILE
itself being a typedef for struct _FILE.
>
Please don't remove attributions for material you quote.
Dec 22 '06 #38

P: n/a

Random832 wrote:
The standard requires some things that are utterly pointless. Requiring
that stdin/stdout/stderr are macros is one of these. Requiring that FILE
be a complete type is another.
Useless for modern programs, yes.
But, utterly pointless, no.

The standard specifies such things for compatibility with legacy code.
Probably only small amounts of legacy code, but since it does no harm,
maintaining backward compatibility is a good thing.

For stdin, stdout and stderr, requiring that they be macros (instead of
simply allowing it) allow some programs to do things such as:

#include <stdio.h>

#undef stdin
#define stdin my_stdin

If the standard had specified that stdin can be a macro but is not
guaranteed to be one, an easy workaround would have been:

#include <stdio.h>

#ifdef stdin
#undef stdin
#endif

#define stdin my_stdin

But it would have broken legacy code... Plus, it slightly simplifies
modern programs that want to #undef stdin. And it keeps programmers
who don't know the standard well from accidentally writing
non-portable code that relies on their particular compiler defining
stdin as a macro.

Such code does exist:

http://www.google.com/codesearch?hl=...ef%5C+stdin%24

The fact that FILE has to be a complete type, is probably used in
legacy code for things like:

FILE Special; /* won't be initialized to anything particular, but that
doesn't matter */
/* this FILE is only designed to have an address distinct from all
addresses of real FILE structures */
void Function(FILE* f) {
if (f==&Special) {
/* handle this case specially */
}
}

/* invocation examples */
Function(stdout);
Function(NULL); /* does another thing */
Function(&Special); /* again, treated differently */
This programming style is *very* ugly, yet the standard doesn't want to
gratuitously break legacy code, even when it is in very bad style.

Of course, FILE being a complete type doesn't imply that it can't be
opaque.
The definition of FILE may be:

typedef struct {char __dummy;} FILE;

Note: If getc is implemented as a macro, it may use a typecast from
FILE* to _FILE* where _FILE is a complete type containing the real
fields used by the compiler internally.
It won't really hide the structure from malicious programmers, but it
will prevent the innocent programmer from accidentally accessing fields.

Dec 26 '06 #39

P: n/a
SuperKoko wrote:
Random832 wrote:
The standard requires some things that are utterly pointless. Requiring
that stdin/stdout/stderr are macros is one of these. Requiring that FILE
be a complete type is another.

Useless for modern programs, yes.
But, utterly pointless, no.

The standard specifies such things for compatibility with legacy code.
Probably small amounts of legacy code. But, since it doesn't harm,
maintaining backward compatibility is a good thing.

For stdin, stdout and stderr, requiring that they be macros (instead of
simply allowing it) allow some programs to do things such as:

#include <stdio.h>

#undef stdin
#define stdin my_stdin

If the standard had specified that stdin can be a macro but is not
guaranteed to be one, an easy workaround would have been:

#include <stdio.h>

#ifdef stdin
#undef stdin
#endif

#define stdin my_stdin
Even if stdin is not defined as a macro, the first form is valid.
#undef has no effect if the identifier is not a macro name.

Dec 26 '06 #40

P: n/a
Eric Sosman <Er*********@sun.com> writes:
Kenneth Brody wrote On 12/19/06 09:39,:
>pete wrote:
>>>Sensei wrote:
There are three streams defined by the standard, stdin
stdout stderr. Am I right?

stdin, stdout and stderr, are macros
which expand to expressions of type FILE *.
The proper names of the streams are "standard input",
"standard output", and "standard error".
However, the streams are frequently referred to by their
associated macros.


I don't believe that the standard requires that they be macros.

You believe wrongly, infidel.
Why? The only relevant place I have so far managed to find does not require
them to be macros.

<quote>
7.19.1[#3]
....
stderr
stdin
stdout

which are expressions of type pointer to FILE that point to the FILE
objects associated, respectively, with the standard error, input, and
output streams.
</quote>

The other relevant piece is:

<quote>
7.19.5.4[#2]
....
The primary use of the freopen function is to change the file
associated with a standard text stream (stderr, stdin, or stdout),
as those identifiers need not be modifiable lvalues to which the
value returned by the fopen function may be assigned.
</quote>

--
vale
Dec 26 '06 #41

P: n/a
malc wrote:
Eric Sosman <Er*********@sun.com> writes:
Kenneth Brody wrote On 12/19/06 09:39,:
pete wrote:
stdin, stdout and stderr, are macros
which expand to expressions of type FILE *.
The proper names of the streams are "standard input",
"standard output", and "standard error".
However, the streams are frequently referred to by their
associated macros.

I don't believe that the standard requires that they be macros.
You believe wrongly, infidel.

Why? The only relevant place I have so far managed to find does not require
them to be macros.

<quote>
7.19.1[#3]
...
stderr
stdin
stdout

which are expressions of type pointer to FILE that point to the FILE
objects associated, respectively, with the standard error, input, and
output streams.
</quote>
<quote>
7.19.1[#3]
The macros are
....
stderr
stdin
stdout

which are expressions of type pointer to FILE that point to the FILE
objects associated, respectively, with the standard error, input, and
output streams.
</quote>

Dec 26 '06 #42

P: n/a
malc wrote:
Eric Sosman <Er*********@sun.com> writes:
>Kenneth Brody wrote On 12/19/06 09:39,:
>>pete wrote:

I don't believe that the standard requires that they be macros.
You believe wrongly, infidel.

Why? The only relevant place I have so far managed to find does not require
them to be macros.

<quote>
7.19.1[#3]
...
The first three words you've elided are "The macros are."
stderr
stdin
stdout
[...]
.... and these three identifiers are part of the list -- indeed,
part of the same sentence -- that begins with "The macros are."

Not "The identifiers are."

Not "The keywords are."

Not "The names of the fattest Whos in Whoville are."

... but "The macros are."

--
Eric Sosman
es*****@acm-dot-org.invalid
Dec 26 '06 #43

P: n/a
Eric Sosman <es*****@acm-dot-org.invalid> writes:
malc wrote:
>Eric Sosman <Er*********@sun.com> writes:
>>Kenneth Brody wrote On 12/19/06 09:39,:
pete wrote:

I don't believe that the standard requires that they be macros.
You believe wrongly, infidel.
Why? The only relevant place I have so far managed to find does not
require them to be macros.
[..snip..]
... and these three identifiers are part of the list -- indeed,
part of the same sentence -- that begins with "The macros are."
Indeed. Thank you for pointing that out.

--
vale
Dec 26 '06 #44

P: n/a
2006-12-26 <11*********************@42g2000cwt.googlegroups.com>,
SuperKoko wrote:
Useless for modern programs, yes.
But, utterly pointless, no.

The standard specifies such things for compatibility with legacy code.
Probably small amounts of legacy code. But, since it doesn't harm,
maintaining backward compatibility is a good thing.

For stdin, stdout and stderr, requiring that they be macros (instead of
simply allowing it) allow some programs to do things such as:

#include <stdio.h>

#undef stdin
#define stdin my_stdin
This falls under the category of "not legal anyway". And, regardless,
it's ok, in general, to #undef something that's not defined.
Dec 26 '06 #45

P: n/a
In article <m2************@pulsesoft.com> malc <ma**@pulsesoft.com> writes:
Eric Sosman <Er*********@sun.comwrites:
....
You believe wrongly, infidel.

Why? The only relevant place I have so far managed to find does not require
them to be macros.

<quote>
7.19.1[#3]
Did you read the first line of that section?
...
stderr
stdin
stdout
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Dec 27 '06 #46

P: n/a
On 18 Dec 2006 13:21:16 -0800, "me********@aol.com"
<me********@aol.com> wrote:
>
Richard Heathfield wrote:
<snip>
Not all file systems have the concept of "directory", and this includes some
prominent systems such as VM/CMS and OS390 (aka MVS). CP/M had no concept
of "directory" either. Nor did MS-DOS 1.0. Undoubtedly there are others,
too.
As I've said before, I do consider CMS, CP/M and early MS-DOS -- and
RT-11 -- to have single, fixed directories. OS/360 et seq is more
problematic; it does (at least did) have catalogs which provide the
function of directories and even directory trees, but aren't organized
into separate directories the way we are now used to.
Yeah, the PDP-11 I used with an RT-11 OS had no subdirectories.
All the files were in one place, had to be unique names and limited
to 8 characters (I think). It was a real PITA.
6.3, and drawn from the same 'RAD50' character set used for
object-file symbols: 26 letters -- uppercase only, although most
programs and certainly the 'standard' command parser allowed you to
enter lowercase and upshifted it for you; 10 digits; dollarsign;
underscore; period, which you couldn't actually use in a filename*
because it was the separator; and space (trailing only). (* I don't
recall testing if you could put period in the extension, where the
parse would be unambiguous, and no longer have the opportunity.)

And just to be clear, all the _directory entries_ were in one place,
in the (one-per-volume) directory. The file contents were of course in
different places, in general spread over the volume.

AIR there was a DECUS driver, I think later added to the official
system, which could 'mount' a (single large) file on a (real) disk as
a virtual disk volume, and thus have a (single) directory and files
within that. That provided the effect of nesting, but clumsily, and
without allowing practical recursion/treewalking. (The same, pretty
obvious, concept has been invented in many other places as well.)
- David.Thompson1 at worldnet.att.net
Jan 3 '07 #47
