
older standard drafts for download?

Is there a place I can download the last public draft for C89? What
about a draft for C95?

What I really need is a list of all standard library functions, macros,
types, etc. for a "keyword file", which is used for syntax highlighting.
I want to create one for both major C standards, with the C89/90 version
including the 1995 amendment. I already have n869 for C99.

Thanks.

-Kevin
--
My email address is valid, but changes periodically.
To contact me please use the address from a recent posting.

Nov 13 '05 #1


"Kevin Goodsell" <us*********************@neverbox.com> wrote:
Is there a place I can download the last public draft for C89?
What about a draft for C95?

What I really need is a list of all standard library functions,
macros, types, etc. for a "keyword file", which is used for
syntax highlighting. I want to create one for both major C
standards, with the C89/90 version including the 1995 amendment.
I already have n869 for C99.


Hmm, I have a copy of the C89 draft -- here it is on my website:
http://members.optushome.com.au/sbib...nsic89.txt.bz2
That's a 496 kilobyte ASCII text file, compressed with BZip2 to
95.5 kilobytes.

--
Simon.
Nov 13 '05 #2

Simon Biber wrote:

Hmm, I have a copy of the C89 draft -- here it is on my website:
http://members.optushome.com.au/sbib...nsic89.txt.bz2
That's a 496 kilobyte ASCII text file, compressed with BZip2 to
95.5 kilobytes.


Thank you very much, Simon. Do you happen to know if this was the last
public draft, or something earlier?

-Kevin
--
My email address is valid, but changes periodically.
To contact me please use the address from a recent posting.

Nov 13 '05 #3

In <%9*****************@newsread2.news.pas.earthlink.net> Kevin Goodsell <us*********************@neverbox.com> writes:
Simon Biber wrote:
Hmm, I have a copy of the C89 draft -- here it is on my website:
http://members.optushome.com.au/sbib...nsic89.txt.bz2
That's a 496 kilobyte ASCII text file, compressed with BZip2 to
95.5 kilobytes.


Thank you very much, Simon. Do you happen to know if this was the last
public draft, or something earlier?


The last public draft. It is the document on which the first printing
of K&R2 was based.

For C95, have a look at http://www.lysator.liu.se/c/na1.html

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 13 '05 #4

jtp
"Simon Biber" <ne**@ralminNOSPAM.cc> wrote in message news:<3f***********************@news.optusnet.com. au>...
"Kevin Goodsell" <us*********************@neverbox.com> wrote:
Is there a place I can download the last public draft for C89?
What about a draft for C95?

What I really need is a list of all standard library functions,
macros, types, etc. for a "keyword file", which is used for
syntax highlighting. I want to create one for both major C
standards, with the C89/90 version including the 1995 amendment.
I already have n869 for C99.


Hmm, I have a copy of the C89 draft -- here it is on my website:
http://members.optushome.com.au/sbib...nsic89.txt.bz2
That's a 496 kilobyte ASCII text file, compressed with BZip2 to
95.5 kilobytes.


Is there any chance to get it compressed with WinZip. ;)
Nov 13 '05 #5

"jtp" <jt*****@hotmail.com> wrote:
"Simon Biber" <ne**@ralminNOSPAM.cc> wrote:
"Kevin Goodsell" <us*********************@neverbox.com> wrote:
Is there a place I can download the last public draft for C89?
What about a draft for C95?

What I really need is a list of all standard library functions,
macros, types, etc. for a "keyword file", which is used for
syntax highlighting. I want to create one for both major C
standards, with the C89/90 version including the 1995 amendment.
I already have n869 for C99.


Hmm, I have a copy of the C89 draft -- here it is on my website:
http://members.optushome.com.au/sbib...nsic89.txt.bz2
That's a 496 kilobyte ASCII text file, compressed with BZip2 to
95.5 kilobytes.


Is there any chance to get it compressed with WinZip. ;)


You could try to convince WinZip to support the BZip2 format first,
they support its predecessor GZip. Or, move over to a better
shell-integrated archive and compression program, PowerArchiver,
which does support BZip2... http://www.powerarchiver.com/

Or, you can download a small utility bzip2-102-x86-win32.exe (72 KB)
from ftp://sources.redhat.com/pub/bzip2/v...-x86-win32.exe

--
Simon.
Nov 13 '05 #6

Simon Biber wrote:
"jtp" <jt*****@hotmail.com> wrote:

... snip ...

Is there any chance to get it compressed with WinZip. ;)


You could try to convince WinZip to support the BZip2 format first,
they support its predecessor GZip. Or, move over to a better
shell-integrated archive and compression program, PowerArchiver,
which does support BZip2... http://www.powerarchiver.com/

Or, you can download a small utility bzip2-102-x86-win32.exe (72 KB)
from ftp://sources.redhat.com/pub/bzip2/v...-x86-win32.exe


The point is that bzip2 compresses significantly more than does
zip, although it takes longer to do so. However decompression
speed is comparable. So there is a significant gain in using
bzip2 whenever files are to be downloaded, or when decompression
is used more often than compression.

jtp should have been looking into how to get the system on his
machine, which Mr. Biber has made easy for him. I believe a
google for bzip2 will turn up more.

People posting such compressed files should ensure that their
servers properly characterize the files as binary (the mime
type). Failure to do so can fatally harm the transmission.
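
For anyone who would rather do the decompression from a C program than
install another archiver, libbz2 ships a zlib-style interface; here is
a minimal sketch (file names are only examples, error handling is kept
to a minimum, and you link with -lbz2):

    #include <stdio.h>
    #include <bzlib.h>

    /* Decompress draft.txt.bz2 to draft.txt using libbz2's high-level,
       zlib-style calls. */
    int main(void)
    {
        char buf[4096];
        int n;
        BZFILE *in = BZ2_bzopen("draft.txt.bz2", "rb");
        FILE *out = fopen("draft.txt", "wb");

        if (in == NULL || out == NULL)
            return 1;
        while ((n = BZ2_bzread(in, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, out);
        BZ2_bzclose(in);
        fclose(out);
        return 0;
    }

Note that the output is written in binary mode, so the extracted text
keeps whatever line endings the original had -- which is exactly the
issue discussed below.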

--
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 13 '05 #7

In <3F***************@yahoo.com> CBFalconer <cb********@yahoo.com> writes:
Simon Biber wrote:
"jtp" <jt*****@hotmail.com> wrote:

... snip ...
>
> Is there any chance to get it compressed with WinZip. ;)


You could try to convince WinZip to support the BZip2 format first,
they support its predecessor GZip. Or, move over to a better
shell-integrated archive and compression program, PowerArchiver,
which does support BZip2... http://www.powerarchiver.com/

Or, you can download a small utility bzip2-102-x86-win32.exe (72 KB)
from ftp://sources.redhat.com/pub/bzip2/v...-x86-win32.exe


The point is that bzip2 compresses significantly more than does
zip, although it takes longer to do so. However decompression
speed is comparable. So there is a significant gain in using
bzip2 whenever files are to be downloaded, or when decompression
is used more often than compression.

jtp should have been looking into how to get the system on his
machine, which Mr. Biber has made easy for him. I believe a
google for bzip2 will turn up more.

People posting such compressed files should ensure that their
servers properly characterize the files as binary (the mime
type). Failure to do so can fatally harm the transmission.


That's sheer hypocrisy! ;-) The very person claiming that plain text is
the only truly portable file format is now arguing the merits of one
binary file format over another.

BTW, the document is still available in plain text format in the place it
was originally posted... Don't ask me where, search the c.l.c archives.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 13 '05 #8

Dan Pop wrote:
CBFalconer <cb********@yahoo.com> writes:
Simon Biber wrote:

... snip ...

Or, you can download a small utility bzip2-102-x86-win32.exe (72 KB)
from ftp://sources.redhat.com/pub/bzip2/v...-x86-win32.exe


The point is that bzip2 compresses significantly more than does
zip, although it takes longer to do so. However decompression
speed is comparable. So there is a significant gain in using
bzip2 whenever files are to be downloaded, or when decompression
is used more often than compression.

jtp should have been looking into how to get the system on his
machine, which Mr. Biber has made easy for him. I believe a
google for bzip2 will turn up more.

People posting such compressed files should ensure that their
servers properly characterize the files as binary (the mime
type). Failure to do so can fatally harm the transmission.


That's sheer hypocrisy! ;-) The very person claiming that plain
text is the only truly portable file format is now arguing the
merits of one binary file format over another.


You actually have a valid point there :-) However you might do
well to think of zip, bzip2, lhz, arj, arc, etc. more as means of
packing for transmission (or storage), since in all cases the
transition TEXT -> <bin> -> TEXT is lossless and well defined.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 13 '05 #9

In <3F***************@yahoo.com> CBFalconer <cb********@yahoo.com> writes:
Dan Pop wrote:
CBFalconer <cb********@yahoo.com> writes:
>Simon Biber wrote:
>>
>... snip ...
>>
>> Or, you can download a small utility bzip2-102-x86-win32.exe (72 KB)
>> from ftp://sources.redhat.com/pub/bzip2/v...-x86-win32.exe
>
>The point is that bzip2 compresses significantly more than does
>zip, although it takes longer to do so. However decompression
>speed is comparable. So there is a significant gain in using
>bzip2 whenever files are to be downloaded, or when decompression
>is used more often than compression.
>
>jtp should have been looking into how to get the system on his
>machine, which Mr. Biber has made easy for him. I believe a
>google for bzip2 will turn up more.
>
>People posting such compressed files should ensure that their
>servers properly characterize the files as binary (the mime
>type). Failure to do so can fatally harm the transmission.
That's sheer hypocrisy! ;-) The very person claiming that plain
text is the only truly portable file format is now arguing the
merits of one binary file format over another.


You actually have a valid point there :-) However you might do
well to think of zip, bzip2, lhz, arj, arc, etc. more as means of
packing for transmission (or storage), since in all cases the
                                             ^^^^^^^^^^^^
transition TEXT -> <bin> -> TEXT is lossless and well defined.


You're missing my point. The transition

local.text -> local.bin -> remote.bin -> remote.text

is not well defined at all, even if remote.bin can be decoded.
The point is that the compressor treats local.text as a *binary* file
completely ignoring its line structured format. When decoding it, you
get the original bytes of the text file, but they may be completely
meaningless as a text file on the remote host, if it uses an incompatible
format for representing text files.

OTOH, the transition local.text -> remote.text is (normally) aware of the
text nature of the data being transferred, and the information needed to
separate the data into lines of text is sent in a format understood by
both parties, so the textual information is safely transferred.

As a trivial example, take a text file on a Unix box, zip it, transfer the
binary to a Windows box and unzip it. The result is not a valid Windows
text file. The fix is trivial in such a case, but it's less trivial if
you transfer the binary to an IBM mainframe instead of a Windows box.
Even less trivial when dealing with record based text file formats...
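
The trivial fix amounts to rewriting the file through a text-mode
stream and letting the C library supply the native line terminator; a
minimal sketch in standard C (file names are made up, error handling
minimal):

    #include <stdio.h>

    /* Read the bytes as they are, drop any CRs, and let the local
       text-mode stream write the native end-of-line sequence. */
    int main(void)
    {
        FILE *in = fopen("draft.unix.txt", "rb"); /* raw bytes, LF endings */
        FILE *out = fopen("draft.txt", "w");      /* local text conventions */
        int c;

        if (in == NULL || out == NULL)
            return 1;
        while ((c = getc(in)) != EOF)
            if (c != '\r')
                putc(c, out);
        fclose(in);
        fclose(out);
        return 0;
    }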

OTOH, any text file transfer protocol (e.g. FTP) will do the right thing
when transferring text files in text mode, character set conversions
included.

On a related note, there is a subtle trap awaiting the unsuspecting Unix
programmer. The Unix convention is that each line of text is terminated
by an ASCII LF character, but most network protocols dealing with text
data use the CR + LF pair as line terminator. So, our unsuspecting Unix
programmer gets a line of text from a remote host using one such protocol,
carefully strips the LF character at the end and then displays it
like this:

printf("line starts here -->%s<-- line ends here\n", line);

and is completely baffled by the program's output ;-)
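
A slightly more defensive trim strips CRs as well as the LF; a minimal
sketch (the function name is made up, and the buffer is assumed to be
NUL-terminated):

    #include <string.h>

    /* Remove any trailing LF and/or CR characters in place. */
    void chomp(char *line)
    {
        size_t len = strlen(line);

        while (len > 0 && (line[len - 1] == '\n' || line[len - 1] == '\r'))
            line[--len] = '\0';
    }

With the stray CR gone, the closing "<-- line ends here" no longer
overwrites the start of the output line.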

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 13 '05 #10

Dan Pop wrote:

You're missing my point. The transition

local.text -> local.bin -> remote.bin -> remote.text

is not well defined at all, even if remote.bin can be decoded.
The point is that the compressor treats local.text as a *binary* file
completely ignoring its line structured format. When decoding it, you
get the original bytes of the text file, but they may be completely
meaningless as a text file on the remote host, if it uses an incompatible
format for representing text files.


Some decompressors will attempt to convert text files to the correct
format. I had a problem with this recently that confused me for a while.
I downloaded the JPEG library from the Independent JPEG Group, which is
a tarred + gzipped archive and extracted it without incident using
WinZip. The programs compiled just fine (I was a little surprised to
find the source files in MS-DOS format, but the reason became clear later).

The archive also contained some images for testing purposes. The tests
included things like compressing a bitmap image and comparing it to a
JPEG image, with the expectation that the file should be identical if
the program compiled correctly. But some of the tests failed. The test
images were a red rose, but one of the files (a .ppm file) showed a
bright green rose instead. Obviously, when this green rose was
compressed it was not identical to the compressed red rose.

It turns out that WinZip attempts to "fix" text files when it extracts
them from a tar archive. It incorrectly identified the .ppm file as
text, and inserted some extra bytes. This offset the color channels,
putting the red intensities into the green channel, and making the red
rose turn green. Somehow the result was still close enough to a valid
.ppm that the programs I was using didn't complain.

So WinZip tried to correct for the problem you described above, but
corrected incorrectly (though it did the right thing with the source
files). You can turn off this option in WinZip, by the way, and most
people probably should do so.
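
A quick way to tell whether an extractor has "fixed" a file that should
have been left alone is to open it in binary mode and see whether
(almost) every LF is preceded by a CR; a minimal sketch (the file name
is just an example):

    #include <stdio.h>

    /* Heuristic corruption check for a supposedly raw binary file. */
    int main(void)
    {
        FILE *fp = fopen("testimg.ppm", "rb");
        int c, prev = EOF;
        unsigned long lfs = 0, crlfs = 0;

        if (fp == NULL)
            return 1;
        while ((c = getc(fp)) != EOF) {
            if (c == '\n') {
                lfs++;
                if (prev == '\r')
                    crlfs++;
            }
            prev = c;
        }
        fclose(fp);
        printf("%lu LF bytes, %lu preceded by CR\n", lfs, crlfs);
        return 0;
    }

If nearly all the LFs turn out to be CR+LF pairs, the line-ending
conversion described above has almost certainly been applied.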

-Kevin
--
My email address is valid, but changes periodically.
To contact me please use the address from a recent posting.

Nov 13 '05 #11

Kevin Goodsell wrote:
Dan Pop wrote:

You're missing my point. The transition

local.text -> local.bin -> remote.bin -> remote.text

is not well defined at all, even if remote.bin can be decoded.
The point is that the compressor treats local.text as a *binary*
file completely ignoring its line structured format. When
decoding it, you get the original bytes of the text file, but
they may be completely meaningless as a text file on the remote
host, if it uses an incompatible format for representing text
files.


Some decompressors will attempt to convert text files to the
correct format. I had a problem with this recently that confused
me for a while. I downloaded the JPEG library from the Independent
JPEG Group, which is a tarred + gzipped archive and extracted it
without incident using WinZip. The programs compiled just fine (I
was a little surprised to find the source files in MS-DOS format,
but the reason became clear later).

The archive also contained some images for testing purposes. The
tests included things like compressing a bitmap image and
comparing it to a JPEG image, with the expectation that the file
should be identical if the program compiled correctly. But some
of the tests failed. The test images were a red rose, but one of
the files (a .ppm file) showed a bright green rose instead.
Obviously, when this green rose was compressed it was not
identical to the compressed red rose.

It turns out that WinZip attempts to "fix" text files when it
extracts them from a tar archive. It incorrectly identified the
.ppm file as text, and inserted some extra bytes. This offset
the color channels, putting the red intensities into the green
channel, and making the red rose turn green. Somehow the result
was still close enough to a valid .ppm that the programs I was
using didn't complain.

So WinZip tried to correct for the problem you described above,
but corrected incorrectly (though it did the right thing with
the source files). You can turn off this option in WinZip, by
the way, and most people probably should do so.


To preserve my disgust with most things gui, I use zip and unzip
(from Infozip), which are available for most platforms. The
following is an excerpt from the unzip manual.

-a convert text files. Ordinarily all files are
extracted exactly as they are stored (as ``binary''
files). The -a option causes files identified by
zip as text files (those with the `t' label in zip-
info listings, rather than `b') to be automatically
extracted as such, converting line endings, end-of-
file characters and the character set itself as
necessary. (For example, Unix files use line feeds
(LFs) for end-of-line (EOL) and have no end-of-file
(EOF) marker; Macintoshes use carriage returns
(CRs) for EOLs; and most PC operating systems use
CR+LF for EOLs and control-Z for EOF. In addition,
IBM mainframes and the Michigan Terminal System use
EBCDIC rather than the more common ASCII character
set, and NT supports Unicode.) Note that zip's
identification of text files is by no means
perfect; some ``text'' files may actually be binary
and vice versa. unzip therefore prints ``[text]''
or ``[binary]'' as a visual check for each file it
extracts when using the -a option. The -aa option
forces all files to be extracted as text, regard-
less of the supposed file type.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 13 '05 #12

In <oB*****************@newsread1.news.pas.earthlink.net> Kevin Goodsell <us*********************@neverbox.com> writes:
Dan Pop wrote:

You're missing my point. The transition

local.text -> local.bin -> remote.bin -> remote.text

is not well defined at all, even if remote.bin can be decoded.
The point is that the compressor treats local.text as a *binary* file
completely ignoring its line structured format. When decoding it, you
get the original bytes of the text file, but they may be completely
meaningless as a text file on the remote host, if it uses an incompatible
format for representing text files.


Some decompressors will attempt to convert text files to the correct
format. I had a problem with this recently that confused me for a while.
I downloaded the JPEG library from the Independent JPEG Group, which is
a tarred + gzipped archive and extracted it without incident using
WinZip. The programs compiled just fine (I was a little surprised to
find the source files in MS-DOS format, but the reason became clear later).

The archive also contained some images for testing purposes. The tests
included things like compressing a bitmap image and comparing it to a
JPEG image, with the expectation that the file should be identical if
the program compiled correctly. But some of the tests failed. The test
images were a red rose, but one of the files (a .ppm file) showed a
bright green rose instead. Obviously, when this green rose was
compressed it was not identical to the compressed red rose.

It turns out that WinZip attempts to "fix" text files when it extracts
them from a tar archive. It incorrectly identified the .ppm file as
text, and inserted some extra bytes. This offset the color channels,
putting the red intensities into the green channel, and making the red
rose turn green. Somehow the result was still close enough to a valid
.ppm that the programs I was using didn't complain.

So WinZip tried to correct for the problem you described above, but
corrected incorrectly (though it did the right thing with the source
files). You can turn off this option in WinZip, by the way, and most
people probably should do so.


Identifying the nature of a file from its extension is a brain dead idea.
Different platforms use completely different conventions.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 13 '05 #13

In <3F***************@yahoo.com> CBFalconer <cb********@yahoo.com> writes:
To preserve my disgust with most things gui, I use zip and unzip
(from Infozip), which are available for most platforms. The
following is an excerpt from the unzip manual.

-a convert text files. Ordinarily all files are
extracted exactly as they are stored (as ``binary''
files). The -a option causes files identified by
zip as text files (those with the `t' label in zip-
info listings, rather than `b') to be automatically
extracted as such, converting line endings, end-of-
file characters and the character set itself as
necessary. (For example, Unix files use line feeds
(LFs) for end-of-line (EOL) and have no end-of-file
(EOF) marker; Macintoshes use carriage returns
(CRs) for EOLs; and most PC operating systems use
CR+LF for EOLs and control-Z for EOF. In addition,
IBM mainframes and the Michigan Terminal System use
EBCDIC rather than the more common ASCII character
set, and NT supports Unicode.) Note that zip's
identification of text files is by no means
perfect; some ``text'' files may actually be binary
and vice versa. unzip therefore prints ``[text]''
or ``[binary]'' as a visual check for each file it
extracts when using the -a option. The -aa option
forces all files to be extracted as text, regard-
less of the supposed file type.


It's not clear how it identifies the format of the original text file,
in order to be able to convert it to the local format. E.g. how does it
decide when to convert from EBCDIC to the local character set, or which
ISO-8859 flavour was used by the original when converting to Unicode.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 13 '05 #14

Dan Pop wrote:
In <oB*****************@newsread1.news.pas.earthlink. net> Kevin Goodsell <us*********************@neverbox.com> writes:

Identifying the nature of a file from its extension is a brain dead idea.
Different platforms use completely different conventions.


Agreed. Actually, WinZip examines the first N bytes (don't recall what N
is exactly, but the help documents explain the method) and guesses based
on that. Obviously, this method doesn't always work. I don't believe
there is any general way to identify the nature of a file based only on
the name and content.

-Kevin
--
My email address is valid, but changes periodically.
To contact me please use the address from a recent posting.

Nov 13 '05 #15

Dan Pop wrote:

<snip>
Identifying the nature of a file from its extension is a brain dead idea.
Different platforms use completely different conventions.


Hallelujah, amen, etc. It makes a pleasant change to agree with you.

Semi-topical semi-relevant mini-saga follows. Switch off now if you aren't
up for it.

A few years ago, I was working on a (Windows) site where a considerable
number of people used an application which named its files with a .pdf
extension ("program description format" or something). Then someone had the
bright idea of writing a project-wide information-sharing system (where you
could find out things like pizza company phone numbers), and since they
were doing a lot of Portable Document Format stuff for their PhD, they
decided to use it here too. (You're all ahead of me, I know...)

Of course, nobody consulted the existing .pdf users; the software was just
installed on their machines without them even being told about it. So now,
when they double-clicked their myprogramdescriptionformat.pdf files, the
wrong application fired up and complained that their (perfectly correct)
files contained errors.

My suggested fix was to install Linux, of course, but since (for some
reason) that wasn't considered acceptable, I ended up writing a program (in
ISO C!!!) that would peek at the .pdf file and launch the appropriate
application depending on the file *contents*. We simply slid this into the
mix as an extra level of indirection, and everyone was happy.

It was a bodge, of course, and it should not have been necessary. File
associations are an unnecessary evil.
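
For the curious, that kind of content check is only a few lines of
standard C. This is a minimal sketch along those lines, not Richard's
actual program: the "%PDF-" magic number is genuine, but the viewer
command names are made up:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Launch a viewer chosen by the file's contents, not its extension.
       A Portable Document Format file starts with the bytes "%PDF-". */
    int main(int argc, char *argv[])
    {
        char magic[5] = {0};
        char cmd[FILENAME_MAX + 32];
        FILE *fp;

        if (argc != 2 || strlen(argv[1]) >= FILENAME_MAX)
            return EXIT_FAILURE;
        fp = fopen(argv[1], "rb");
        if (fp == NULL)
            return EXIT_FAILURE;
        fread(magic, 1, sizeof magic, fp);
        fclose(fp);

        /* "acroread" and "pdftool" are placeholder command names. */
        sprintf(cmd, "%s \"%s\"",
                memcmp(magic, "%PDF-", 5) == 0 ? "acroread" : "pdftool",
                argv[1]);
        return system(cmd);
    }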

--
Richard Heathfield : bi****@eton.powernet.co.uk
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
K&R answers, C books, etc: http://users.powernet.co.uk/eton
Nov 13 '05 #16

Richard Heathfield wrote:
... snip ...
My suggested fix was to install Linux, of course, but since (for
some reason) that wasn't considered acceptable, I ended up writing
a program (in ISO C!!!) that would peek at the .pdf file and
launch the appropriate application depending on the file
*contents*. We simply slid this into the mix as an extra level of
indirection, and everyone was happy.

It was a bodge, of course, and it should not have been necessary.
File associations are an unnecessary evil.


Not too long ago under MS-DOS there were various decompressor
handling programs that did exactly that. Some were even table
driven, and could specify magic id phrases and where they were to
be found. They also had provisions for translating command lines
for the appropriate decompressor. SHEZ combined this with a
windowed display.

These systems were especially handy in decompressing and
distributing FidoNet mail, and enabled easy installation of better
compression methods.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 13 '05 #17

In <MC*******************@newsread2.news.pas.earthlink.net> Kevin Goodsell <us*********************@neverbox.com> writes:
Dan Pop wrote:
In <oB*****************@newsread1.news.pas.earthlink. net> Kevin Goodsell <us*********************@neverbox.com> writes:

Identifying the nature of a file from its extension is a brain dead idea.
Different platforms use completely different conventions.


Agreed. Actually, WinZip examines the first N bytes (don't recall what N
is exactly, but the help documents explain the method) and guesses based
on that. Obviously, this method doesn't always work. I don't believe
there is any general way to identify the nature of a file based only on
the name and content.


It is possible to develop a set of platform-specific rules that works
with a probability close to 100% for each file larger than, say, 1K,
but they would not be portable to another system. For example, on Unix
systems, assuming only 8-bit characters (ISO-8859 character sets), it
would look like this:

Only a few control characters in the range 0-31 accepted.
No characters in the range 128-159 accepted.
No line longer than, say, 200 characters accepted.

Even ignoring Unicode files, this wouldn't work on Windows, because
Microsoft has populated the range reserved for extended control characters
(128-159) with printable characters.
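
Expressed as code, those rules might look something like this (the
thresholds are the ones suggested above, and the exact set of
"accepted" control characters is my own guess):

    #include <stddef.h>

    /* Rough Unix/ISO-8859 text check: allow only a few control
       characters, reject the 128-159 range, and reject implausibly
       long lines. */
    int looks_like_text(const unsigned char *buf, size_t n)
    {
        size_t i, linelen = 0;

        for (i = 0; i < n; i++) {
            unsigned char c = buf[i];

            if (c == '\n') {
                linelen = 0;
                continue;
            }
            if (c < 32 && c != '\t' && c != '\r' && c != '\f')
                return 0;   /* control character outside the accepted few */
            if (c >= 128 && c <= 159)
                return 0;   /* extended control range */
            if (++linelen > 200)
                return 0;   /* line longer than 200 characters */
        }
        return 1;
    }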

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 13 '05 #18
