
[RFC] C token counter


Request for comments on the following program:
http://www.contrib.andrew.cmu.edu/~a...kshop/tokens.c

It's a token counter for the C programming language, following
the outline kind-of-described here:
http://www.kochandreas.com/home/lang...sts/TOKENS.HTM
Basically, it's supposed to give a reasonable approximation of
the number of "atomic tokens" present in a C source file.

To those reading in c.l.c: Are there any glaring mistakes or
poor coding styles in the program itself? And did I miss any
corner cases --- does the program produce a wrong number of
tokens for any valid C99 program?

To those reading in c.l.m: Andreas, this is for you. (:
I'm just posting it generally in case anyone has any comments
along the lines of "gee, C is nifty!" or "gee, C can't do
anything!"

Please set follow-ups appropriately in your reply: comp.lang.c
probably doesn't care about Practical Language Comparison ;) and
comp.lang.misc definitely doesn't want long pedantic debates about
whether #define is one token or two.[1] ;)

-Arthur

[1] - I'm counting it as one token in my program, even though
it looks like technically it's not a "token" in any sense of the
word, Standard-ly speaking.
Nov 14 '05 #1
19 Replies


In comp.lang.misc Arthur J. O'Dwyer <aj*@nospam.andrew.cmu.edu> wrote:
comp.lang.misc definitely doesn't want long pedantic debates about
whether #define is one token or two.[1] ;)


Why not? They /are/ two tokens:

# define x y

Nils

--
Nils M Holm <nm*@despammed.com> -- http://www.t3x.org/nmh/
Nov 14 '05 #2

I'm reading this in comp.lang.c, followup set.

"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> writes:
Request for comments on the following program:
http://www.contrib.andrew.cmu.edu/~a...kshop/tokens.c

It's a token counter for the C programming language, following
the outline kind-of-described here:
http://www.kochandreas.com/home/lang...sts/TOKENS.HTM
Basically, it's supposed to give a reasonable approximation of
the number of "atomic tokens" present in a C source file.

To those reading in c.l.c: Are there any glaring mistakes or
poor coding styles in the program itself? And did I miss any
corner cases --- does the program produce a wrong number of
tokens for any valid C99 program?


I have just quickly skimmed your code, so please take my comments with a
grain of salt.

I prefer `int main (void)' over `int main ()', especially since you
don't use pre-C89-style definitions for all other functions.

When `getchar ()' returns `EOF', you should check if this is due to an
error or due to end-of-file condition.

The goal is apparently to count preprocessing-tokens, right? In that
case, you seem to parse numbers on a too detailed level. Preprocessing-
numbers are defined in much more general terms, see 6.4.8 (of the C99
standard) for details.

Martin
--
,--. Martin Dickopp, Dresden, Germany ,= ,-_-. =.
/ ,- ) http://www.zero-based.org/ ((_/)o o(\_))
\ `-' `-'(. .)`-'
`-. Debian, a variant of the GNU operating system. \_/
Nov 14 '05 #3

Arthur J. O'Dwyer wrote:
Request for comments on the following program:
http://www.contrib.andrew.cmu.edu/~a...kshop/tokens.c
[...]
To those reading in c.l.c: Are there any glaring mistakes or
poor coding styles in the program itself? And did I miss any
corner cases --- does the program produce a wrong number of
tokens for any valid C99 program?
It doesn't handle digraphs correctly: the following produces an error:

int main()<%%>

I think the task isn't well defined for a language with several
translation phases, like C, however. You have to decide whether to
count tokens or preprocessing tokens. In order to count tokens you
either need to specify that the input is already preprocessed, or
implement most of a preprocessor yourself. The following program has
35 preprocessing tokens, but only 11 tokens:

#define str(x) # x
int main() { puts(
str(
int main() { puts
("Hello, world.");
return 0;
}));}

Your program gives "34" for this (counting "#define" as a single
token), so it seems to be counting preprocessing tokens. However,
some valid preprocessing tokens, such as certain preprocessing
numbers, are rejected:

#define str(x) # x
int main() { str(3p+); }

I think all the programs above are strictly conforming.
comp.lang.misc definitely doesn't want long pedantic debates about
whether #define is one token or two.[1] ;)
[...]
[1] - I'm counting it as one token in my program, even though
it looks like technically it's not a "token" in any sense of the
word, Standard-ly speaking.


"#" is a punctuator, which is a token, and "define" is an identifier,
also a token.

Jeremy.
Nov 14 '05 #4

Nils M Holm <nm*@despammed.com> writes:
In comp.lang.misc Arthur J. O'Dwyer <aj*@nospam.andrew.cmu.edu> wrote:
comp.lang.misc definitely doesn't want long pedantic debates about
whether #define is one token or two.[1] ;)

Since this posting might be the start of a long pedantic debate ;) about
`#define' being two tokens, I ignored the Followup-To: comp.lang.misc,
and set a Followup-To: comp.lang.c.
Why not? They /are/ two tokens:

# define x y


<pedantic>
While you're right, the fact that spaces are allowed between `#' and
`define' doesn't prove anything. ;) The language /could/ have defined
the sequence of `#', spaces, and `define' as a single preprocessing-
token. String literals are an example of preprocessing-tokens which may
include spaces.
</pedantic>

Martin
--
,--. Martin Dickopp, Dresden, Germany ,= ,-_-. =.
/ ,- ) http://www.zero-based.org/ ((_/)o o(\_))
\ `-' `-'(. .)`-'
`-. Debian, a variant of the GNU operating system. \_/
Nov 14 '05 #5



Combined response to Martin's and Jeremy's replies.
Thanks (so far...)!

On Fri, 30 Apr 2004, Arthur J. O'Dwyer wrote:

Request for comments on the following program:
http://www.contrib.andrew.cmu.edu/~a...kshop/tokens.c
Things I fixed:

- int main(void) replaces int main(). Style issue.
- Comments inside string and character literals weren't being handled.
  Fixed now.
- I had forgotten digraphs entirely. Fixed now.
- Comments now count as whitespace; e.g., a/**/b is two tokens instead
  of one.
- "# define" et al. are now two tokens instead of one. Jeremy's
  argument convinced me, I guess. :) The fact that you can stick a
  comment in between the "#" and the "define" contributed to my
  conviction, too, although I knew that before.
- Unrecognized or probably-invalid symbols are passed through anyway,
  thus doing away with the NULL return from 'mystate'. This is to deal
  with problems involving stringizing macros: basically *anything*
  could be passed to them!
Martin Dickopp wrote:
The goal is apparently to count preprocessing-tokens, right? In that
case, you seem to parse numbers on a too detailed level. Preprocessing-
numbers are defined in much more general terms, see 6.4.8 (of the C99
standard) for details.
It seems at a quick glance that the pp-number definition would make
0xDE+0xAD into one token, where a more comprehensive approach would
suggest it's
"really" three tokens --- 0xDE, +, and 0xAD. So while I agree I'm
doing too much with numbers, I don't yet see a better way that jibes
with the way I'm trying to define "tokens."
Jeremy Yallop wrote:
I think the task isn't well defined for a language with several
translation phases, like C, however. You have to decide whether to
count tokens or preprocessing tokens.


Right. At least, I have to make up a definition of "token"
that makes sense for most applications. Post-preprocessing tokens
are too complicated to handle on this program's scale; and you
pointed out the "stringizing" issue, which I hadn't considered before
either.

Well, new version uploaded; same request for comments on this one. :)
Particularly, I'm not entirely sure that all numbers are handled
appropriately; and I'm not entirely sure that I did the digraph stuff
right --- especially 'PercentOp', which has to discriminate between
%, %:, %=, %>, and %:%:. FSMs are hard. ;-)

-Arthur
Nov 14 '05 #6

In comp.lang.misc Martin Dickopp <ex****************@zero-based.org> wrote:
While you're right, the fact that spaces are allowed between `#' and
`define' doesn't prove anything. ;) The language /could/ have defined
the sequence of `#', spaces, and `define' as a single preprocessing-
token.
While you are right, let's apply Occam's razor: which is more likely,
that the parts of /some/ specific tokens can be separated by
(otherwise useless) spaces, or that these spaces separate individual
tokens? :-)
String literals are an example of preprocessing-tokens which may
include spaces.


True. However, these spaces do have a purpose.

Nils

--
Nils M Holm <nm*@despammed.com> -- http://www.t3x.org/nmh/
Nov 14 '05 #7

"Arthur J. O'Dwyer" wrote:
... snip ...
Please set follow-ups appropriately in your reply: comp.lang.c
probably doesn't care about Practical Language Comparison ;) and
comp.lang.misc definitely doesn't want long pedantic debates about
whether #define is one token or two.[1] ;)

[1] - I'm counting it as one token in my program, even though
it looks like technically it's not a "token" in any sense of the
word, Standard-ly speaking.


I suggest you could have better set follow-ups yourself. How does
your code handle:

# if whatever
# define blah
# else
# undef foo
# endif

which is perfectly legal, and necessary for some older compilers.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #8

"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> writes:
Martin Dickopp wrote:

The goal is apparently to count preprocessing-tokens, right? In that
case, you seem to parse numbers on a too detailed level. Preprocessing-
numbers are defined in much more general terms, see 6.4.8 (of the C99
standard) for details.
It seems at a quick glance that the pp-number definition would make
0xDE+0xAD into one token,


Yes, it's one pp-token.
where a more comprehensive approach would suggest it's "really" three
tokens --- 0xDE, +, and 0xAD.


It's a pp-token which cannot be converted to a token. Compare the
following two programs (difference underlined):

#include <stdio.h>
int main (void) { printf ("%d\n", 0xDE + 0xAD); return 0; }
/* ^^^^^^^^^^^ */

and

#include <stdio.h>
int main (void) { printf ("%d\n", 0xDE+0xAD); return 0; }
/* ^^^^^^^^^ */

The first one displays the number 395. The second one violates the
constraint of 6.4#2, so the implementation is free to interpret it in
any way it likes (after issuing at least one diagnostic).

Martin
PS: C specific discussion, therefore Followup-To: comp.lang.c.
--
,--. Martin Dickopp, Dresden, Germany ,= ,-_-. =.
/ ,- ) http://www.zero-based.org/ ((_/)o o(\_))
\ `-' `-'(. .)`-'
`-. Debian, a variant of the GNU operating system. \_/
Nov 14 '05 #9

Nils M Holm <nm*@despammed.com> writes:
In comp.lang.misc Martin Dickopp <ex****************@zero-based.org> wrote:
While you're right, the fact that spaces are allowed between `#' and
`define' doesn't prove anything. ;) The language /could/ have defined
the sequence of `#', spaces, and `define' as a single preprocessing-
token.


While you are right, let's apply occam's razor: what is more likely,
that the parts of /some/ specific tokens can be separated by
(otherwise useless) spaces or that these space separate individual
tokens? :-)


There's not even a need to apply Occam's razor, since this aspect of the
C language is precisely defined: The likelihoods of the variants are
exactly 0 and 1, respectively. :)

Of course, I agree with your point: Defining the language such that the
sequence of `#', spaces, and `define' are a single token would have been
utterly stupid.

Martin
--
,--. Martin Dickopp, Dresden, Germany ,= ,-_-. =.
/ ,- ) http://www.zero-based.org/ ((_/)o o(\_))
\ `-' `-'(. .)`-'
`-. Debian, a variant of the GNU operating system. \_/
Nov 14 '05 #10


On Fri, 30 Apr 2004, Martin Dickopp wrote:

"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> writes:
Martin Dickopp wrote:

The goal is apparently to count preprocessing-tokens, right? In that
case, you seem to parse numbers on a too detailed level. Preprocessing-
numbers are defined in much more general terms, see 6.4.8 (of the C99
standard) for details.


It seems at a quick glance that the pp-number definition would make
0xDE+0xAD into one token,


Yes, it's one pp-token.
where a more comprehensive approach would suggest it's "really" three
tokens --- 0xDE, +, and 0xAD.


It's a pp-token which cannot be converted to a token. Compare the
following two programs (difference underlined):

#include <stdio.h>
int main (void) { printf ("%d\n", 0xDE + 0xAD); return 0; }
/* ^^^^^^^^^^^ */

and

#include <stdio.h>
int main (void) { printf ("%d\n", 0xDE+0xAD); return 0; }
/* ^^^^^^^^^ */


% gcc test.c
test.c: In function `main':
test.c:2: invalid suffix on integer constant

Wow. I learn something new every day!
This looks like *very* weird behavior to me... what's C's
rationale for parsing numbers this way rather than the "common-
sense" approach I *thought* it used? It would not be hard to
make 0xDE+0xAD the addition of the hex constants 0xDE and 0xAD,
rather than a syntax error; why did C decide to do the latter,
then? Just to make the formal lexing spec simpler?

-Arthur,
confused but slightly more enlightened
Nov 14 '05 #11

Arthur J. O'Dwyer wrote:
This looks like *very* weird behavior to me... what's C's rationale
for parsing numbers this way rather than the "common- sense"
approach I *thought* it used?
Dennis Ritchie's explanation seems plausible:

"In truth, I think that preprocessing numbers are the most
conspicuously incorrect thing X3J11 did. [...] Reportedly the idea
was worked out over lunch; it bears more signs of a late-night bar
session."
<news:Cy********@research.att.com>
It would not be hard to make 0xDE+0xAD the addition of the hex
constants 0xDE and 0xAD, rather than a syntax error; why did C
decide to do the latter, then? Just to make the formal lexing spec
simpler?


Yes. The C89 Rationale has the details:

3.1.8 Preprocessing numbers

The notion of preprocessing numbers has been introduced to simplify
the description of preprocessing. It provides a means of talking
about the tokenization of strings that look like numbers, or
initial substrings of numbers, prior to their semantic
interpretation. In the interests of keeping the description
simple, occasional spurious forms are scanned as preprocessing
numbers --- 0x123E+1 is a single token under the rules. The
Committee felt that it was better to tolerate such anomalies than
burden the preprocessor with a more exact, and exacting, lexical
specification. It felt that this anomaly was no worse than the
principle under which the characters a+++++b are tokenized as a ++
++ + b (an invalid expression), even though the tokenization a ++ +
++ b would yield a syntactically correct expression. In both cases,
exercise of reasonable precaution in coding style avoids surprises.

I don't see how it simplifies anything much, really, unless you're
writing a stand-alone preprocessor. Numbers have to be parsed
properly at some stage: the addition of preprocessing numbers means
that two number parsers (with conflicting behaviour) are needed rather
than one.

Jeremy.
Nov 14 '05 #12

"Arthur J. O'Dwyer" wrote:
On Fri, 30 Apr 2004, Martin Dickopp wrote:
... snip ...
It's a pp-token which cannot be converted to a token. Compare
the following two programs (difference underlined):

#include <stdio.h>
int main (void) { printf ("%d\n", 0xDE + 0xAD); return 0; }
/* ^^^^^^^^^^^ */

and

#include <stdio.h>
int main (void) { printf ("%d\n", 0xDE+0xAD); return 0; }
/* ^^^^^^^^^ */


% gcc test.c
test.c: In function `main':
test.c:2: invalid suffix on integer constant

Wow. I learn something new every day!
This looks like *very* weird behavior to me... what's C's
rationale for parsing numbers this way rather than the "common-
sense" approach I *thought* it used? It would not be hard to
make 0xDE+0xAD the addition of the hex constants 0xDE and 0xAD,
rather than a syntax error; why did C decide to do the latter,
then? Just to make the formal lexing spec simpler?


Consider 0xD. This is a hex value. The following E signifies a
floating point value, with the exponent following, which is
+0xAD. No different than 2e-23 in principle.

This is the sort of thing idiots who economize on blanks are
subject to, and that they impose on the poor suffering maintenance
programmer.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #13


On Fri, 30 Apr 2004, CBFalconer wrote:

"Arthur J. O'Dwyer" wrote:
On Fri, 30 Apr 2004, Martin Dickopp wrote:

#include <stdio.h>
int main (void) { printf ("%d\n", 0xDE+0xAD); return 0; }


test.c:2: invalid suffix on integer constant

It would not be hard to
make 0xDE+0xAD the addition of the hex constants 0xDE and 0xAD,
rather than a syntax error; why did C decide to do the latter,
then? Just to make the formal lexing spec simpler?


Consider 0xD. This is a hex value. The following E signifies a
floating point value, with the exponent following, which is
+0xAD. No different than 2e-23 in principle.


Wrong. "E" (or "e") signifies a floating-point exponent *only*
when used with decimal and octal constants. The floating-point
exponent signifier for hexadecimal constants is "P" (or "p"),
because "E" is a hex digit itself. 0xDE+1 is just as invalid a
C construct as 0xDE+0xAD. See, isn't that weird, now? ;)

-Arthur
Nov 14 '05 #14

"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> wrote in message
news:Pi***********************************@unix40. andrew.cmu.edu...


Combined response to Martin's and Jeremy's replies.
Thanks (so far...)!

On Fri, 30 Apr 2004, Arthur J. O'Dwyer wrote:

Request for comments on the following program:
http://www.contrib.andrew.cmu.edu/~a...kshop/tokens.c


If you're considering C99 digraphs, then...

g\u00E5

...should be reported as a single token.

--
Peter
Nov 14 '05 #15


On Sat, 1 May 2004, Peter Nilsson wrote:

"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> wrote...

Request for comments on the following program:
http://www.contrib.andrew.cmu.edu/~a...kshop/tokens.c


If you're considering C99 digraphs, then...

g\u00E5

...should be reported as a single token.


Request for clarification: First, universal character
names have nothing to do with digraphs, right? you just meant
that they're both obscure C99 features?
Second, are there any pitfalls involving these universal
characters? I just have to accept \u or \U in identifiers?
As I understand N869 6.4.3#2, I don't ever have to worry about
universal characters filling in for, say, digits or other
C tokens.
I don't think it's worth enumerating all those valid identifier
characters in Annex D of N869, for my program.

-Arthur
Nov 14 '05 #16


"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> wrote in message
news:Pi***********************************@unix49. andrew.cmu.edu...

On Sat, 1 May 2004, Peter Nilsson wrote:

"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> wrote...
>
> Request for comments on the following program:
> http://www.contrib.andrew.cmu.edu/~a...kshop/tokens.c
If you're considering C99 digraphs, then...

g\u00E5

...should be reported as a single token.


Request for clarification: First, universal character
names have nothing to do with digraphs, right? you just meant
that they're both obscure C99 features?


Yup. I should have typed: '(e.g. digraphs)', sorry.
Second, are there any pitfalls involving these universal
characters? I just have to accept \u or \U in identifiers?
The \u or \U must be followed by either 4 or 8 hex characters.

The only other constraint is...

A universal character name shall not specify a character whose
short identifier is less than 00A0 other than 0024 ($), 0040 (@),
or 0060 ('), nor one in the range D800 through DFFF inclusive.

[This differs from N869.]
As I understand N869 6.4.3#2, I don't ever have to worry about
universal characters filling in for, say, digits or other
C tokens.
Yes.
I don't think it's worth enumerating all those valid identifier
characters in Annex D of N869, for my program.


Your call, but fair enough. [If you do, the standard (+TC1) didn't change any of the
characters listed in appendix D from N869.]

--
Peter
Nov 14 '05 #18


On Sun, 2 May 2004, Peter Nilsson wrote:

"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> wrote...
On Sat, 1 May 2004, Peter Nilsson wrote:
"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> wrote...
> >
> > Request for comments on the following program:
> > http://www.contrib.andrew.cmu.edu/~a...kshop/tokens.c

g\u00E5
<snip> Second, are there any pitfalls involving these universal
characters? I just have to accept \u or \U in identifiers?
The \u or \U must be followed by either 4 or 8 hex characters.


...but if it's not, then we have undefined behavior in the C program,
so I'm allowed to do whatever I like. So the semantics of the token
counter don't need to include the counting of hex digits.
The only other constraint is...

A universal character name shall not specify a character whose
short identifier is less than 00A0 other than 0024 ($), 0040 (@),
or 0060 ('), nor one in the range D800 through DFFF inclusive.

[This differs from N869.]


Eek! You scared me there for a moment! s/'/`/ in the above text!
If 0027 (') could be replaced by \u0027, *that* would have been really
icky. But 0060 (`) doesn't do anything in C, so it's okay.

I don't think it's worth enumerating all those valid identifier
characters in Annex D of N869, for my program.


Your call, but fair enough.


Again, as far as I can tell if the user enters an invalid universal
character name in the middle of an identifier, we get undefined
behavior. My current token-counter has gone ahead with the "pp-number"
semantics for counting numeric constants, so 0x0E4+D..P-xg27 counts as
one token; I don't see why FOO\u4D99BAR should be counted (or not) any
differently.

Thanks,
-Arthur

Nov 14 '05 #19

Jeremy Yallop wrote:
I think the task isn't well defined for a language with several
translation phases, like C, however. You have to decide whether to
count tokens or preprocessing tokens.


Good point. I'll add that. We want preprocessing tokens: no macros
expanded, no files included, etc.

--
Andreas
Andreas' practical language comparison:
http://www.kochandreas.com/home/language/lang.htm
Nov 14 '05 #20
