In article <br*************@news.t-online.com>, ra******@t-online.de says...
[ ... ]
> There are already context sensitivity issues. That's the reason why you
> can't write vector<vector<int>>. The "greater" token already is
> "overloaded".
That's not context sensitivity. Context sensitivity is when your
grammar contains at least one production like:
xA ::= whatever
where an 'A' is recognized as a particular syntactic element ONLY in the
context of an 'x'. Otherwise, it's recognized as some other syntactic
element.
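For reference, the construct the quoted poster is talking about looks
roughly like this (a sketch of mine, not anything from the post; the exact
diagnostic varies by compiler):

    #include <vector>

    // The lexer hands the parser a single ">>" (right-shift) token here,
    // so the nested template argument lists are never closed:
    //     std::vector<std::vector<int>> v1;   // rejected under the rules
    //                                         // being discussed here
    //
    // Writing the two '>' characters separately keeps the lexer from
    // munching them into one token:
    std::vector<std::vector<int> > v2;         // accepted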
In the case of '<<' or '>>', there's no such thing -- distinguishing
between '>' and '>>' is done entirely at the lexical level, before the
grammar sees either one at all. By the time the parser sees any of
these, the lexer has converted each one to a token. The lexer doesn't
use any context sensitivity either -- it just creates a token out of the
longest sequence of input characters that it can. I.e. it reads in
characters until it encounters one that can't possibly be part of any
token that started with the characters that have already been read. At
that point, it does one of two things: it returns the characters it's
already read as a token, or else it signals an error, because what it's
read isn't a token and the next character in the input can't be part of
any token that could start with those characters either.
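Here's a minimal sketch of that longest-match behavior for the '>' family
of tokens (my code, not taken from any real compiler; a real lexer drives
this from tables rather than hand-written ifs):

    #include <iostream>
    #include <string>

    // Consume the longest operator token starting at *p, assuming the
    // caller has already checked that *p == '>'.  The lexer never asks
    // what the surrounding grammar is doing; it only asks whether the
    // next character can extend the current token.
    std::string next_greater_token(const char*& p)
    {
        std::string tok(1, *p++);           // '>'
        if (*p == '>') {
            tok += *p++;                    // ">>" is a token, keep going
            if (*p == '=') tok += *p++;     // ">>=" is a token too
        } else if (*p == '=') {
            tok += *p++;                    // ">="
        }
        return tok;                         // next char can't extend the token
    }

    int main()
    {
        const char* src = ">>";             // the tail of vector<vector<int>>
        std::cout << next_greater_token(src) << '\n';   // prints ">>", not "> >"
    }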
There are a few parts of C++ that involve context sensitivity, but
they're mostly there to resolve ambiguities in the grammar proper --
e.g. in some cases, the choice between a declaration and an expression is
context sensitive.
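The usual illustration (my example, not from the post) is a statement the
parser can't classify without knowing what 'T' names at that point:

    int main()
    {
        // Case 1: T names a type, so "T * p;" declares p as a pointer to T.
        {
            typedef int T;
            T * p;          // declaration of p
            (void)p;
        }
        // Case 2: T names a variable, so the same tokens "T * p;" form an
        // expression statement that multiplies T by p and throws away the
        // result (expect an "unused value" warning, but it's valid code).
        {
            int T = 2, p = 3;
            T * p;          // expression statement
        }
    }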
--
Later,
Jerry.
The universe is a figment of its own imagination.