Welcome back to the sequel of the parsers chapter of this article. This part is
dedicated to the ExpressionParser, the largest parser class for our little language.
This class parses a complete expression and, just like the other parser classes,
calls the Generator on the fly. When the parse has completed successfully, the
generated code is returned. Otherwise the parse is aborted and an exception is
thrown that tells the reason for the abort.
The ExpressionParser
The obligatory preamble of the ExpressionParser class is boring again:
public class ExpressionParser extends AbstractParser {
    public ExpressionParser(Tokenizer tz, User user) { super(tz, user); }
A Tokenizer and a User object are passed to the constructor of this class.
Here's the main entry point for this object:
public Code parse(Generator gen) throws InterpreterException {
    binaryExpression(gen, 0);
    return gen.getCode();
}
Another parser calls this method when it needs an entire expression to be parsed,
e.g. for a user defined function body. The method calls another private method
that does the actual job; it'll come a bit later (see below). The method returns
the compiled code that has been generated on the fly while parsing the expression.
Binary expressions
Here are the grammar rules again for a binary expression:
expression0: expression1 ( operator0 expression1 )*
expression1: expression2 ( operator1 expression2 )*
expression2: expression3 ( operator2 expression3 )*
expression3: expression4 ( operator3 expression4 )*
expression4: expression5 ( operator4 expression5 )*
expression5: unary ( operator5 unary )*
operator0: ':'
operator1: '==' | '!='
operator2: '<=' | '<' | '>' | '>='
operator3: '+' | '-'
operator4: '*' | '/'
operator5: '^'
Note the precedence level numbers in the rules for a binary expression. In
general, rule 'i' looks like this:
expression(i): expression(i+1) ( operator(i) expression(i+1) )*
Such a rule translates to Java code like this:
private void binaryExpression(Generator gen, int i)
        throws InterpreterException {
    Token token;
    if (i == ParserTable.binops.size())
        unaryExpression(gen);
    else {
        for (binaryExpression(gen, i+1);
             ParserTable.binops.get(i).
                contains((token= tz.getToken()).getStr()); ) {
            tz.skip();
            binaryExpression(gen, i+1);
            gen.makeFunctionInstruction(token, 2);
        }
    }
}
The parameter 'i' selects the operators of the precedence level that is concerned
in this binary expression. At the highest precedence level a binary expression is
just a unary expression; otherwise, at precedence level 'i', binary expressions
that take higher precedence level operands are parsed, separated by precedence
level 'i' operators.
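The ParserTable itself isn't shown here, but from the way binaryExpression uses
it, a plausible layout is one set of operator strings per precedence level,
lowest precedence first. The class and field names below are just a sketch of
that assumption, not the actual ParserTable:

```java
import java.util.*;

public class ParserTableSketch {
    // hypothetical layout of ParserTable.binops: one set of operator
    // strings per precedence level, lowest precedence first
    static final List<Set<String>> binops = Arrays.<Set<String>>asList(
        new HashSet<>(Arrays.asList(":")),
        new HashSet<>(Arrays.asList("==", "!=")),
        new HashSet<>(Arrays.asList("<=", "<", ">", ">=")),
        new HashSet<>(Arrays.asList("+", "-")),
        new HashSet<>(Arrays.asList("*", "/")),
        new HashSet<>(Arrays.asList("^")));

    public static void main(String[] args) {
        // binaryExpression(gen, i) consults level i of this table
        System.out.println(binops.size());               // 6 precedence levels
        System.out.println(binops.get(3).contains("+")); // true
    }
}
```

With such a table, `i == ParserTable.binops.size()` is exactly the point where
no binary operator levels are left and the parse falls through to unaryExpression.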
The 'makeFunctionInstruction' method in the Generator class generates a binary
operator instruction and adds it to the compiled code list. Also see the
explanation in the 'Grammar' part of this article. From a bird's eye view this
recursive (!) method parses binary expressions as described in the indexed
grammar rule above. If you understand this method, the rest of the ExpressionParser's
methods will be a piece of cake. This method takes care of the binary operator
precedence rules, i.e. it recursively parses expressions that take higher and
higher precedence operators until unary expressions are the only thing that's
left. Unary expressions are a bit of a mess syntactically speaking because they
can take so many forms. We'll split things up a bit.
Unary expressions
Here are the grammar rules for unary expressions again:
unary: ( unaryop unary ) | atomic
unaryop: '+' | '-' | '!' | '++' | '--'
A unary expression is a (possibly empty) sequence of unary operators followed by
an atomic expression. The operators are applied (or evaluated) from right to
left although they are parsed from left to right. Here's what the
unaryExpression method looks like:
private void unaryExpression(Generator gen) throws InterpreterException {
    Token token= tz.getToken();
    if (ParserTable.unaops.contains(token.getStr())) {
        tz.skip();
        unaryExpression(gen);
        gen.makeFunctionInstruction(token.mark(), 1);
    }
    else
        atomicExpression(gen);
}
When a unary operator is present in the token stream, this method calls itself
recursively and generates the compiled code for the unary operator afterwards.
If no unary operator is present in the token stream (anymore), the work is
delegated to the atomicExpression method.
Here's an example: suppose the input stream is '-!x', first the '-' operator
is parsed and the method calls itself again. The input stream now contains '!x'.
The '!' operator is parsed and the method calls itself again. The input stream
now is 'x' which is handled by the atomicExpression method. In the second call
of the unaryExpression method code is generated for the '!' operator; the method
returns and in the first call of this method code is generated for the '-'
operator. The code can symbolically be represented as 'x ! -'.
Once we can play a bit with the parsers we'll see that the compiled code resembles
a postfix representation of the infix expressions being compiled.
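To get a feel for that postfix representation, here's a tiny stand-alone sketch
(not part of the interpreter itself) that evaluates such a postfix instruction
list with a value stack, the way the Instructions will later do:

```java
import java.util.*;

public class PostfixDemo {
    // evaluate a postfix instruction list using a stack of values
    static double eval(String[] code) {
        Deque<Double> stack = new ArrayDeque<>();
        for (String instr : code)
            switch (instr) {
                case "+": stack.push(stack.pop() + stack.pop()); break;
                case "*": stack.push(stack.pop() * stack.pop()); break;
                default:  stack.push(Double.parseDouble(instr)); // a constant
            }
        return stack.pop();
    }

    public static void main(String[] args) {
        // the parser would compile the infix expression 1+2*3 to this
        System.out.println(eval(new String[] { "1", "2", "3", "*", "+" })); // 7.0
    }
}
```

Note how the precedence handling of binaryExpression is already baked into the
order of the postfix instructions: the '*' comes before the '+'.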
When we've descended this deep into our recursive descent parse there are no more
binary or unary operators to parse in this part of the expression. All that's
left are 'atomic' expressions.
Atomic expressions
Atomic expressions are a mess syntactically speaking. Basically an atomic expression
starts with a constant, a left parenthesis, a left curly bracket or a name.
The first options are relatively easy; it's the 'name' that's the cumbersome part.
All the Tokenizer knows is that it has scanned a name; but a name can be
anything:
1) a built-in function;
2) a user defined function or listfunc;
3) a quoted object (see the first part of this chapter);
4) a simple variable, or
5) an assignment to a variable.
Let's dig into this mess; here's the atomicExpression method:
private void atomicExpression(Generator gen) throws InterpreterException {
    Token token= tz.getToken();
    switch (type(token)) {
        case TokenTable.T_NUMB: constantExpression(token, gen); break;
        case TokenTable.T_NAME: nameExpression(token, gen); break;
        case ParserTable.T_USER: userExpression(token, gen); break;
        case ParserTable.T_FUNC: functionExpression(token, gen); break;
        case ParserTable.T_QUOT: quoteExpression(token, gen); break;
        default:
            if (expect("("))
                nestedExpression(gen);
            else if (expect("{"))
                listExpression(gen);
            else
                throw new ParserException(tz,
                        "expression expected");
    }
}
It delegates all the work to yet other methods.
Constant expressions
Constant expressions are easy: simply generate the Instruction that handles the
current constant. In our little language the only constants are doubles. Here's
the method that handles a constant:
private void constantExpression(Token token, Generator gen) {
    gen.makeConstantInstruction(token);
    tz.skip();
}
The method lets the Generator create a ConstantInstruction for the token (whatever
its value may be) and it considers the current token as processed, so it skips it.
Parenthesized expressions
Parenthesized expressions are relatively easy to parse too: a left parenthesis
has already been processed (see the atomicExpression method), so just a binary
expression followed by a right parenthesis needs to be parsed:
private void nestedExpression(Generator gen) throws InterpreterException {
    binaryExpression(gen, 0);
    demand(")");
}
Note that an entire binary expression, with all its precedence levels, is parsed
again using just one recursive method call. We've reached the heart of recursive
descent parsing and we will reach it again a couple of times.
For an expression as simple as '(1+1)' the following methods are recursively
called:
- binaryExpression(0)
- binaryExpression(1)
- binaryExpression(2)
- binaryExpression(3)
- binaryExpression(4)
- unaryExpression
- atomicExpression
- nestedExpression
and by then only the left parenthesis has been recognized and parsed. For the
first '1' we have to make the whole trip again:
- binaryExpression(0)
- binaryExpression(1)
- binaryExpression(2)
- binaryExpression(3)
- binaryExpression(4)
- unaryExpression
- atomicExpression
- constantExpression
Then the recursion unwinds to the level of the binaryExpression(2) call that
recognizes the '+' sign in the token stream, and the entire circus starts all
over again. There's no need to feel sad about it or to be afraid of it: recursion
is fast nowadays and the depth of the stack is no problem anymore either. Let's
go for the nasty final parts:
Name expressions
When a name is parsed that is not some sort of a function or a quoted object
it can be either just a name or an assignment. Here's how it's parsed:
private void nameExpression(Token token, Generator gen)
        throws InterpreterException {
    tz.skip();
    String assign= tz.getToken().getStr();
    if (ParserTable.asgns.contains(assign)) {
        tz.skip();
        gen.preAssignmentInstruction(token, assign);
        binaryExpression(gen, 0);
        gen.makeAssignmentInstruction(token, assign);
    }
    else {
        gen.makeNameInstruction(token);
        if (ParserTable.pstops.contains(assign)) {
            tz.skip();
            gen.makeConstantInstruction(Token.ONE);
            gen.makeAssignmentInstruction(token, assign.charAt(0)+"=");
        }
    }
}
After skipping the name, the method checks whether or not the name is followed by
an assignment operator. If so, it skips that token and invokes the topmost
binaryExpression method again for the right hand side value of the assignment.
Next it lets the code Generator generate an AssignmentInstruction for it.
If no assignment operator was present, it makes the code Generator generate a
simple NameInstruction for the name that has just been parsed, unless a postfix
++ or -- token follows the name: in that case the appropriate expression and
assignment are generated.
As you might see later, our postfix decrement and increment operators behave
identically to C/C++/Java's prefix increment and decrement operators. Confusing,
admittedly, but it takes less parsing this way for our simple little programming
language; and we're free to define and implement whatever we want after all ;-)
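As a hypothetical illustration of that prefix-like behaviour: the code generated
for 'x++' is 'push x, push 1, +='. Assuming the assignment instruction leaves the
updated value of the variable on the stack (an assumption about the still
unexplained Instructions), a run of that code could look like this sketch:

```java
import java.util.*;

public class PostIncDemo {
    // simulate the run-time effect of the code generated for 'x++':
    // a NameInstruction, ConstantInstruction ONE, and a '+=' assignment
    static double postInc(Map<String, Double> vars) {
        Deque<Double> stack = new ArrayDeque<>();
        stack.push(vars.get("x"));            // NameInstruction pushes x
        stack.push(1.0);                      // ConstantInstruction ONE
        double rhs = stack.pop(), lhs = stack.pop();
        vars.put("x", lhs + rhs);             // AssignmentInstruction '+='
        stack.push(vars.get("x"));            // assumed: pushes the NEW value
        return stack.pop();
    }

    public static void main(String[] args) {
        Map<String, Double> vars = new HashMap<>();
        vars.put("x", 3.0);
        System.out.println(postInc(vars));    // 4.0: prefix-like behaviour
        System.out.println(vars.get("x"));    // 4.0
    }
}
```

The expression 'x++' yields the new value 4, not the old value 3, which is
exactly the prefix-style behaviour described above.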
User function expressions
User function calls are quite simple to parse; here's how it's done:
private void userExpression(Token token, Generator gen)
        throws InterpreterException {
    parseArguments(gen, getUserArity(token));
    gen.makeInstruction(user.get(token.getStr()));
}
First the method parses the actual arguments of the user function. Then it makes
the code Generator create an instruction which it retrieves from the User object
(see the first part of this chapter).
The 'getUserArity' method looks like this:
private int getUserArity(Token token) throws InterpreterException {
    skipDemand("(");
    return user.get(token.getStr()).getArity();
}
First it demands a left parenthesis after having skipped the name of the user
defined function. Then it consults that User object again to get the arity of
the user defined function.
Parsing the actual arguments of the user defined function is done like this:
private int parseArguments(Generator gen, int n) throws InterpreterException {
    for (int i= 0; i < n; i++) {
        binaryExpression(gen, 0);
        if (i < n-1) demand(",");
    }
    demand(")");
    return n;
}
Every actual argument is a complete binary expression, so the topmost
binaryExpression method is invoked again. Every argument expression must be
followed by a comma, except for the last one, after which a right parenthesis is
expected.
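That comma/parenthesis discipline can be sketched in isolation; the token array
and the wellFormed helper below are made up for the example and just mimic what
parseArguments demands from the token stream:

```java
public class ArgListDemo {
    // minimal sketch of the discipline of parseArguments: n argument
    // slots separated by commas, closed by a right parenthesis
    static boolean wellFormed(String[] toks, int n) {
        int pos = 0;
        for (int i = 0; i < n; i++) {
            pos++;                                   // stand-in for binaryExpression
            if (i < n - 1 && !toks[pos++].equals(",")) return false;
        }
        return toks[pos].equals(")");                // the demand(")") at the end
    }

    public static void main(String[] args) {
        System.out.println(wellFormed(new String[] { "a", ",", "b", ")" }, 2)); // true
        System.out.println(wellFormed(new String[] { "a", ",", "b", "," }, 2)); // false
    }
}
```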
Built-in function expressions
Parsing built-in functions, whether listfuncs or ordinary functions, is done in a
similar way as we parsed user defined functions. Have a look:
private void functionExpression(Token token, Generator gen)
        throws InterpreterException {
    gen.makeFunctionInstruction(token,
        parseArguments(gen, getArity(token)));
}
Again a FunctionInstruction is generated by the code Generator. The only
difference is that the parser already knows how many arguments the built-in
function takes, i.e. it retrieves the arity from the ParserTable, where the
arities for built-in tokens are predefined:
private int getArity(Token token) throws InterpreterException {
    skipDemand("(");
    return ParserTable.arity.get(token.getStr());
}
Compare this method with the getUserArity method above: the only difference is
the way they find the arity for the function.
Quoted object expressions
The mysterious quoted objects (think of the 'if' object) look similar to
ordinary function calls but they aren't: the code for their arguments is not
generated by the normal code Generator; another Generator is used for the
purpose and the compiled code for the arguments is passed to the compiled quoted
object. Here's how it's done:
private void quoteExpression(Token token, Generator gen)
        throws InterpreterException {
    List<Code> args= new ArrayList<Code>();
    for (int i= 0, n= getArity(token); i < n; i++) {
        Generator arg= new Generator();
        binaryExpression(arg, 0);
        if (i < n-1) demand(",");
        args.add(arg.getCode());
    }
    demand(")");
    gen.makeQuoteInstruction(token, args);
}
Each argument is compiled by its own Generator and the resulting code is
collected in a list of compiled code. The syntax checks are similar to the
syntax checks for the user defined or built-in functions. Finally a
QuoteInstruction is created by the 'normal' main code Generator. The use of this
mysterious behaviour has been partly explained in the previous part of this
chapter and we'll get back to it when we discuss the Instructions.
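To see why the arguments must stay unevaluated, here's a small stand-alone
sketch, not the interpreter's actual mechanism, where a Supplier plays the role
of a separately compiled Code object for an 'if':

```java
import java.util.*;
import java.util.function.Supplier;

public class QuoteDemo {
    // each argument of a quoted 'if' is compiled by its own Generator;
    // a Supplier stands in for such a separately compiled Code object
    static double evalIf(List<Supplier<Double>> args) {
        // only the selected branch is ever evaluated
        return args.get(0).get() != 0 ? args.get(1).get() : args.get(2).get();
    }

    public static void main(String[] args) {
        double r = evalIf(Arrays.asList(
            () -> 1.0,                                  // condition
            () -> 42.0,                                 // then-branch
            () -> { throw new ArithmeticException(); }  // else-branch
        ));
        System.out.println(r); // 42.0: the else-branch never ran
    }
}
```

Had the else-branch been compiled into the main code stream and evaluated
eagerly, the exception would have been thrown; keeping the argument code
separate is what makes a lazy 'if' possible.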
List expressions
There's just one more part to explain. Our little language can handle lists.
A list is a bunch of expressions between curly brackets. Note that a list is
itself an expression, so a list can have lists as its elements, which can have
lists as their elements, etc. etc.
This is how a list is parsed:
private void listExpression(Generator gen) throws InterpreterException {
    int arity;
    for (arity= 0;; ) {
        if (expect("}"))
            if (arity == 0) break;
            else throw new ParserException(tz,
                    "list expression error");
        binaryExpression(gen, 0);
        arity++;
        if (expect(",")) continue;
        demand("}");
        break;
    }
    gen.makeListInstruction(arity);
}
The method loops to collect all the elements of the list. It takes care that the
syntax is correct by calling the expect and demand methods at the right places
(check it), and finally the code Generator is asked to create a ListInstruction
given the number of elements in the list that has just been parsed. Note that
the list can contain other lists as its elements, thanks to all the recursion
that is used.
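The Instructions come later in this article, but as a hypothetical sketch of
what a ListInstruction with the given arity could do at run time: pop 'arity'
values off the stack and push them back as a single list value:

```java
import java.util.*;

public class ListInstrDemo {
    // assumed run-time behaviour of a ListInstruction: pop 'arity'
    // values off the stack and collect them (in order) into one list
    static List<Object> makeList(Deque<Object> stack, int arity) {
        LinkedList<Object> list = new LinkedList<>();
        for (int i = 0; i < arity; i++) list.addFirst(stack.pop());
        return list;
    }

    public static void main(String[] args) {
        Deque<Object> stack = new ArrayDeque<>();
        stack.push(1.0); stack.push(2.0); stack.push(3.0); // elements of {1,2,3}
        stack.push(makeList(stack, 3));
        System.out.println(stack.peek()); // [1.0, 2.0, 3.0]
    }
}
```

Because every element is itself a complete binary expression, a nested list like
{1,{2,3}} simply leaves an inner list value on the stack before the outer
ListInstruction runs.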
Code
We've seen this mysterious Code object a couple of times. A Code object is
managed by the code Generator. A Code object is just this:
public class Code extends ArrayList<Instruction> {
    private static final long serialVersionUID = 3728486712655477743L;
    public static Code read(InputStream is) throws InterpreterException {
        try {
            return (Code)new ObjectInputStream(is).readObject();
        }
        catch (Exception e) {
            throw new InterpreterException("can't read code", e);
        }
    }
    public static Code read(String name) throws InterpreterException {
        FileInputStream fis= null;
        try {
            return read(fis= new FileInputStream(name));
        }
        catch (IOException ioe) {
            throw new InterpreterException("can't read code from: "+name, ioe);
        }
        finally {
            try { fis.close(); } catch (Exception e) { }
        }
    }
    public void write(OutputStream os) throws InterpreterException {
        try {
            ObjectOutputStream oos= new ObjectOutputStream(os);
            oos.writeObject(this);
            oos.flush(); // push buffered data to the underlying stream
        }
        catch (Exception e) {
            throw new InterpreterException("can't write code", e);
        }
    }
    public void write(String name) throws InterpreterException {
        FileOutputStream fos= null;
        try {
            write(fos= new FileOutputStream(name));
        }
        catch (IOException ioe) {
            throw new InterpreterException("can't write code to: "+name, ioe);
        }
        finally {
            try { fos.close(); } catch (Exception e) { }
        }
    }
}
Besides being just a List of Instructions, the Code class contains a couple of
convenience methods, i.e. it can serialize itself to an OutputStream or a file,
and two static methods can read a serialized Code object back from an
InputStream or a file. We'll talk about Instructions in the next part of this
article.
Note that whenever reading or writing fails, the convenience methods nicely wrap
the thrown exception in an InterpreterException and let that one bubble up
to the callers of these methods. If the methods are passed a String, it is
supposed to be the name of a file; otherwise the streams are used for reading or
writing but they're not closed. It's up to the caller to take care of that.
That's normal behaviour: if an object 'owns' something, it's responsible for it;
otherwise some other object (the owner) should take responsibility. When a file
name was passed in, the Code object is the owner of the stream it creates and it
should close it when done with it.
Of course the read methods are static (there is no Code object yet) but the
write methods are not: a Code object can serialize itself. Last, note that funny
number: the Serialization framework needs it to tell different versions of the
Code class apart.
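As a quick illustration of the serialization round trip that Code.write and
Code.read perform, here's a sketch that goes through a byte array instead of a
file (so nothing touches disk), with an ArrayList of strings standing in for a
Code object:

```java
import java.io.*;
import java.util.*;

public class CodeRoundTrip {
    // round-trip a serializable instruction list, as Code.write/Code.read
    // do, but through a byte array instead of a file
    static ArrayList<String> roundTrip(ArrayList<String> code) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(code);                        // cf. Code.write(OutputStream)
        oos.flush();
        ObjectInputStream in =
            new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        @SuppressWarnings("unchecked")
        ArrayList<String> copy = (ArrayList<String>) in.readObject(); // cf. Code.read
        return copy;
    }

    public static void main(String[] args) throws Exception {
        ArrayList<String> code = new ArrayList<>(Arrays.asList("1", "2", "+"));
        System.out.println(roundTrip(code).equals(code)); // true
    }
}
```

Since ArrayList is serializable and Code extends it, the real Code object round
trips the same way, provided its Instructions are serializable too.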
Concluding remarks
As you might have noticed, parsers can be reused, i.e. they can parse several
token streams given a Reader; they reuse the same Tokenizer over and over again.
We need this behaviour because a parser stores the information w.r.t. user
defined functions and we want to use our parsers in an interactive environment
too, where the user enters pieces of text line by line. Parsers construct the
Tokenizer they use themselves; only classes (objects thereof) in the same
package as where the Tokenizer class lives can instantiate a Tokenizer.
There is a nice convenience method available that runs in a one shot mode:
given the name of a file, it opens a stream and parses the entire text. We
also need that behaviour when we want to use the parser as a regular compiler
that compiles files and possibly saves the generated code to a file again.
That was a lot of code in this part of the Compilers article, but it was worth
it: we have implemented the entire parser for our compiler. We still can't do
anything useful with this little language because there's the Generator to be
implemented, and finally there's an Interpreter; the thing that shows us whether
or not we've done things correctly in the previous stages of the construction of
the compiler system.
I hope you're still with me here after that entire avalanche of code for the
parsers. If you still are: congratulations; you have seen how parsers can be
constructed from formal grammar rules on a one-to-one basis with a bit of
detail sprinkled in. We already made our tokenizer so we can parse source text
and check for syntactical correctness now. The next section of this article
will include the source code we have so far and a bit more. We'll see how those
expressions are compiled to something equivalent to postfix form expressions and
we're going to start working on the instructions that actually perform what we
want them to perform, i.e. evaluate the expressions that make up our program.
See you next week and
kind regards,
Jos