Building a parser (Part I)

Generally, you want to keep the tokeniser (also called a lexer) separate from the other stages of your compiler or interpreter. The reason for this is basic modularity: each pass consumes one kind of thing (e.g., characters) and produces another (e.g., tokens).
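
For instance, a tokeniser for a small language can be little more than a loop over a few regular expressions. Here’s a rough sketch; the token categories and patterns are purely illustrative:

import re

# Illustrative token patterns; a real tokeniser would also cover strings, comments, etc.
TOKEN_PATTERNS = [
  ("number", r"\d+"),
  ("name",   r"[A-Za-z_]\w*"),
  ("punct",  r"[{}();=+\-*/<>,]"),
  ("space",  r"\s+"),
]

def tokenise(text):
  tokens = []
  pos = 0
  while pos < len(text):
    for kind, pattern in TOKEN_PATTERNS:
      match = re.match(pattern, text[pos:])
      if match:
        if kind != "space":          # whitespace separates tokens but isn't one
          tokens.append((kind, match.group()))
        pos += match.end()
        break
    else:
      raise SyntaxError("Unexpected character: " + text[pos])
  return tokens

With these patterns, tokenise("return x;") gives [("name", "return"), ("name", "x"), ("punct", ";")]; whether keywords like return get their own token type or stay plain names that the parser checks for is a design choice.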

So you’ve converted your characters to tokens. Now you want to convert your flat list of tokens to meaningful nested expressions, and this is what is conventionally called parsing. For a JavaScript-like language, you should look into recursive descent parsing. For parsing expressions with infix operators of different precedence levels, Pratt parsing is very useful, and you can fall back on ordinary recursive descent parsing for special cases.
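
To make the operator-precedence idea concrete, here is a small precedence-climbing sketch, one common way to realise the Pratt approach. It assumes tokens is a plain list of strings, operands are single number tokens, and the operator table is illustrative:

PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}   # higher number binds tighter

def parse_expression(tokens, pos=0, min_prec=1):
  left, pos = parse_atom(tokens, pos)
  while pos < len(tokens) and PRECEDENCE.get(tokens[pos], 0) >= min_prec:
    op = tokens[pos]
    # Parse the right-hand side with a higher minimum precedence,
    # so "1 + 2 * 3" groups as 1 + (2 * 3).
    right, pos = parse_expression(tokens, pos + 1, PRECEDENCE[op] + 1)
    left = ("binop", op, left, right)
  return left, pos

def parse_atom(tokens, pos):
  # A real parser would also handle parentheses, unary minus, names, calls...
  return ("num", tokens[pos]), pos + 1

Calling parse_expression("1 + 2 * 3".split()) returns ("binop", "+", ("num", "1"), ("binop", "*", ("num", "2"), ("num", "3"))) together with the final position, which is exactly the grouping the precedence table asks for.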

Just to give you a more concrete example based on your case, I’ll assume you can write two functions, accept(token) and expect(token), which examine the next token in the stream you’ve created: accept consumes the token and returns true only if it matches, while expect does the same but treats a mismatch as an error (possible implementations are sketched after the example). You’ll write a function for each type of statement or expression in the grammar of your language. Here’s Pythonish pseudocode for a statement() function, for instance:

def statement():

  if accept("if"):
    x = expression()
    y = statement()
    return IfStatement(x, y)

  elif accept("return"):
    x = expression()
    return ReturnStatement(x)

  elif accept("{")
    xs = []
    while True:
      xs.append(statement())
      if not accept(";"):
        break
    expect("}")
    return Block(xs)

  else:
    error("Invalid statement!")

This gives you what’s called an abstract syntax tree (AST) of your program, which you can then manipulate (optimisation and analysis), output (compilation), or run (interpretation).
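
The IfStatement, ReturnStatement and Block constructors used above don’t need to be anything fancy; plain data classes are enough. A sketch, with illustrative field names:

from dataclasses import dataclass

@dataclass
class IfStatement:
  condition: object   # the expression after "if"
  body: object        # the statement it guards

@dataclass
class ReturnStatement:
  value: object

@dataclass
class Block:
  statements: list

Each later pass is then just a recursive walk over these nodes.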
