Communication between lexer and parser

While I wouldn’t classify much of the above as incorrect, I do believe several items are misleading.

  1. Lexing an entire input before running a parser has many advantages over other options. Implementations vary, but in general the memory required for this operation is not a problem, especially when you consider the type of information that you’d like to have available for reporting compilation errors.

    • Benefits
      • Potentially more information available during error reporting.
      • Languages written in a way that allows lexing to occur before parsing are easier to specify and write compilers for.
    • Drawbacks
      • Some languages require context-sensitive lexers that simply cannot operate before the parsing phase.

    Language implementation note: This is my preferred strategy, as it results in separable code and is best suited for translation to implementing an IDE for the language.

    Parser implementation note: I experimented with ANTLR v3 regarding memory overhead with this strategy. The C target uses over 130 bytes per token, and the Java target uses around 44 bytes per token. With a modified C# target, I showed it’s possible to fully represent the tokenized input with only 8 bytes per token, making this strategy practical for even quite large source files.

    Language design note: I encourage people designing a new language to do so in a way that allows this parsing strategy, whether or not they end up choosing it for their reference compiler.
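    To make the memory figure concrete: here is a hypothetical sketch of how a token can fit in 8 bytes. The field widths and the `text_of` helper are my own illustrative choices, not ANTLR's actual layout; the key idea is that token text is never stored, only an offset and length into the source buffer that is already in memory.

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <string>
    #include <vector>

    // Hypothetical 8-byte token: type plus a span into the source buffer.
    struct Token {
        std::uint32_t start;   // byte offset into the source buffer
        std::uint16_t length;  // token length in bytes
        std::uint16_t type;    // token kind (identifier, number, ...)
    };
    static_assert(sizeof(Token) == 8, "tokens pack into 8 bytes");

    // Token text is materialized on demand rather than stored per token.
    std::string text_of(const std::string& source, const Token& t) {
        return source.substr(t.start, t.length);
    }

    int main() {
        std::string source = "let x = 42";
        // Toy token stream over the buffer above (types are arbitrary).
        std::vector<Token> tokens = {{0, 3, 1}, {4, 1, 2}, {6, 1, 3}, {8, 2, 4}};
        assert(text_of(source, tokens[0]) == "let");
        assert(text_of(source, tokens[3]) == "42");
        return 0;
    }
    ```

    With this layout, error reporting can still show exact source text and positions, because the spans point back into the original buffer.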

  2. It appears you’ve described a “push” version of what I generally see described as a “pull” parser, which is what you have in #3. My work emphasis has always been on LL parsing, so this wasn’t really an option for me. I would be surprised if there are benefits to this over #3, but cannot rule them out.

  3. The most misleading part of this is the statement about C++. Proper use of iterators in C++ makes it exceptionally well suited to this type of behavior.

  4. A queue seems like a rehash of #3 with a middleman. While abstracting independent operations has many advantages in areas like modular software development, a lexer/parser pair for a distributable product offering is highly performance-sensitive, and this type of abstraction removes the ability to do certain types of optimization regarding data structure and memory layout. I would encourage the use of option #3 over this.

    As an additional note on multi-core parsing: The initial lexer/parser phases of compilation for a single compilation unit generally cannot be parallelized, nor do they need to be, considering how easy it is to simply run parallel compilation tasks on different compilation units (e.g. one lexer/parser operation on each source file, parallelizing across the source files but only using a single thread for any given file).
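    A sketch of that per-unit parallelism, with a trivial stand-in for the real lex+parse pass (here it just counts whitespace-separated tokens): each compilation unit gets its own single-threaded task, and the tasks run concurrently via `std::async`.

    ```cpp
    #include <cassert>
    #include <future>
    #include <string>
    #include <vector>

    // Hypothetical stand-in for a full single-threaded lex+parse pass
    // over one compilation unit; returns a token count.
    int parse_unit(const std::string& source) {
        int tokens = 0;
        bool in_token = false;
        for (char c : source) {
            bool ws = (c == ' ' || c == '\n' || c == '\t');
            if (!ws && !in_token) ++tokens;
            in_token = !ws;
        }
        return tokens;
    }

    int main() {
        // One task per compilation unit, parallel across units,
        // single-threaded within each unit.
        std::vector<std::string> units = {"int x ;", "return x + 1 ;", "f ( )"};
        std::vector<std::future<int>> jobs;
        for (const auto& u : units)
            jobs.push_back(std::async(std::launch::async, parse_unit, u));

        int total = 0;
        for (auto& j : jobs) total += j.get();
        assert(total == 3 + 5 + 3);
        return 0;
    }
    ```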

Regarding other options: For a compiler intended for widespread use (commercial or otherwise), generally implementers choose a parsing strategy and implementation which provides the best performance under the constraints of the target language. Some languages (e.g. Go) can be parsed exceptionally quickly with a simple LR parsing strategy, and using a “more powerful” parsing strategy (read: unnecessary features) would only serve to slow things down. Other languages (e.g. C++) are extremely challenging or impossible to parse with typical algorithms, so slower but more powerful/flexible parsers are employed.
