Expression parsing: how to tokenize

I'm looking to tokenize Java/JavaScript-like expressions in JavaScript code. My input will be a string containing the expression, and the output needs to be an array of tokens.

What's the best practice for doing something like this? Do I need to iterate over the string, or is there a regular expression that will do this for me?

I need this to be able to support:

  • Number and String literals (single and double quoted, with quote escaping)
  • Basic mathematical and boolean operators and comparators (+, -, *, /, !, and, not, <, >, etc.)
  • Dot and bracket notation for object access with recursion (foo.bar, foo['bar'], foo[2][prop])
  • Parentheses with nesting
  • Ternary operator (foo ? bar : 'baz')
  • Function calls (foo(bar))

I specifically want to avoid using eval() or anything of the sort for security reasons. Besides, eval() wouldn't tokenize the expression for me anyway.

Ejective answered 22/5, 2009 at 17:30 Comment(1)
Have you ever done any lexing/parsing before? – Garate

Learn to write a recursive-descent parser. Once you understand the concepts, you can do it in any language: Java, C++, JavaScript, SystemVerilog, ... whatever. If you can handle strings then you can parse.

Recursive-descent parsing is a basic technique for parsing that can easily be coded by hand. This is useful if you don't have access to (or don't want to fool with) a parser generator.

In a recursive-descent parser, every rule in your grammar is translated to a procedure that parses the rule. If you need to refer to other rules, then you do so by calling them - they're just procedures.

A simple example: expressions involving numbers, addition and multiplication (this illustrates operator precedence). First, the grammar:

expr ::= term
         | expr "+" term

term ::= factor
         | term "*" factor

factor ::= /[0-9]+/   (I'm using a regexp here)

Now to write the parser (which includes the lexer; with recursive-descent you can throw the two together). I've never used JavaScript, so let's try this in (my rusty) Java:

class Parser {
  String str;  // the expression being parsed
  int idx;     // current index into the string
  // Node, AddNode, MultNode, NumberNode and ParseException are assumed to be
  // small classes defined elsewhere.

  // expr ::= term | expr "+" term
  Node parseExpr() throws ParseException
  {
    Node op1 = parseTerm();
    Node op2;

    while (idx < str.length() && str.charAt(idx) == '+') {
      idx++;                        // consume the '+'
      op2 = parseTerm();
      op1 = new AddNode(op1, op2);  // left-associative: ((a+b)+c)
    }
    return op1;
  }

  // term ::= factor | term "*" factor
  Node parseTerm() throws ParseException
  {
    Node op1 = parseFactor();
    Node op2;

    while (idx < str.length() && str.charAt(idx) == '*') {
      idx++;                        // consume the '*'
      op2 = parseFactor();
      op1 = new MultNode(op1, op2);
    }
    return op1;
  }

  // factor ::= /[0-9]+/
  Node parseFactor() throws ParseException
  {
    StringBuffer sb = new StringBuffer();
    int old_idx = idx;

    while (idx < str.length() && str.charAt(idx) >= '0' && str.charAt(idx) <= '9') {
      sb.append(str.charAt(idx));
      idx++;
    }
    if (idx == old_idx) {
      throw new ParseException();  // no digits where a number was expected
    }
    return new NumberNode(sb.toString());
  }
}

You can see how each grammar rule translates into a procedure. I haven't tested this; that's an exercise for the reader.

You also need to worry about error detection. A real-world compiler needs to recover from parse errors so it can try to parse the remainder of its input. A one-line expression parser like this one does not need to attempt recovery at all, but it does need to determine that a parse error exists and flag it. The easiest way to do this, if your language allows it, is to throw an exception and catch it at the entry point to the parser. I haven't detected all possible parse errors in the example above.
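
For instance, an entry point for the toy parser above might look roughly like this (the constructor and the parse() wrapper are only a sketch, and just as untested as the rest):

class Parser {
  // ... parseExpr, parseTerm and parseFactor as above ...

  Parser(String input) {
    str = input;
    idx = 0;
  }

  // Entry point: catch the exception here and flag the error.
  Node parse() {
    try {
      Node result = parseExpr();
      if (idx != str.length()) {
        throw new ParseException();  // leftover input, e.g. "1+2)"
      }
      return result;
    } catch (ParseException e) {
      System.err.println("parse error at position " + idx);
      return null;
    }
  }
}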

For more info, look up "LL parser" and "Recursive descent parser" in Wikipedia. As I said at the beginning, if you can understand the concepts (and they're simple compared to the concepts behind LALR(1) state machine configuration closures) then you are empowered to write a parser for small tasks in any language, as long as you have some rudimentary string capability. Enjoy the power.

Zone answered 22/5, 2009 at 18:14 Comment(0)

For simple lexers where speed isn't critical, I usually write a regex for each kind of token and repeatedly attempt to match each one, in turn, against the start of the remaining input. (Make sure you don't wind up with an O(n^2) algorithm!) A tool like lex will yield a more efficient lexer because it combines all the regexes into a single state machine.
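
For illustration, a rough (and untested) sketch of that loop in Java could look like the following; the token names and patterns are only examples, not the exact set from the question:

import java.util.*;
import java.util.regex.*;

class SimpleLexer {
  // One pattern per token kind, tried in order; more specific patterns
  // (e.g. multi-character operators) should come before more general ones.
  static final String[] NAMES = { "NUMBER", "STRING", "NAME", "OP", "SPACE" };
  static final Pattern[] PATTERNS = {
    Pattern.compile("[0-9]+(\\.[0-9]+)?"),
    Pattern.compile("'(\\\\.|[^'\\\\])*'|\"(\\\\.|[^\"\\\\])*\""),
    Pattern.compile("[A-Za-z_][A-Za-z0-9_]*"),
    Pattern.compile("==|!=|<=|>=|[+\\-*/!<>?:.,()\\[\\]]"),
    Pattern.compile("\\s+")
  };

  static List<String[]> tokenize(String input) {
    List<String[]> tokens = new ArrayList<>();
    int pos = 0;
    while (pos < input.length()) {
      boolean matched = false;
      for (int i = 0; i < PATTERNS.length; i++) {
        // region() instead of substring() avoids copying the tail of the
        // string on every step -- that's the O(n^2) trap mentioned above.
        Matcher m = PATTERNS[i].matcher(input).region(pos, input.length());
        if (m.lookingAt()) {                 // must match at the current position
          if (!NAMES[i].equals("SPACE")) {   // drop whitespace tokens
            tokens.add(new String[] { NAMES[i], m.group() });
          }
          pos = m.end();
          matched = true;
          break;
        }
      }
      if (!matched) {
        throw new IllegalArgumentException("unexpected character at " + pos);
      }
    }
    return tokens;
  }
}

Tokenizing foo['bar'] + 2 with this should give the pairs (NAME, foo), (OP, [), (STRING, 'bar'), (OP, ]), (OP, +), (NUMBER, 2).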

Passus answered 22/5, 2009 at 17:44 Comment(0)

You need to implement a lexical analyzer. You can use js/cc to do it, or you can implement a finite automaton on your own.

Since, formally, the language you will be manipulating is regular, you may use a regular expression. But I don't recommend it.

Although I have never used js/cc, I would try it first, and if it doesn't work out I would build a lexical analyzer myself.
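
For what it's worth, the plain regular-expression route could look roughly like this in Java: one big alternation with a named group per token kind, where the group that matched tells you the kind. This is an untested sketch, and the token set is only an example:

import java.util.regex.*;

class OneRegexLexer {
  static final Pattern TOKEN = Pattern.compile(
      "(?<NUMBER>[0-9]+(\\.[0-9]+)?)"
    + "|(?<STRING>'(\\\\.|[^'\\\\])*'|\"(\\\\.|[^\"\\\\])*\")"
    + "|(?<NAME>[A-Za-z_][A-Za-z0-9_]*)"
    + "|(?<OP>==|!=|<=|>=|[+\\-*/!<>?:.,()\\[\\]])"
    + "|(?<SPACE>\\s+)");

  public static void main(String[] args) {
    String input = "foo.bar ? baz(1) : 'qux'";
    Matcher m = TOKEN.matcher(input);
    int pos = 0;
    while (pos < input.length()) {
      if (!m.find(pos) || m.start() != pos) {  // a character no pattern accepts
        throw new IllegalArgumentException("bad input at " + pos);
      }
      if (m.group("SPACE") == null) {          // skip whitespace
        System.out.println(m.group());
      }
      pos = m.end();
    }
  }
}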

Snell answered 22/5, 2009 at 17:44 Comment(2)
The language he is manipulating is not regular. "Parentheses with nesting" is required, and any language that allows that has an infinite number of equivalence classes, hence it is not regular. He would need a PDA, not an FA, to parse/tokenize the entire language. Now, recognizing individual tokens within the language can be done with FAs (regexes). – Jaynejaynell
He doesn't need to check whether parentheses are balanced, as I understood it. What he wants is to match parentheses as symbols of his language, and that is a regular language. – Snell
