I am working on a simple SQL SELECT-like query parser, and I need to be able to capture subqueries that can occur at certain places literally, as raw text. I found that lexer states are the best solution, and I was able to do a POC using curly braces to mark the start and end (a trimmed-down version of that POC is at the end of this post). However, the subqueries will be delimited by parentheses, not curlies, and parentheses can occur in other places as well, so I can't begin the state on every open-paren.

This information is readily available to the parser, so I was hoping to call begin and end at the appropriate points in the parser rules. That didn't work, however, because the lexer seems to tokenize the stream all at once, and so the tokens get generated in the INITIAL state. Is there a workaround for this problem? Here is an outline of what I tried to do:
def p_value_subquery(p):
    """
    value : start_sub end_sub
    """
    # p[2] holds the subquery text returned by end_subquery()
    p[0] = "( " + p[2] + " )"

def p_start_sub(p):
    """
    start_sub : OPAR
    """
    start_subquery(p.lexer)
    p[0] = p[1]

def p_end_sub(p):
    """
    end_sub : CPAR
    """
    subquery = end_subquery(p.lexer)
    p[0] = subquery
The start_subquery() and end_subquery() helpers are defined like this:
def start_subquery(lexer):
    lexer.code_start = lexer.lexpos  # Record the starting position
    lexer.level = 1                  # Paren nesting depth
    lexer.begin('subquery')

def end_subquery(lexer):
    # Everything from just after the open-paren up to (but excluding)
    # the closing paren
    value = lexer.lexdata[lexer.code_start:lexer.lexpos - 1]
    lexer.lineno += value.count('\n')
    lexer.begin('INITIAL')
    return value
The lexer token rules are there simply to track the paren nesting and detect the closing paren:
@lex.TOKEN(r"\(")
def t_subquery_SUBQST(t):
lexer.level += 1
@lex.TOKEN(r"\)")
def t_subquery_SUBQEN(t):
lexer.level -= 1
@lex.TOKEN(r".")
def t_subquery_anychar(t):
pass
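For this outline to hang together, I'm assuming the close-paren rule would eventually have to hand the outermost CPAR back to the parser once the nesting level drops to zero; the level check below is my addition, modeled on the conditional-lexing example in the PLY docs:

@lex.TOKEN(r"\)")
def t_subquery_SUBQEN(t):
    t.lexer.level -= 1
    if t.lexer.level == 0:
        # Return the outermost closing paren so the parser can
        # reduce end_sub and call end_subquery()
        t.type = 'CPAR'
        return t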
I would appreciate any help.
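For reference, here is a trimmed-down sketch of the working curly-brace POC mentioned at the top. It follows the conditional-lexing example in the PLY documentation; the WORD token and the driver at the bottom are just scaffolding for the demo:

import ply.lex as lex

tokens = ('SUBQUERY', 'WORD')

# Exclusive state: while it is active, only t_subquery_* rules apply.
states = (('subquery', 'exclusive'),)

# An opening curly in INITIAL unambiguously starts a subquery.
def t_lbrace(t):
    r'\{'
    t.lexer.code_start = t.lexer.lexpos  # position just past the '{'
    t.lexer.level = 1                    # current nesting depth
    t.lexer.begin('subquery')

def t_subquery_lbrace(t):
    r'\{'
    t.lexer.level += 1

def t_subquery_rbrace(t):
    r'\}'
    t.lexer.level -= 1
    if t.lexer.level == 0:
        # Emit everything between the outermost braces as one token
        t.value = t.lexer.lexdata[t.lexer.code_start:t.lexer.lexpos - 1]
        t.type = 'SUBQUERY'
        t.lexer.lineno += t.value.count('\n')
        t.lexer.begin('INITIAL')
        return t

def t_subquery_anychar(t):
    r'[^{}]+'  # swallow everything that is not a brace

t_WORD = r'[A-Za-z_]\w*'
t_ignore = ' \t\n'
t_subquery_ignore = ''

def t_error(t):
    t.lexer.skip(1)

def t_subquery_error(t):
    t.lexer.skip(1)

lexer = lex.lex()
lexer.input('select { select a from { nested } t }')
for tok in lexer:
    print(tok)  # WORD 'select', then a single SUBQUERY token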