Finite State Machine parser
I would like to parse a self-designed file format with an FSM-like parser in C++ (this is a teach-myself-C++-the-hard-way-by-doing-something-big-and-difficult kind of project :)). I have a tokenized string with newlines signifying the end of a line. See here for an input example. All the comments and junk are filtered out, so I have a std::string like this:

global \n { \n SOURCE_DIRS src \n HEADER_DIRS include \n SOURCES bitwise.c framing.c \n HEADERS ogg/os_types.h ogg/ogg.h \n } \n ...

Syntax explanation:

  • { } are scopes, and capitalized words signify that a list of options/files is to follow.
  • \n are only important in a list of options/files, signifying the end of the list.

So I thought that an FSM would be simple/extensible enough for my needs/knowledge. As far as I can tell (and want my file design to be), I don't need concurrent states or anything fancy like that. Some design/implementation questions:

  1. Should I use an enum or an abstract class + derivatives for my states? The first is probably better for a small syntax but could get ugly later, and the second is the exact opposite. I'm leaning toward the first for its simplicity. enum example and class example. EDIT: what about this suggestion for goto? I thought gotos were evil in C++?
  2. When reading a list, I need to NOT ignore \n. My preferred way of using the string, via a stringstream, will ignore \n by default, so I need a simple way of telling the (same!) stringstream not to ignore newlines when a certain state is enabled.
  3. Will the simple enum states suffice for multi-level parsing (scopes within scopes {...{...}...}) or would that need hacky implementations?
  4. Here's the draft states I have in mind:
    • upper: reads global, exe, lib+ target names...
    • normal: inside a scope, can read SOURCES..., create user variables...
    • list: adds items to a list until a newline is encountered.

Each scope will have a kind of conditional (e.g. win32:global { gcc:CFLAGS = ... }) and will need to be handled in the exact same fashion everywhere (even in the list state, per item).
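For reference, the enum approach I have in mind would look roughly like this (a minimal sketch; the step function and its token handling are simplified placeholders for illustration, not the real design):

```cpp
#include <cctype>
#include <string>

// Hypothetical sketch of the enum approach, using the three draft
// states.  Token handling here is illustrative only.
enum class State { Upper, Normal, List };

// Advance the machine by one token and return the next state.
State step(State s, const std::string& tok) {
    switch (s) {
    case State::Upper:
        if (tok == "{") return State::Normal;   // entering a scope
        return State::Upper;                    // target names: global, exe, lib...
    case State::Normal:
        if (tok == "}") return State::Upper;    // leaving the scope
        if (!tok.empty() && std::isupper(static_cast<unsigned char>(tok[0])))
            return State::List;                 // capitalized keyword starts a list
        return State::Normal;
    case State::List:
        return tok == "\n" ? State::Normal : State::List;  // newline ends the list
    }
    return s;
}
```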

Thanks for any input.

Struthious answered 21/6, 2010 at 13:31 Comment(2)
goto is not evil. Misused goto is evil. Used properly goto can make code easier to read and follow and one of the proper use cases I know of for them is exactly in coding FSMs. Why do so many insist on goto being evil? Religion.Oleta
The problem is that abuse of it creates indecipherable monsters.Corpuz

If you have nesting scopes, then a Finite State Machine is not the right way to go, and you should look at a Context Free Grammar parser. An LL(1) parser can be written as a set of recursive functions, or an LALR(1) parser can be written using a parser generator such as Bison.
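For example, a recursive-descent parser for just the nesting part might look like this (a toy sketch; the function names and the brace-only grammar are illustrative, not your actual syntax — recursion depth mirrors nesting depth):

```cpp
#include <string>

// Minimal recursive-descent sketch for nested scopes ("{" ... "}").
bool parse_scope(const std::string& s, size_t& i);

// Parse a scope body: any characters, possibly containing nested scopes.
bool parse_body(const std::string& s, size_t& i) {
    while (i < s.size() && s[i] != '}') {
        if (s[i] == '{') {
            if (!parse_scope(s, i)) return false;  // recurse into nested scope
        } else {
            ++i;                                   // ordinary token character
        }
    }
    return true;
}

// Parse one "{ body }" scope.
bool parse_scope(const std::string& s, size_t& i) {
    ++i;                                           // consume '{'
    if (!parse_body(s, i)) return false;
    if (i >= s.size() || s[i] != '}') return false; // missing close brace
    ++i;                                           // consume '}'
    return true;
}

bool parse(const std::string& s) {
    size_t i = 0;
    return parse_body(s, i) && i == s.size();      // whole input consumed
}
```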

If you add a stack to an FSM, then you're getting into pushdown automaton territory. A nondeterministic pushdown automaton is equivalent to a context free grammar (though a deterministic pushdown automaton is strictly less powerful). LALR(1) parser generators actually generate a deterministic pushdown automaton internally. A good compiler design textbook will cover the exact algorithm by which the pushdown automaton is constructed from the grammar. (In this way, adding a stack isn't "hacky".) This Wikipedia article also describes how to construct the LR(1) pushdown automaton from your grammar, but IMO, the article is not as clear as it could be.

If your scopes nest only finitely deep (i.e. you have the upper, normal and list levels, but you don't have nested lists or nested normals), then you can use an FSM without a stack.
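A minimal sketch of the FSM-plus-stack idea (the function name and token shapes are hypothetical): tracking scope depth with an explicit stack is exactly what turns the FSM into a small deterministic pushdown automaton.

```cpp
#include <stack>
#include <string>
#include <vector>

// Sketch: track scope depth with an explicit stack so arbitrarily
// deep nesting can be checked.  Illustrative names, not real code.
bool scopes_balanced(const std::vector<std::string>& tokens) {
    std::stack<char> scopes;
    for (const auto& t : tokens) {
        if (t == "{") {
            scopes.push('{');                 // push on scope entry
        } else if (t == "}") {
            if (scopes.empty()) return false; // unmatched close brace
            scopes.pop();                     // pop on scope exit
        }
    }
    return scopes.empty();                    // every scope was closed
}
```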

Execution answered 21/6, 2010 at 13:42 Comment(3)
Seems to me that a FSM would work fine if you maintain a stack of machine states (the scopes).Phenylamine
@Zan I've edited my answer to respond to your comment. You're suggesting modifications that have very good theoretical basis, and have been formalized in great detail. I suggest reading up on compiler design to find out more about how things are implemented in today's parser generators.Execution
I'm planning on supporting unlimited nesting, something like SOURCES main.cpp win32:{ msvc:class_winmsvc.cpp gcc:class_wingcc.cpp} mac:{ class_mac.cpp otherstuff.cpp }. Maybe not the best coded project that will need this, but it seems pretty powerful to me. This does raise the question of the need to keep a sequence in the stack, to know exactly where you are in the scopes. Would this take care of itself, or need special attention? ThanksStruthious

There are two stages to analyzing a text input stream for parsing:

Lexical Analysis: This is where your input stream is broken into lexical units. It looks at a sequence of characters and generates tokens (analogous to words in spoken or written languages). Finite state machines are very good at lexical analysis provided you've made good design decisions about the lexical structure. From your data above, individual lexemes would be things like your keywords (e.g. "global"), identifiers (e.g. "bitwise", "SOURCES"), symbolic tokens (e.g. "{", "}", ".", "/"), numeric values, escape values (e.g. "\n"), etc.

Syntactic / Grammatical Analysis: Upon generating a sequence of tokens (or perhaps while you're doing so), you need to be able to analyze the structure to determine if the sequence of tokens is consistent with your language design. You generally need some sort of parser for this, though if the language structure is not very complicated, you may be able to do it with a finite state machine instead. In general (and since you want nesting structures in your case in particular) you will need to use one of the techniques Ken Bloom describes.

So in response to your questions:

Should I use an enum or an abstract class + derivatives for my states?

I found that for small tokenizers, a matrix of state/transition values is suitable, something like next_state = state_transitions[current_state][current_input_char]. In this case, next_state and current_state are some integer type (possibly an enumerated type). Input errors are detected when you transition to an invalid state. The end of a token is identified when you reach a valid end state that has no valid transition to another state for the next input character. If you're concerned about space, you could use a vector of maps instead. Making the states classes is possible, but I think that's probably making things more difficult than you need.
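A toy example of the matrix approach (the states and character classes here are invented just to illustrate the table lookup; this little machine recognizes unsigned integers):

```cpp
#include <string>

// Table-driven sketch: next_state = transitions[current_state][input_class].
enum { START, DIGITS, ERR, NSTATES };        // states (ERR = invalid state)
enum { CLS_DIGIT, CLS_OTHER, NCLASSES };     // character classes

constexpr int transitions[NSTATES][NCLASSES] = {
    /* START  */ { DIGITS, ERR },
    /* DIGITS */ { DIGITS, ERR },
    /* ERR    */ { ERR,    ERR },
};

// Map a raw character onto its character class.
int classify(char c) { return (c >= '0' && c <= '9') ? CLS_DIGIT : CLS_OTHER; }

// Run the machine over the whole string.
bool is_number(const std::string& s) {
    int state = START;
    for (char c : s) state = transitions[state][classify(c)];
    return state == DIGITS;   // DIGITS is the only accepting state
}
```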

When reading a list, I need to NOT ignore \n.

You can either create a token called "\n", or a more generalized escape token (an identifier preceded by a backslash). If you're talking about identifying line breaks in the source, then those are simply characters you need to create transitions for in your state transition matrix (be aware of the difference between Unix and Windows line breaks, however; you could create an FSM that operates on either).
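On the stringstream side question: one simple workaround (among several; another would be installing a custom std::ctype facet that reclassifies '\n') is to split the input on '\n' with std::getline first and then tokenize each line with operator>>, emitting an explicit "\n" token. A sketch, with a hypothetical tokenize helper:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Keep line breaks significant while still using operator>> for tokens:
// split into lines first, then tokenize each line, and append a "\n"
// token at the end of every line.  Illustrative helper, not real code.
std::vector<std::string> tokenize(const std::string& input) {
    std::vector<std::string> tokens;
    std::istringstream lines(input);
    std::string line;
    while (std::getline(lines, line)) {
        std::istringstream words(line);
        std::string tok;
        while (words >> tok) tokens.push_back(tok);  // whitespace-split tokens
        tokens.push_back("\n");                      // keep the line break
    }
    return tokens;
}
```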

Will the simple enum states suffice for multi-level parsing (scopes within scopes {...{...}...}) or would that need hacky implementations?

This is where you will need a grammar or pushdown automaton unless you can guarantee that the nesting will not exceed a certain level. Even then, it will likely make your FSM very complex.

Here's the draft states I have in mind: ...

See my comments on lexical and grammatical analysis above.

Sazerac answered 21/6, 2010 at 14:52 Comment(4)
I do not need any tokenizing anymore. Each non-whitespace word is a token in my example, and my states will change according to a string/token, not a single char. #2 was actually an iostream-specific question: is it possible to set the "whitespace" definition of a stream (when e.g. using operator>> to read a new token)? Thanks for your input.Struthious
@rubenvb: I guess my point is that you need to make a clear distinction between what's possible with FSM based lexical analysis and what's not. If you've already done the tokenization, then, based on your question, you need to start working on something to perform the syntactic / grammatic analysis. This probably means a grammar, not an FSM.Sazerac
How about a grammar implemented in an FSM (+stack = pushdown automaton)? I don't want to write yet another description of the grammar; I want it to read and produce output in C++. I think (certainly for my example) that an FSM(+stack) is more than capable of handling the grammar stuff, no?Struthious
@rubenvb: Based on what you've described, a pushdown automaton will work. The question is whether you can develop a deterministic PDA which will be easier to implement and use than a non-deterministic one. Based on what I see, you probably can define a deterministic PDA for it, but I don't know for sure.Sazerac

For parsing I always try to use something already proven to work: ANTLR with ANTLRWorks, which is of great help for designing and testing a grammar. You can generate code for C/C++ (and other languages), but you need to build the ANTLR runtime for those languages.

Of course, if you find flex or bison easier to use, you can use them too (I believe they generate only C and C++, but I may be wrong, since I haven't used them in a while).

Warmedover answered 21/6, 2010 at 14:19 Comment(0)
