I'm building a spreadsheet-like application where a lot of small calculations need to be stitched together in a tree structure. These calculations are user-defined, and I need a way for the user to enter them at runtime.
My current approach is to write a small "expression DSL" in F#, where I parse the input with FParsec, build a syntax tree based on a discriminated union and then can evaluate the expression. This works pretty well.
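To make the current approach concrete, here's a minimal sketch of what such a setup can look like (the type and function names are hypothetical, and the FParsec parsing step is omitted): a discriminated union for the syntax tree and a recursive evaluator over it.

```fsharp
// Hypothetical AST for a tiny expression language, as a discriminated union.
type Expr =
    | Num of float
    | Var of string
    | Add of Expr * Expr
    | Mul of Expr * Expr

// Recursive evaluator; variables are looked up in an environment map.
let rec eval (env: Map<string, float>) expr =
    match expr with
    | Num n -> n
    | Var name -> env.[name]
    | Add (a, b) -> eval env a + eval env b
    | Mul (a, b) -> eval env a * eval env b

// E.g. the input "price * qty + 2", parsed (via FParsec) into a tree:
let tree = Add (Mul (Var "price", Var "qty"), Num 2.0)
let result = eval (Map.ofList [ "price", 3.0; "qty", 4.0 ]) tree  // 14.0
```

An interpreter like this walks the tree on every evaluation, which is the cost the DLR question below is really about.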
However, I'm thinking of looking into basing the language on the DLR instead. Are there any upsides to going down this road (parse the input, generate the AST using the Scripting.AST stuff instead of my own, and let the DLR handle the execution of the calculation)?
Each calculation will probably be pretty small. The dependencies between the calculations will be taken care of at a higher level.
Can I expect better performance, since the DLR will compile the expression down to CIL, or will the overhead eat that up?
(As for using an existing language like IronPython: that will probably be hard, since I'm planning to add a lot of slice-and-dice operators and dimensionality-handling constructs to the language syntax.)