Syntax Highlighting / Lexical Analysis Algorithms
What is the general algorithm used by a syntax highlighter? I have implemented a simple approach using alternation in regex:

STRING_PATTERN|COMMENT_PATTERN|KEYWORD_PATTERNS

Since detecting whether something is a string or a comment depends on which one starts first:

// This is a "comment"

"This is a // string"

But it gets a bit more complicated with keywords. This approach is working in my current implementation, but I'm not convinced it's optimal.

Another problem is the order you highlight in. If you highlight numbers before identifiers/keywords then you might accidentally highlight a number within a keyword...
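
For illustration, roughly the kind of combined pattern I mean, sketched here in Python (the group names and keyword list are illustrative only, not my actual plugin code). The alternative listed first wins when two could match at the same position, and otherwise the token that starts earliest in the text wins:

    import re

    TOKEN_PATTERN = re.compile(r'''
        (?P<COMMENT>//[^\n]*)            # line comment: // up to end of line
      | (?P<STRING>"(?:\\.|[^"\\])*")    # double-quoted string with escapes
      | (?P<KEYWORD>\b(?:if|else|for|while|return)\b)
      | (?P<NUMBER>\b\d+(?:\.\d+)?\b)    # \b keeps digits inside identifiers unmatched
    ''', re.VERBOSE)

    def tokenize(source):
        """Yield (token_type, text) pairs, scanning left to right."""
        for match in TOKEN_PATTERN.finditer(source):
            yield match.lastgroup, match.group()

    code = '// This is a "comment"\n"This is a // string"\n'
    for kind, text in tokenize(code):
        print(kind, repr(text))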

EDIT:

My plugin is now here: http://wordpress.org/extend/plugins/crayon-syntax-highlighter/

Evangelist answered 19/7, 2011 at 1:51 Comment(0)

You might struggle to do this with regex alone, because a regex gives your syntax highlighter no sense of context: it will match something wherever it appears, irrespective of whether that text is part of a larger possible match.

You need to investigate parser generators such as ANTLR, which, given a valid, unambiguous grammar, can give you tokens that take these details into account. For example, if a comment is defined as "//" up to the end of the line, it will return a comment token that supersedes any string characters or anything else inside it.

The standard approach for parsers like this is to read in the stream of characters (or tokens, more specifically) one at a time, so the highlighting depends not on the order of the rules you define but on the order in which tokens appear in the stream.

For example, a string could be two double-quote marks and everything in between (except for another double quote). A comment is two slashes and everything up to the end-of-line.

When parsing, if you find a double-quote then your program goes into a "I think it's a string" mode and once it finds the matching end-quote it confirms a string token, and returns it for highlighting. Similarly, if it finds two slashes then it searches until it finds an end-of-line (or end-of-file, in reality), then returns that as a token for highlighting.

It gets more complicated when there are multiple possible matching rules, e.g. for single-line and multi-line comments. If you grab a single slash character, your program needs to read another character before it can reject some of those options, i.e. until it sees either a second slash or a *, it won't know what sort of token it's in.

Fundamentally it all comes down to state machines. You could try building your own, or you could get something like ANTLR, feed it a grammar, and let it do all the work for you.
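
As a rough sketch of that hand-rolled approach (in Python here; the token names and structure are illustrative, not a definitive implementation), the scanner switches into a string or comment "mode" when it sees the opening characters and reads up to the closing delimiter, using one extra character of lookahead to tell // from /*:

    def lex(source):
        tokens = []
        i, n = 0, len(source)
        while i < n:
            ch = source[i]
            if ch == '"':                        # "I think it's a string" mode
                j = i + 1
                while j < n and source[j] != '"':
                    j += 2 if source[j] == '\\' else 1   # step over escaped characters
                tokens.append(('STRING', source[i:j + 1]))
                i = j + 1
            elif ch == '/' and i + 1 < n:        # need one more character to decide
                if source[i + 1] == '/':         # line comment: run to end of line
                    j = source.find('\n', i)
                    j = n if j == -1 else j
                    tokens.append(('COMMENT', source[i:j]))
                    i = j
                elif source[i + 1] == '*':       # block comment: run to the closing */
                    j = source.find('*/', i + 2)
                    j = n if j == -1 else j + 2
                    tokens.append(('COMMENT', source[i:j]))
                    i = j
                else:
                    tokens.append(('OTHER', ch))
                    i += 1
            else:
                tokens.append(('OTHER', ch))
                i += 1
        return tokens

A parser generator effectively builds a much more thorough version of this state machine for you from the grammar.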

Torrential answered 19/7, 2011 at 4:52 Comment(4)
My approach worked well for what you mention about what comes first, because the alternative that matches is the one that occurs first, like an OR statement. I think I'll take a stab at making one myself; after all, that's what I set out to do when making a syntax highlighter. Regex handles all the logic you explain as well, like what to consider a string, and it's easy to do multiline comments and strings in the same manner. My question wasn't about how to find occurrences of tokens, but what to do with them once they're found. I'll work on it I guess.Evangelist
Ah OK, I see what you're asking. Again, the problem you'll face is that you have a single regex that is looking for tokens, and the same regex will return a match but it WON'T tell you what sort of token you've received back. If you want to continue trying the regex approach you could look into named groups (this is generally a platform-specific syntax, though, and some languages e.g. Java don't even support it). If the regex can tell you which named group it matched on then you can use that name to do a lookup into a table containing your highlighting format rules (e.g. "keyword"->bold).Torrential
Also, although they go with the grammar approach to lexing/parsing, you could have a look at the Eclipse and XText APIs for how they handle syntax highlighting. It's not simple to get your head around but it might give you some ideas about a nice clean, structural separation between the parsing and the highlighting.Torrential
Thanks Chris, I've successfully implemented the grouping method, so I think I'll go with that. It's simple and there's no need for complicated logic once the tokens are found.Evangelist
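
For reference, a small Python sketch of the named-group lookup described in the comments above (the pattern, style table, and class names are invented for illustration, not the plugin's actual code): the name of whichever group matched is used as the key into a table of highlighting styles.

    import re

    STYLES = {'KEYWORD': 'crayon-keyword',    # hypothetical name -> CSS class map
              'STRING': 'crayon-string',
              'COMMENT': 'crayon-comment'}

    PATTERN = re.compile(
        r'(?P<COMMENT>//[^\n]*)|(?P<STRING>"[^"\n]*")|(?P<KEYWORD>\b(?:if|else|return)\b)')

    def highlight(source):
        """Wrap each recognised token in a span carrying its style class."""
        def wrap(match):
            css = STYLES[match.lastgroup]     # lookup keyed by the group that matched
            return f'<span class="{css}">{match.group()}</span>'
        return PATTERN.sub(wrap, source)

    print(highlight('return "a // b"  // trailing comment'))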
