Writing a fast parser in Python

I've written a hand-rolled recursive pure-Python parser for a file format (ARFF) we use in one lecture. Running my exercise submission is now awfully slow, and it turns out that by far the most time is spent in my parser: it's consuming a lot of CPU time, and the HD is not the bottleneck.

I wonder what performant ways there are to write a parser in Python. I'd rather not rewrite it in C. I tried to use Jython, but that decreased performance a lot! The files I parse are in some cases huge (> 150 MB), with very long lines.

My current parser only needs a look-ahead of one character. I'd post the source here, but I don't know if that's such a good idea; after all, the submission deadline has not passed yet. But then, the focus of this exercise is not the parser: you can choose whatever language you want to use, and there already is a parser for Java.

Note: I have an x86_64 system, so Psyco (and, it seems, PyPy too) is not an option.

Update: I have now uploaded my parser/writer to Bitbucket.

Colpin answered 27/4, 2010 at 16:24 Comment(4)
Have you profiled your parser? Chances are it's just one bottleneck that's holding everything up.Rowles
Without a code example it is impossible to give decent advice. You could be using a sound technique with one major flaw, or your entire approach might need to be reworked; we have no way of knowing.Kruse
Have you tried using psyco with it?Crawford
I use scipy.io.arff.loadarff for parsing arff files - not sure if it's up to the scale, but it's served me well.Peters

You could use ANTLR or pyparsing; they might speed up your parsing process.

And if you want to keep your current code, you might want to look at Cython/PyPy, which can increase your performance (sometimes by up to 4x).

Pyuria answered 27/4, 2010 at 16:30 Comment(5)
Unlikely that pyparsing will speed things up, but it might shed some insight on where the bottlenecks are. Also, I believe an ARFF pyparsing parser has already been written and is out in the ether somewhere.Mite
That's true -- and I don't know how pyparsing fits with PyPy or Cython.Pyuria
The ARFF pyparsing parser linked from the Weka site is extremely incomplete (if that's the one you're talking about). Also, I've tried Cython, but because I use yield a lot I had to use an hg version, and all that did was produce code that segfaults.Colpin
PyPy seems to be 32-bit only; at least it didn't compile for me, and I have an x86_64 system. Also, the 32-bit binary they provide links against libraries I do not have installed.Colpin
pyparsing is fully compatible with the current release of PyPy, and runs about twice as fast on PyPy as on CPython.Mite

The most general tip I'd give without further information would be to read the entire file, or at least a substantial section of it, into memory at once. You don't want to be reading it one character at a time and seeking here and there; regardless of the buffering that's going on under the hood it's probably a good idea just to have the whole thing in memory so you can operate on it however you want.
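
For example, here's a minimal sketch of that approach (the file name is a placeholder, not from the question):

    # Sketch: read the whole file once, then scan it in memory.
    # 'data.arff' is a placeholder path for illustration.
    with open('data.arff') as f:
        master = f.read()   # one bulk read; 150 MB fits easily in RAM

    pos = 0
    while pos < len(master):
        ch = master[pos]    # one-character "look-ahead" is just an index
        pos += 1            # no I/O, no seeking, no per-call overhead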

I have written parsers in Python, and there's no particular reason for them to be much slower than a parser written in any other language. As it usually goes with these sorts of things, it's more likely that you're doing work you don't need to do. In that class of mistake: creating, destroying, and recreating the same object is more costly than just storing it somewhere. Recomputing a value over and over again is more costly than computing it once and storing it. Etc., etc.
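
As a tiny illustration of the "compute it once" point (the delimiter set here is invented for the example):

    # Slow: builds a brand-new set object on every single call.
    def is_delim_slow(ch):
        return ch in set(',{}% \t\n')

    # Faster: build the set once and reuse it.
    DELIMS = set(',{}% \t\n')
    def is_delim(ch):
        return ch in DELIMS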

In Python specifically, one trap that people fall into is doing a lot of unnecessary string manipulation. Don't append to strings one character at a time; when you're building up your tokens, do your work on the "master" string and strip out the token in one fell swoop. (In other words, index into the "master" string, figure out the start and end points, and then grab it with token = master[start:end].) Doing string concatenation one character at a time is a short path to performance misery. I suspect that even if you want/need for some reason to do for c in master: newstr += c, you'd have better luck stuffing the characters into a list and then doing newstr = ''.join(newstr_charlist).
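
Roughly like this, assuming master already holds the whole input (the sample input is made up):

    # Placeholder input so the sketch is self-contained.
    master = "@relation example"
    pos = 0

    # Slice the token out in one step instead of appending char by char.
    start = pos
    while pos < len(master) and not master[pos].isspace():
        pos += 1
    token = master[start:pos]   # token == "@relation", a single slice

    # If you must go character by character, collect into a list and join.
    chars = []
    for c in master:
        chars.append(c)
    newstr = ''.join(chars)     # one join instead of O(n**2) copies via +=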

Apart answered 27/4, 2010 at 17:3 Comment(6)
I use things like this a lot; is this the fastest way? However, this particular code is not triggered by my use cases. pastebin.com/yAmEcKB8Colpin
Oh, and I read 4096-byte chunks from the file (the readc() and peek() methods operate on these chunks). I don't think reading the whole file would be a good idea, because the files get up to > 150 MB in size.Colpin
Modern computers have 512M or more of memory. Reading 150MB is nothing. :)Apart
The code is an interesting use of generators, but I'd be worried about the function call overhead. Maybe it's perfectly fine, but I might try doing an implementation without generators, just indexing into the string, to see if there was any particular performance difference.Apart
Function calls in CPython are extremely expensive, and writing a good, generalized parser without making function calls is inelegant.Floyd
I have one particular case where the parser first reads protocol definitions from one set of files and then message instances based on those from a second set of files. The heavily-optimized Python version takes 56 seconds to complete the task. The PyPy JIT takes 18 seconds. A naive C++ version using flex/yacc takes under 900 ms. Rewritten Python code with two purpose-built parsing algorithms (for protos and instances) takes 5.7 seconds.Floyd
