Python solutions for managing a scientific data dependency graph by specification values
I have a scientific data management problem which seems general, but I can't find an existing solution or even a description of it, which I have long puzzled over. I am about to embark on a major rewrite (python) but I thought I'd cast about one last time for existing solutions, so I can scrap my own and get back to the biology, or at least learn some appropriate language for better googling.

The problem: I have expensive (hours to days to calculate) and big (GBs) data attributes that are typically built as transformations of one or more other data attributes. I need to keep track of exactly how this data is built so I can reuse it as input for another transformation if it fits the problem (i.e. it was built with the right specification values), or construct new data as needed. Although it shouldn't matter, I typically start with 'value-added', somewhat heterogeneous molecular biology info, for example genomes with genes and proteins annotated by other researchers' processes. I need to combine and compare these data to make my own inferences. A number of intermediate steps are often required, and these can be expensive. In addition, the end results can become the input for additional transformations. All of these transformations can be done in multiple ways: by restricting the initial data (e.g. using different organisms), by using different parameter values in the same inference, or by using different inference models, etc. The analyses change frequently and build on others in unplanned ways. I need to know what data I have (what parameters or specifications fully define it), both so I can reuse it if appropriate and for general scientific integrity.

My efforts in general: I design my Python classes with the problem of description in mind. All data attributes built by a class object are described by a single set of parameter values. I call these defining parameters or specifications the 'def_specs', and the def_specs together with their values the 'shape' of the data atts. The entire global parameter state for the process might be quite large (e.g. a hundred parameters), but the data atts provided by any one class require only a small number of these, at least directly. The goal is to check whether previously built data atts are appropriate by testing whether their shape is a subset of the global parameter state.
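
For concreteness, here is a minimal sketch of the subset test I have in mind (all names and values are illustrative, not my actual code):

```python
# A data att's 'shape' is the dict of its def_specs (defining parameters)
# and their values; it is reusable iff that dict is a subset of the
# current global parameter state.

def shape_is_compatible(shape, global_state):
    """True if every def_spec in `shape` has the same value in `global_state`."""
    return all(
        spec in global_state and global_state[spec] == value
        for spec, value in shape.items()
    )

# Hypothetical parameter state and stored shape
global_state = {"organism": "E. coli", "evalue": 1e-5, "model": "WAG", "bootstrap": 100}
tree_shape = {"organism": "E. coli", "model": "WAG"}

print(shape_is_compatible(tree_shape, global_state))       # True  -> reuse
print(shape_is_compatible({"model": "LG"}, global_state))  # False -> rebuild
```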

Within a class it is easy to find the needed def_specs that define the shape by examining the code. The rub arises when a module needs a data att from another module. These data atts will have their own shape, perhaps passed as args by the calling object, but more often filtered from the global parameter state. The calling class should be augmented with the shape of its dependencies in order to maintain a complete description of its data atts. In theory this could be done manually by examining the dependency graph, but this graph can get deep, and there are many modules, which I am constantly changing and adding, and ... I'm too lazy and careless to do it by hand.

So the program dynamically discovers the complete shape of the data atts by tracking calls to other classes' attributes and pushing their shape back up to the caller(s) through a managed stack of __get__ calls. As I rewrite, I find that I need to strictly control attribute access to my builder classes to prevent arbitrary info from influencing the data atts. Fortunately, Python is making this easy with descriptors.
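
Roughly the mechanism I mean, as a toy sketch (the descriptor, the def_att decorator, and the Builder class are made-up illustrations; real code would also need caching, persistence, and re-entrancy handling):

```python
_CALL_STACK = []  # one shape-accumulator frame per builder currently running

class DataAtt:
    """Descriptor for a built data attribute. Its full shape is its own
    def_specs plus the def_specs of every data att read while building it."""

    def __init__(self, def_specs, build):
        self.def_specs = set(def_specs)
        self.build = build
        self.name = build.__name__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        frame = set()               # collects def_specs of nested accesses
        _CALL_STACK.append(frame)
        try:
            value = self.build(obj)
        finally:
            _CALL_STACK.pop()
        full_shape = self.def_specs | frame
        if _CALL_STACK:             # push our shape up to the caller
            _CALL_STACK[-1] |= full_shape
        obj.shapes[self.name] = {s: obj.params[s] for s in full_shape}
        return value

def def_att(*def_specs):
    """Decorator sugar: @def_att('organism', ...) marks a builder method."""
    return lambda build: DataAtt(def_specs, build)

class Builder:
    def __init__(self, params):
        self.params = params   # global parameter state (or a filtered view)
        self.shapes = {}       # discovered shape per data att

    @def_att("organism")
    def genes(self):
        return ["geneA", "geneB"]          # stand-in for expensive work

    @def_att("model")
    def tree(self):
        return ("tree over", self.genes)   # depends on the genes data att

b = Builder({"organism": "E. coli", "model": "WAG", "unrelated": 42})
b.tree
print(b.shapes["tree"])  # {'organism': 'E. coli', 'model': 'WAG'}
```

The point is just that the caller never has to know in advance which def_specs its dependencies will pull in.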

I store the shape of the data atts in a db so that I can query whether appropriate data (i.e. data whose shape is a subset of the current parameter state) already exists. In my rewrite I am moving from MySQL via the great SQLAlchemy to an object db (ZODB or CouchDB?), because the table for each class has to be altered when additional def_specs are discovered, which is a pain, and because some of the def_specs are Python lists or dicts, which are a pain to translate to SQL.
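
To illustrate why a schema-less store is attractive here, a sketch that records each shape (nested lists/dicts and all) as a plain mapping and scans for subset matches; the stdlib shelve module is only a stand-in for whatever object/document db ends up being chosen:

```python
import shelve

def register(db, att_id, path, shape):
    """Record where a built data att lives and the shape that defines it."""
    db[att_id] = {"path": path, "shape": shape}

def find_reusable(db, prefix, global_state):
    """Yield stored data atts whose shape is a subset of the current state."""
    for att_id, record in db.items():
        if att_id.startswith(prefix) and all(
            global_state.get(k) == v for k, v in record["shape"].items()
        ):
            yield att_id, record["path"]

with shelve.open("shapes_demo") as db:
    register(db, "profiles/0001", "/data/profiles_0001.npy",
             {"organisms": ["E. coli", "B. subtilis"], "evalue": 1e-5})
    state = {"organisms": ["E. coli", "B. subtilis"], "evalue": 1e-5, "model": "WAG"}
    print(list(find_reusable(db, "profiles/", state)))   # the stored att matches
```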

I don't think this data management can be separated from my data transformation code because of the need for strict attribute control, though I am trying to do so as much as possible. I can use existing classes by wrapping them with a class that provides their def_specs as class attributes and handles db management via descriptors, but such classes are terminal: no further discovery of additional dependency shape can take place inside them.
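
The wrapping I have in mind looks roughly like this (names are invented; the real wrapped object would come from some third-party library):

```python
class WrappedBlastHits:
    """Terminal wrapper around an existing result class: its def_specs are
    declared by hand, and no further dependency discovery happens inside."""

    def_specs = ("query_db", "evalue")

    def __init__(self, params, wrapped):
        self.shape = {k: params[k] for k in self.def_specs}
        self._wrapped = wrapped    # e.g. a parser/result object from another library

    def __getattr__(self, name):
        # Delegate everything else to the wrapped object unchanged.
        return getattr(self._wrapped, name)
```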

If the data management cannot easily be separated from the data construction, I guess it is unlikely that there is an out-of-the-box solution, only a thousand specific ones. Perhaps there is an applicable pattern? I'd appreciate any hints on how to go about looking, or on better describing the problem. To me it seems a general issue, though managing deeply layered data is perhaps at odds with the prevailing winds of the web.

Sphygmoid answered 19/6, 2010 at 19:34 Comment(6)
Your description of requirements for data attributes loosely reminds me of the "Trees!" and "Structural Sharing" slides from infoq.com/presentations/Are-We-There-Yet-Rich-Hickey by Rich Hickey (Clojure), around 0:41:10. - Barna
Thank you for the suggestion -- that looks deep, so I'll need to take some time with it. - Sphygmoid
OK, watched the talk and grokked a fair bit of it in a general way. I had never thought about time explicitly. Coming from my specific problem, I have seen the need to make my objects act more like functions providing immutable objects, as the only way to guarantee data integrity with known 'shapes'. I had only thought about concurrency a bit, but it reinforced the notion that my classes essentially had to act like functions and have strict attribute control. Thanks again. Got to stick to the practical, but that gives me some ideas and terminology to use, and a bit of faith. - Sphygmoid
I think it is better to use MongoDB than CouchDB. Both are document db's (a document here is a piece of JSON/binary JSON), not object db's, but MongoDB is faster because it doesn't have the REST overhead. - Embrocate
Thanks, both are new to me. I scanned the intro to MongoDB. I like the simpler querying, but have some concern about RAM use, as I need careful management. If either Python interface made converting nested Python objects easier (via jsonpickle?), that would probably be sufficient reason for me to choose it. - Sphygmoid
What exactly are your concerns about RAM? MongoDB becomes faster with more RAM, but I think this is true for every db. The db size is limited to 2 GB on a 32-bit system, but it is (almost) unlimited on a 64-bit system. - Embrocate
I don't have specific Python-related suggestions for you, but here are a few thoughts:

You're encountering a common challenge in bioinformatics. The data is large, heterogeneous, and comes in constantly changing formats as new technologies are introduced. My advice is not to overthink your pipelines, as they're likely to change tomorrow. Choose a few well-defined file formats, and massage incoming data into those formats as often as possible. In my experience, it's also usually best to have loosely coupled tools that each do one thing well, so that you can chain them together quickly for different analyses.

You might also consider taking a version of this question over to the bioinformatics stack exchange at http://biostar.stackexchange.com/

Ellita answered 19/6, 2010 at 21:10 Comment(1)
Thanks, will poke around at Biostar. This is a different problem from dealing with the primary data, which can be a pain to parse and check. Primary data is cheap (to me). It gets expensive when I layer it with more inferences -- e.g. assigning protein families by some phylogenetic inference with always somewhat arbitrary parameters, creating discrete co-occurrence profiles, finding genome context relationships, inferring biological process, combining with cellular location ... etc., all those things needed to get to real results (each transformation might typically be a published method). Then it's costly. - Sphygmoid
ZODB was not designed to handle massive data; it is aimed at web-based applications, and in any case it is a flat-file-based database.

I recommend you try PyTables, a Python library for handling HDF5 files, a format used in astronomy and physics to store results from big calculations and simulations. It can be used as a hierarchical database and also has an efficient way to pickle Python objects. By the way, the author of PyTables explained that ZODB was too slow for what he needed to do, and I can confirm that. If you are interested in HDF5, there is also another library, h5py.
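
For example, a minimal h5py sketch (the file, dataset, and attribute names are made up) that stores a big result together with the parameters that define it as HDF5 attributes:

```python
import numpy as np
import h5py

profiles = np.random.rand(1000, 20)        # stand-in for an expensive result

# Write the array plus its defining parameters as HDF5 attributes.
with h5py.File("results.h5", "w") as f:
    dset = f.create_dataset("cooccurrence/profiles_0001", data=profiles,
                            compression="gzip")
    dset.attrs["organisms"] = "E. coli,B. subtilis"
    dset.attrs["evalue_cutoff"] = 1e-5

# Later: inspect the metadata without loading the whole array.
with h5py.File("results.h5", "r") as f:
    dset = f["cooccurrence/profiles_0001"]
    print(dict(dset.attrs), dset.shape)
```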

As a tool for managing the versioning of your different calculations, you can try Sumatra, which is something like an extension to git/Trac but designed for simulations.

You should ask this question on Biostar; you will find better answers there.

Picturesque answered 21/6, 2010 at 8:52 Comment(2)
Thanks for the thoughts. It's really the 'shape' of the data (the metadata that defines it) that I am considering NoSQL for, not the data constructs themselves, though it would be convenient to store them alongside. I do use PyTables to store my data atts occasionally, with decent results, though usually they are just shelved numpy arrays. - Sphygmoid
Thanks for the heads-up about Sumatra. That does look like a great tool that I will keep my eye on. At first glance it doesn't look like it fits my problem: I don't want to store each data att with all parameters (many), but only those that define it. Otherwise, if I do a later analysis with a non-pertinent param changed, relevant data is not seen as such. So the problem is dynamically defining which params define each data att, which I don't think Sumatra deals with. Maybe it would be a good backend store for the metadata once determined, though, especially if indexable. - Sphygmoid