Method to find "cleanest" subset of data i.e. subset with lowest variability

I am trying to find a trend in several datasets. The trend involves finding the best-fit line, but I imagine the procedure would not be too different for any other model (just possibly more time-consuming).

There are 3 conceivable scenarios:

  1. All good data, where all of the data fits a single trend with low variability.
  2. All bad data, where all or most of the data exhibits tremendous variability and the entire dataset must be discarded.
  3. Partially good data, where some of the data may be good while the rest needs to be discarded.

If the overall percentage of data with extreme variability is too high, then the entire set must be discarded. In effect, case 3 is the general case, and the percentage of bad data determines which scenario you are in:

0% bad = Case 1
100% bad = Case 2

I am only looking for contiguous sections with low variability; i.e. I don't care about isolated individual points that happen to fit the trend.

What I am looking for is a smart way to subsection the dataset and search for the specified trend. As is the nature of the problem, I am not looking for sections that best fit the overall trend. I understand that the subsection with "cleaner" data will end up having slightly different trendline properties than the overall fit (which would include the outliers). This is exactly what I want, since this part of the data best reflects the actual trend.

I am fluent in C++ but, since I am trying to make the code open source and cross-platform, I am sticking to ISO C++ standards. This implies no .NET, but if you have a .NET example I would appreciate it if you could also help me convert it to ISO C++. I also have knowledge of Java, some assembly, and Fortran.

The datasets themselves are not huge, but there are about 150 million of them, so brute force may not be the best way.

Thanks in advance


I understand that I have left some things up in the air and so let me clarify:

  • Each dataset can, and probably will, have different trends; i.e. I am not looking for the same trend throughout all datasets.
  • The program user will define how close a fit they want
  • The program user will define how contiguous the subset must be before it is considered for trend fitting
  • In case the program is extended to allow for any type of fit (not simply linear), the user will define what model is to be fit -- THIS IS NOT A PRIORITY and if the above query is solved then I am sure this expansion would be relatively trivial
  • The outliers come about because of the nature of the experiment and the data acquisition technique, whereby data from "bad" sections must still be collected even though these areas are known to give outliers. Discarding these outliers DOES NOT imply that the data is being manipulated to fit any trend (statistics disclaimer, hehe).
Anthropogenesis answered 5/4, 2009 at 12:22 Comment(2)
Is there a problem with just calculating the line of best fit for the entire data, discarding the outliers, then recalculating the line for the new subset of points? – Clause
You could have basically summed up this question in under 5 lines. – Hackler

The RANSAC algorithm is one approach to what you're looking for, if I understand you right: http://en.wikipedia.org/wiki/RANSAC
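
In case it helps, here is a minimal sketch of RANSAC for a straight-line fit in plain ISO C++ (C++98-compatible, no external libraries). The iteration count, residual threshold and minimum inlier count are illustrative parameters you would tune for your data, not anything from the question itself; a production version would also seed the random generator and refit the final inlier set by least squares.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdlib>
#include <utility>
#include <vector>

struct Point { double x, y; };
struct Line  { double m, b; };   // y = m*x + b

// Fit a line exactly through two points (caller must avoid equal x's).
static Line fitTwoPoints(const Point& p1, const Point& p2) {
    Line l;
    l.m = (p2.y - p1.y) / (p2.x - p1.x);
    l.b = p1.y - l.m * p1.x;
    return l;
}

// Returns the line with the largest inlier support, plus those inliers.
std::pair<Line, std::vector<Point> >
ransacLine(const std::vector<Point>& data,
           int iterations,          // e.g. 500
           double threshold,        // max |vertical residual| for an inlier
           std::size_t minInliers)  // reject models with too little support
{
    Line best = { 0.0, 0.0 };
    std::vector<Point> bestInliers;

    for (int it = 0; it < iterations; ++it) {
        // 1. Pick two distinct random points as the minimal sample.
        std::size_t i = std::rand() % data.size();
        std::size_t j = std::rand() % data.size();
        if (i == j || data[i].x == data[j].x) continue;

        Line candidate = fitTwoPoints(data[i], data[j]);

        // 2. Collect points whose residual is within the threshold.
        std::vector<Point> inliers;
        for (std::size_t k = 0; k < data.size(); ++k) {
            double r = data[k].y - (candidate.m * data[k].x + candidate.b);
            if (std::fabs(r) <= threshold)
                inliers.push_back(data[k]);
        }

        // 3. Keep the candidate with the most support so far.
        if (inliers.size() >= minInliers && inliers.size() > bestInliers.size()) {
            best = candidate;
            bestInliers.swap(inliers);
        }
    }
    return std::make_pair(best, bestInliers);
}
```

Note that RANSAC as written does not enforce that the inliers form a contiguous section; for your "contiguous subset" requirement you could restrict the residual check to a sliding window, or post-filter the inlier set for sufficiently long runs of consecutive points.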

Mechelle answered 5/4, 2009 at 12:37 Comment(0)

You might use the term "outlier" in your searches. An outlier is a data point that represents either a special condition not captured in the experiment design, or a statistical fluke (a point drawn from the extremes of the distribution in a data set too small for you to expect that to happen).

Outlier elimination carries some risk of biasing the result by your expectation.
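
For what it's worth, here is a hedged sketch of the simpler "fit, discard outliers, refit" loop suggested in the comments above: an ordinary least-squares fit, dropping points whose residual exceeds k standard deviations, repeated until nothing more is removed. The Point/Line types and the cutoff k are illustrative assumptions, not something given in the question.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };
struct Line  { double m, b; };   // y = m*x + b

// Ordinary least-squares fit of y = m*x + b (assumes >= 2 points
// with distinct x values).
static Line leastSquares(const std::vector<Point>& pts) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double n = static_cast<double>(pts.size());
    for (std::size_t i = 0; i < pts.size(); ++i) {
        sx  += pts[i].x;  sy  += pts[i].y;
        sxx += pts[i].x * pts[i].x;
        sxy += pts[i].x * pts[i].y;
    }
    Line l;
    l.m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    l.b = (sy - l.m * sx) / n;
    return l;
}

// Repeatedly fit, then discard points more than k sigma from the line,
// until the point set stops shrinking.
Line sigmaClipFit(std::vector<Point> pts, double k /* e.g. 2.0 */) {
    Line line = leastSquares(pts);
    for (;;) {
        // Standard deviation of the residuals about the current fit.
        double ss = 0;
        for (std::size_t i = 0; i < pts.size(); ++i) {
            double r = pts[i].y - (line.m * pts[i].x + line.b);
            ss += r * r;
        }
        double sigma = std::sqrt(ss / pts.size());

        // Keep only points within k*sigma of the line.
        std::vector<Point> kept;
        for (std::size_t i = 0; i < pts.size(); ++i) {
            double r = pts[i].y - (line.m * pts[i].x + line.b);
            if (std::fabs(r) <= k * sigma) kept.push_back(pts[i]);
        }
        if (kept.size() == pts.size()) break;   // nothing removed: converged
        pts.swap(kept);
        line = leastSquares(pts);
    }
    return line;
}
```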

Partnership answered 5/4, 2009 at 13:54 Comment(2)
I may not have presented this clearly... I know it appears this might introduce bias, but that occurs only if I am eliminating valid data points. In my experimental setup, the outliers come exclusively from bad cells (this is a biological experiment), so I need them eliminated. – Anthropogenesis
I wasn't saying "Don't do it." Outlier elimination isn't necessarily bad: it is an accepted part of data analysis. Just be aware that it carries some risk. The usual mitigation is to comb through the discarded outliers in search of unexpected patterns... – Partnership
