What are good algorithms for detecting abnormality?
Background

Here is the problem:

  1. A black box outputs a new number each day.
  2. Those numbers have been recorded for a period of time.
  3. Detect when a new number from the black box falls outside the pattern of numbers established over the time period.

The numbers are integers, and the time period is a year.

Question

What algorithm will identify a pattern in the numbers?

The pattern might be simple, like always ascending or always descending, or the numbers might fall within a narrow range, and so forth.

Ideas

I have some ideas, but am uncertain as to the best approach, or what solutions already exist:

  • Machine learning algorithms?
  • Neural network?
  • Classify normal and abnormal numbers?
  • Statistical analysis?
Racialism answered 22/9, 2010 at 15:10 Comment(6)
You may want to ask this question here: stats.stackexchange.com — Humidor
I doubt this problem is solvable without some information about the black box. No machine learning algorithm will tell you, e.g., that in a series of 1000 numbers, only one isn't a prime number. — Garling
@nikie: I'm interested in solutions that would work in the widest variety of situations. It's possible that none of them would catch the example you describe. — Racialism
@Joseph Garvin: It doesn't work that way. You have to make some assumptions about your data. If you don't, every possible series of numbers would be as likely as the next, and no prediction would be possible on that premise. You wouldn't do better than blind guessing. — Garling
@nikie: I'm just phrasing things badly. I've described it as a black box to draw focus away from trying to analyze the data up front -- I want algorithms I can throw at real-life data, without having to deeply analyze that data first, that will do a reasonable job most of the time. This means things like performance numbers, the timing of network events, user actions, etc. Humans somehow have an ability to recognize "what's usual" and "what's unusual"; I'm looking for algorithms that help automate that. There is an imprecise similarity between most natural instances of this problem. — Racialism
en.wikipedia.org/wiki/Anomaly_detection — Decontrol

Cluster your data.

If you don't know how many modes your data will have, use something like a Gaussian Mixture Model (GMM) together with a scoring function such as the Bayesian Information Criterion (BIC), so you can automatically detect the likely number of clusters in your data. I recommend this over k-means if you have no idea what value of k is likely to be. Once you've constructed a GMM for your data for the past year, given a new data point x, you can calculate the probability that it was generated by any one of the clusters (each modeled by a Gaussian in the GMM). If your new data point has a low probability of being generated by any one of your clusters, it is very likely a true outlier.

If this sounds a little too involved, you will be happy to know that the entire GMM + BIC procedure for automatic cluster identification has been implemented for you in the excellent MCLUST package for R. I have used it several times to great success for such problems.

Not only will it allow you to identify outliers, it will also let you put a p-value on a point being an outlier, should you need (or want) that capability at some point.
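If you'd rather stay in Python, here's a minimal sketch of the same GMM + BIC idea using scikit-learn's GaussianMixture (an analogous tool to MCLUST; the two regimes, the density threshold, and the 1% quantile are all my own illustrative assumptions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# A year of synthetic daily readings drawn from two "normal" regimes.
history = np.concatenate([rng.normal(10, 1, 200),
                          rng.normal(50, 2, 165)]).reshape(-1, 1)

# Fit GMMs with 1..6 components and keep the one with the lowest BIC:
# this is the automatic "how many clusters?" step.
models = [GaussianMixture(n_components=k, random_state=0).fit(history)
          for k in range(1, 7)]
best = min(models, key=lambda m: m.bic(history))

# score_samples returns the log-density under the fitted mixture;
# use the 1st percentile of the historical scores as an outlier threshold.
threshold = np.quantile(best.score_samples(history), 0.01)

def is_outlier(x):
    """A new point with lower density than 99% of history is flagged."""
    return bool(best.score_samples(np.array([[x]]))[0] < threshold)

print(is_outlier(10.5))   # near an established cluster
print(is_outlier(150.0))  # far from every cluster
```

The threshold choice is the knob to tune: a stricter quantile flags fewer points, and the raw log-density can also be converted into the p-value mentioned above.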

Frisk answered 29/9, 2010 at 1:27 Comment(0)

You could try trend-line prediction using linear regression and see how it goes; it would be fairly easy to implement in your language of choice. After you've fitted a line to your data, compute the standard deviation of the residuals along the line. If the novel point lies on the trend line ± the standard deviation, it should not be regarded as an abnormality.
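A minimal sketch of that idea in Python, assuming a roughly linear daily trend (the synthetic data and the 3-sigma cutoff are my own illustrative choices):

```python
import numpy as np

# A year of synthetic history: a linear trend plus Gaussian noise.
days = np.arange(365)
values = 2.0 * days + 100 + np.random.default_rng(1).normal(0, 5, 365)

# Fit a line to the history and measure how far points scatter around it.
slope, intercept = np.polyfit(days, values, 1)
residuals = values - (slope * days + intercept)
sigma = residuals.std()

def is_abnormal(day, value, k=3.0):
    """Flag points more than k residual standard deviations off the trend line."""
    predicted = slope * day + intercept
    return abs(value - predicted) > k * sigma

print(is_abnormal(365, 2.0 * 365 + 100))  # on trend
print(is_abnormal(365, 2.0 * 365 + 300))  # far off trend
```

The multiplier k controls sensitivity; k = 3 is a common default that flags roughly the 0.3% most extreme points under Gaussian noise.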

PCA is another technique that comes to mind when dealing with this type of data.

You could also look into unsupervised learning. This is a family of machine learning techniques that can be used to detect differences in larger data sets.

Sounds like a fun problem! Good luck.

Buttocks answered 23/9, 2010 at 8:51 Comment(0)

There is little magic in any of the techniques you mention. I believe you should first try to narrow down the typical abnormalities you may encounter; it helps keep things simple.

Then, you may want to compute derived quantities relevant to those features. For instance: "I want to detect numbers that abruptly change direction" => compute u_{n+1} - u_n and expect it to have constant sign, or to fall within some range. You may want to keep this flexible and allow your code design to be extensible (the Strategy pattern may be worth looking at if you do OOP).
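The derived-quantity idea above can be sketched in a few lines of Python; here the quantity is the first difference d_n = u_{n+1} - u_n, and the hypothetical `diff_sign_anomaly` helper flags a sign flip when the history has been strictly monotone:

```python
def diff_sign_anomaly(history, new_value):
    """Return True if new_value breaks the established sign of the differences."""
    diffs = [b - a for a, b in zip(history, history[1:])]
    if all(d > 0 for d in diffs):      # established pattern: strictly ascending
        return new_value - history[-1] <= 0
    if all(d < 0 for d in diffs):      # established pattern: strictly descending
        return new_value - history[-1] >= 0
    return False                       # no clear monotone pattern: don't flag

print(diff_sign_anomaly([1, 3, 7, 12], 15))  # continues the ascent
print(diff_sign_anomaly([1, 3, 7, 12], 10))  # direction flips
```

Each pattern you care about ("stays in a range", "constant step size", ...) becomes its own derived quantity and check, which is exactly where the Strategy pattern fits.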

Then, once you have some derived quantities of interest, you do statistical analysis on them. For instance, for a derived quantity A, you assume it follows some distribution P(a, b) (uniform on [a, b], or Beta(a, b), possibly something more complex), you put prior distributions on a and b, and you adjust them as successive information arrives. The posterior likelihood of the information provided by the latest point should then give you some insight into whether it is normal or not. The relative entropy between the posterior and the prior at each step is also a good thing to monitor. Consult a book on Bayesian methods for more information.
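As a toy illustration of that Bayesian scheme (my own example, not from the answer): assume the derived quantity is Normal with known variance and unknown mean, put a conjugate Normal prior on the mean, update it with each observation, and score new points by their posterior-predictive density:

```python
import math

def update(m, v, x, sigma2=1.0):
    """Normal-Normal conjugate update of the posterior mean m and variance v of mu."""
    v_new = 1.0 / (1.0 / v + 1.0 / sigma2)
    m_new = v_new * (m / v + x / sigma2)
    return m_new, v_new

def predictive_logpdf(x, m, v, sigma2=1.0):
    """Log density of x under the posterior predictive Normal(m, v + sigma2)."""
    var = v + sigma2
    return -0.5 * math.log(2 * math.pi * var) - (x - m) ** 2 / (2 * var)

m, v = 0.0, 100.0  # vague prior on the unknown mean
for x in [9.8, 10.1, 10.0, 9.9, 10.2]:
    m, v = update(m, v, x)

# A value near the learned mean scores far higher than a distant one.
print(predictive_logpdf(10.0, m, v) > predictive_logpdf(25.0, m, v))
```

A low posterior-predictive density is the "this point is abnormal" signal; the same update structure extends to the richer priors the answer mentions.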

I see little point in complex traditional machine learning machinery (perceptron layers or SVMs, to cite only those) if you want to detect outliers. These methods work best when classifying data that is known to be reasonably clean.

Mnemonic answered 23/9, 2010 at 9:27 Comment(0)
