How to calculate ROC curves?

I wrote a classifier (Gaussian Mixture Model) to classify five human actions. For every observation the classifier computes the posterior probability of belonging to a cluster.

I want to evaluate the performance of my system parameterized by a threshold, with values from 0 to 100. For every threshold value and every observation, if the probability of belonging to one of the clusters is greater than the threshold, I accept the result of the classifier; otherwise I discard it.

For every threshold value I compute the number of true positives, true negatives, false positives and false negatives.

Then I compute the two functions sensitivity and specificity as

sensitivity = TP/(TP+FN);

specificity = TN/(TN+FP);

In MATLAB:

plot(1-specificity, sensitivity);

to get the ROC curve. But the result isn't what I expect.
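
Concretely, the sweep looks roughly like this (a minimal sketch, collapsing the five-class problem to "is the predicted class correct or not"; postProb and isCorrect are placeholder names for the posterior probability of the predicted class and the ground-truth indicator, with probabilities scaled to [0,1] instead of 0 to 100):

thresholds = 0:0.01:1;
sens = zeros(size(thresholds));
spec = zeros(size(thresholds));
for k = 1:numel(thresholds)
    t = thresholds(k);
    accepted = postProb >= t;           % observations whose result I accept
    TP = sum( accepted &  isCorrect);
    FP = sum( accepted & ~isCorrect);
    % this is the part I am unsure about: here a discarded observation
    % counts as FN if it was actually correct and as TN otherwise
    FN = sum(~accepted &  isCorrect);
    TN = sum(~accepted & ~isCorrect);
    sens(k) = TP/(TP+FN);
    spec(k) = TN/(TN+FP);
end
plot(1-spec, sens); xlabel('1 - specificity'); ylabel('sensitivity');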

[Figure: discards, errors, correct classifications, sensitivity and specificity as functions of the threshold for one action]

[Figure: ROC curve for one action]

[Figure: stem plot of the ROC curve for the same action]

Something is wrong, but I don't know where. Perhaps I am miscalculating FP, FN, TP, TN, especially when the output of the classifier is below the threshold and I discard the observation. What should I increment when there is a discard?

Corliss answered 19/10, 2012 at 18:26 Comment(2)
Care to show some of your code and data? It's hard to know what's going on with end products. As a side note, the first figure you are showing doesn't appear right (without knowing the mechanics, it's hard to say whether specificity or sensitivity is wrong). – Grumpy
You can have a look at the example here: saedsayad.com/flash/RocGainKS.html. This animation shows how to calculate TPR and FPR for different threshold values and plot them. – Finella

Background

I am answering this because I need to work through the content, and a question like this is a great excuse. Thank you for the good opportunity.

I use the built-in Fisher iris data: http://archive.ics.uci.edu/ml/datasets/Iris

I also use code snippets from the MathWorks tutorial on classification, and from the documentation for plotroc.

Problem Description

There is a clearer boundary within the domain for classifying "setosa", but there is overlap for "versicolor" vs. "virginica". This is a two-dimensional plot, and some of the other information has been discarded to produce it. The ambiguity in the classification boundaries is a useful thing in this case.

%load data
load fisheriris

%show raw data
figure(1); clf
gscatter(meas(:,1), meas(:,2), species,'rgb','osd');
xlabel('Sepal length');
ylabel('Sepal width');
axis equal
axis tight
title('Raw Data')

[Figure: scatter plot of the raw iris data (sepal length vs. sepal width), colored by species]

Analysis

Let's say that we want to determine the bounds for a linear classifier that defines "virginica" versus "non-virginica". We could look at "self vs. not-self" for the other classes as well, but each of those would have its own ROC curve.

So now we make some discriminant classifiers and plot the ROC for them:

%load data: fisheriris provides meas/species, iris_dataset provides irisInputs/irisTargets
load fisheriris
load iris_dataset

irisInputs=meas(:,1:2)';        %use only sepal length and sepal width
irisTargets=irisTargets(3,:);   %row 3 of the one-hot targets marks "virginica" (1) vs. not (0)

%train five discriminant classifiers on the same two features and
%classify the training data itself (resubstitution)
ldaClass1 = classify(meas(:,1:2),meas(:,1:2),irisTargets,'linear')';
ldaClass2 = classify(meas(:,1:2),meas(:,1:2),irisTargets,'diaglinear')';
ldaClass3 = classify(meas(:,1:2),meas(:,1:2),irisTargets,'quadratic')';
ldaClass4 = classify(meas(:,1:2),meas(:,1:2),irisTargets,'diagquadratic')';
ldaClass5 = classify(meas(:,1:2),meas(:,1:2),irisTargets,'mahalanobis')';

%stack the targets and the five predicted label vectors, one row per classifier
myinput=repmat(irisTargets,5,1);
myoutput=[ldaClass1;ldaClass2;ldaClass3;ldaClass4;ldaClass5];
whos                            %inspect variable sizes
plotroc(myinput,myoutput)

The result is shown in the following figure, though it took deleting repeated copies of the diagonal:

[Figure: ROC plots produced by plotroc for the five classifiers]

You can see in the code that I stack "myinput" and "myoutput" and feed them as inputs into the "plotroc" function. You should take the target values and the actual outputs of your classifier and you can get similar results: plotroc compares the actual output of your classifier against the ideal output given by your targets. Those two matrices are the inputs to plotroc.

So this gives you the "built-in" ROC, which is useful for quick work, but it does not make you learn every step in detail.
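
If you would rather feed your classifier's probabilities into plotroc instead of hard labels, a minimal sketch could look like the following (posteriors and trueClass are hypothetical names for a 5-by-N matrix of class posteriors from your model and a 1-by-N vector of ground-truth labels in 1..5; ind2vec and plotroc are both in the Neural Network Toolbox):

%hypothetical inputs: "posteriors" is a 5-by-N matrix of class posteriors
%from your classifier, "trueClass" is a 1-by-N vector of labels in 1..5
targets = full(ind2vec(trueClass,5));   %5-by-N one-hot encoding of the true labels
plotroc(targets,posteriors)             %one ROC curve per class

Because the outputs are continuous scores rather than 0/1 labels, plotroc can sweep the threshold and draw a real curve instead of a single operating point per classifier.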

Questions you can ask at this point include:

  • Which classifier is best? How do I determine what "best" means in this case?
  • What is the convex hull of the classifiers? Is there some mixture of classifiers that is more informative than any pure method? Bagging perhaps?
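
For the first bullet, one common starting point is to compare areas under the curves. Here is a sketch using perfcurve from the Statistics Toolbox; the scores come from the third output of classify, and I leave out 'mahalanobis' since, as far as I recall, classify does not return posteriors for that type:

%rank the discriminant types by area under the ROC curve (AUC)
types = {'linear','diaglinear','quadratic','diagquadratic'};
auc = zeros(1,numel(types));
for k = 1:numel(types)
    %third output of classify holds the posterior probability of each class;
    %with 0/1 targets, column 2 corresponds to the positive ("virginica") class
    [~,~,post] = classify(meas(:,1:2),meas(:,1:2),irisTargets,types{k});
    [~,~,~,auc(k)] = perfcurve(irisTargets(:),post(:,2),1);
end
auc   %higher AUC means better separation of virginica vs. non-virginica

Keep in mind these are resubstitution numbers; a cross-validated comparison would be less optimistic.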
Delorenzo answered 27/12, 2013 at 17:20 Comment(0)

You are trying to draw the curves of precision vs. recall as a function of the classifier's threshold parameter. The definitions of precision and recall are:

Precision = TP/(TP+FP)

Recall = TP/(TP+FN)   

You can check the definitions of these parameters at: http://en.wikipedia.org/wiki/Precision_and_recall

There are some curves here: http://www.cs.cornell.edu/courses/cs578/2003fa/performance_measures.pdf

Are you dividing your dataset into a training set, a cross-validation set and a test set? (If you do not split the data, it is normal for your precision-recall curve to look weird.)

EDITED: I think that there are two possible sources for your problem:

  1. When you train a classifier for 5 classes, you usually have to train 5 distinct classifiers: one classifier for (class A = class 1, class B = class 2, 3, 4 or 5), then a second classifier for (class A = class 2, class B = class 1, 3, 4 or 5), ... and the fifth for (class A = class 5, class B = class 1, 2, 3 or 4).

As you said, to select the output of your "compound" classifier you pass your new (test) datapoint through the five classifiers and choose the one with the highest probability.

Then you have 5 thresholds that define weighting values, which may prioritize selecting one classifier over the others. You should check how the MATLAB implementation uses the thresholds, but their effect is that you don't choose the class with the highest probability, but the class with the best weighted probability.

  2. As you say, maybe you are not calculating TP, TN, FP, FN correctly. Your test data should contain datapoints belonging to all the classes. Then testdata(i,:) and classtestdata(i) are the feature vector and "ground truth" class of datapoint i. When you evaluate the classifier you obtain classifierOutput(i) = 1, 2, 3, 4 or 5. Then you should calculate the "confusion matrix", which is the way to compute TP, TN, FP, FN when you have multiple classes (> 2): http://en.wikipedia.org/wiki/Confusion_matrix http://www.mathworks.com/help/stats/confusionmat.html (note how the TP, TN, FP, FN you are calculating relate to the multiclass problem).

I think that you can obtain the TP, TN, FP, FN of each subclassifier (remember that you are effectively calculating 5 separate classifiers, even if you do not realize it) from the confusion matrix, as in the sketch below. I am not sure, but you may also be able to draw the precision-recall curve for each subclassifier.
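
As a rough illustration of point 2, here is a sketch of reading per-class TP, FP, FN, TN off the confusion matrix (using the hypothetical classtestdata and classifierOutput names from above; confusionmat is in the Statistics Toolbox):

%classtestdata: true labels (1..5), classifierOutput: predicted labels (1..5)
C = confusionmat(classtestdata,classifierOutput);  %C(i,j) = class-i points predicted as class j
for k = 1:5                      %treat class k as "positive" and the rest as "negative"
    TP = C(k,k);
    FP = sum(C(:,k)) - TP;       %predicted as k, but truly another class
    FN = sum(C(k,:)) - TP;       %truly k, but predicted as another class
    TN = sum(C(:)) - TP - FP - FN;
    fprintf('class %d: TP=%d FP=%d FN=%d TN=%d\n',k,TP,FP,FN,TN);
end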

Also check these slides: http://www.slideserve.com/MikeCarlo/multi-class-and-structured-classification

I don't know what the ROC curve is; I will look it up, because machine learning is a really interesting subject for me.

Hope this helps,

Frangos answered 19/10, 2012 at 18:46 Comment(6)
Yes, I divide the dataset into a training set, a cross-validation set and a test set. As you can see, the functions of discards, errors and corrects, varying the threshold of one action (figure 1), look fine. The problem is with the other two functions, sensitivity and specificity. – Corliss
Check my answer again. I have edited it, trying to explain how to identify the thresholds and the TP, TN, ... for the multiclass problem. – Frangos
Thanks, the problem was the incorrect calculation of FP, FN, TP, TN when the output of the classifier was below the threshold. – Corliss
It would be nice if you explained the solution further in your question. It could help other users. I am glad that you solved your problem. – Frangos
The initial definitions are wrong: precision != sensitivity, please review. – Winegar
Sensitivity != precision, and 1 - specificity != recall. 1 - specificity is called the False Positive Rate (FPR), and recall is the same as sensitivity. – Verdellverderer
