Use tf.metrics in Keras?
I'm especially interested in tf.metrics.specificity_at_sensitivity. Looking through the Keras docs:

from keras import metrics

model.compile(loss='mean_squared_error',
              optimizer='sgd',
              metrics=[metrics.mae, metrics.categorical_accuracy])

But it looks like the metrics list must contain functions of arity 2, accepting (y_true, y_pred) and returning a single tensor value.
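For example, as far as I can tell a custom metric has to look roughly like this (a sketch; true_positive_rate is just a name I made up):

from keras import backend as K

def true_positive_rate(y_true, y_pred):
    # per-batch TP / (TP + FN), with predictions rounded to 0/1
    y_pred_pos = K.round(K.clip(y_pred, 0, 1))
    true_pos = K.sum(y_true * y_pred_pos)
    return true_pos / (K.sum(y_true) + K.epsilon())

model.compile(loss='mean_squared_error',
              optimizer='sgd',
              metrics=[true_positive_rate])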


EDIT: Here is how I currently do things:

import numpy as np
from sklearn.metrics import confusion_matrix

# collapse one-hot / probability vectors to class indices
predictions = model.predict(x_test)
y_test = np.argmax(y_test, axis=-1)
predictions = np.argmax(predictions, axis=-1)
c = confusion_matrix(y_test, predictions)
print('Confusion matrix:\n', c)
# treating class 0 as the positive class
print('sensitivity', c[0, 0] / (c[0, 1] + c[0, 0]))
print('specificity', c[1, 1] / (c[1, 1] + c[1, 0]))

The disadvantage of this approach is that I only get the output I care about after training has finished. I'd prefer to get these metrics every 10 epochs or so.

Laing answered 26/5, 2018 at 4:03
I've found a related issue on GitHub, and it seems that tf.metrics are still not supported by Keras models. However, if you are very interested in using tf.metrics.specificity_at_sensitivity, I would suggest the following workaround (inspired by BogdanRuzh's solution):

import tensorflow as tf

def specificity_at_sensitivity(sensitivity, **kwargs):
    def metric(labels, predictions):
        # any tf.metrics metric returns (value, update_op)
        value, update_op = tf.metrics.specificity_at_sensitivity(labels, predictions, sensitivity, **kwargs)

        # find all variables created for this metric
        metric_vars = [i for i in tf.local_variables() if 'specificity_at_sensitivity' in i.name.split('/')[2]]

        # add the metric variables to the GLOBAL_VARIABLES collection,
        # so they are initialized for a new session
        for v in metric_vars:
            tf.add_to_collection(tf.GraphKeys.GLOBAL_VARIABLES, v)

        # force the metric value to update whenever it is evaluated
        with tf.control_dependencies([update_op]):
            value = tf.identity(value)
            return value
    return metric


model.compile(loss='mean_squared_error',
              optimizer='sgd',
              metrics=[metrics.mae,
                       metrics.categorical_accuracy,
                       specificity_at_sensitivity(0.5)])

UPDATE:

You can use model.evaluate to retrieve the metrics after training.
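For example, assuming the x_test and y_test arrays from the question:

results = model.evaluate(x_test, y_test)
# model.metrics_names gives the labels matching the returned values
for name, value in zip(model.metrics_names, results):
    print(name, value)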

Lapstrake answered 26/5, 2018 at 10:56
Thanks, that's working. I'm getting output throughout. It's useful, so I'll upvote. But I've also edited my original question. – Laing
🙏, but how do I get sensitivity and specificity every 10 epochs? – Laing
I'm unsure how you want it exactly, but you could probably define a custom callback to do that: keras.io/callbacks/#create-a-callback (see the sketch after these comments). – Lapstrake
Thanks. I've marked your answer as correct, and posted a different question on the edited topic here: https://mcmap.net/q/1425269/-report-keras-model-evaluation-metrics-every-10-epochs – Laing
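For reference, a minimal sketch of such a callback, reusing the confusion-matrix code from the question (the class name SensitivitySpecificityCallback and the x_train/y_train variables are my own illustration, not from the original post):

import numpy as np
import keras
from sklearn.metrics import confusion_matrix

class SensitivitySpecificityCallback(keras.callbacks.Callback):
    def __init__(self, x_val, y_val, every=10):
        super(SensitivitySpecificityCallback, self).__init__()
        self.x_val = x_val
        self.y_val = y_val
        self.every = every  # report every N epochs

    def on_epoch_end(self, epoch, logs=None):
        if (epoch + 1) % self.every != 0:
            return
        predictions = np.argmax(self.model.predict(self.x_val), axis=-1)
        y_true = np.argmax(self.y_val, axis=-1)
        c = confusion_matrix(y_true, predictions)
        print('epoch', epoch + 1)
        print('sensitivity', c[0, 0] / (c[0, 1] + c[0, 0]))
        print('specificity', c[1, 1] / (c[1, 1] + c[1, 0]))

model.fit(x_train, y_train, epochs=100,
          callbacks=[SensitivitySpecificityCallback(x_test, y_test)])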
I don't think there is a strict limit of exactly two arguments: in metrics.py, sparse_top_k_categorical_accuracy takes three arguments, with k defaulting to 5, so Keras can still call it with just (y_true, y_pred).

def sparse_top_k_categorical_accuracy(y_true, y_pred, k=5):
    return K.mean(K.in_top_k(y_pred, K.cast(K.max(y_true, axis=-1), 'int32'), k), axis=-1)
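So if you want a non-default k, you can wrap the call so the function Keras sees still takes two arguments (a sketch; the name top3_acc is my own):

from keras.metrics import sparse_top_k_categorical_accuracy

def top3_acc(y_true, y_pred):
    # Keras calls this with (y_true, y_pred); k is bound here
    return sparse_top_k_categorical_accuracy(y_true, y_pred, k=3)

model.compile(loss='sparse_categorical_crossentropy',
              optimizer='sgd',
              metrics=[top3_acc])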
Polard answered 26/5, 2018 at 7:51
