I want to implement my custom metric in Keras. According to the documentation, a custom metric should be defined as a function that takes two tensors, y_true and y_pred, as input and returns a single tensor value.
However, I'm confused about what exactly these tensors, y_true and y_pred, will contain while the optimization is running. Is it just one data point? The whole batch? The whole epoch (probably not)? Is there a way to obtain these tensors' shapes?
Can someone point me to a trustworthy place where I can find this information? Any help would be appreciated. Not sure if it's relevant, but I'm using the TensorFlow backend.
Things I tried so far, in order to answer this:
- Checking the Keras metrics documentation (no explanation there of what these tensors contain).
- Checking the Keras source code and trying to understand these tensors by looking at the implementation of other metrics (this seems to suggest that y_true and y_pred hold the labels and predictions for an entire batch, but I'm not sure).
- Reading these Stack Overflow questions: 1, 2, 3, and others (none answer my question; most center on the OP not clearly understanding the difference between a tensor and the values computed from that tensor during the session).
- Printing the values of y_true and y_pred during the optimization, by defining a metric like this:
    from keras import backend as K

    def test_metric(y_true, y_pred):
        # wrap both tensors so their values should be printed when evaluated
        y_true = K.print_tensor(y_true)
        y_pred = K.print_tensor(y_pred)
        return y_true - y_pred
(unfortunately these don't print anything during the optimization).
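To make my working hypothesis concrete, here is a plain NumPy sketch of what I *suspect* happens (this is my assumption, not something confirmed by the docs): the metric function is invoked once per batch, with y_true and y_pred each holding that batch's values, of shape (batch_size, output_dim).

```python
import numpy as np

# Assumption being tested: the metric sees one full batch per call,
# so y_true and y_pred are arrays of shape (batch_size, output_dim).
def my_metric(y_true, y_pred):
    # same signature as a Keras metric, but operating on NumPy arrays here
    return np.mean(np.abs(y_true - y_pred))

batch_size, output_dim = 32, 10
y_true = np.zeros((batch_size, output_dim))  # hypothetical batch of labels
y_pred = np.ones((batch_size, output_dim))   # hypothetical batch of predictions

batch_value = my_metric(y_true, y_pred)
print(y_true.shape, batch_value)  # (32, 10) 1.0
```

If this per-batch picture is right, the per-epoch number Keras displays would presumably be a running average of these per-batch values, but that is exactly the part I'd like someone to confirm.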