I am trying to solve a semantic segmentation problem. Because of real-world constraints, the costs of false positives and false negatives are different: for instance, a pixel misclassified as foreground is less desirable than a pixel misclassified as background. How can I handle this kind of constraint when setting up the loss function?
You can use the class_weight parameter of model.fit to weight your classes and, as such, punish misclassifications differently depending on the class.
From the Keras documentation:
class_weight: optional dictionary mapping class indices (integers) to a weight (float) to apply to the model's loss for the samples from this class during training. This can be useful to tell the model to "pay more attention" to samples from an under-represented class.
For example:
out = Dense(2, activation='softmax')(x)  # x: output of the preceding layer
model = Model(inputs=..., outputs=out)
model.fit(X, Y, class_weight={0: 1.0, 1: 0.5})
This would punish the second class less than the first.
You can also write your own loss function (take y_pred and y_true, compute your loss, and multiply by your weight vector). – Disquisition
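In the binary foreground/background setting from the question, such a custom loss could look like the sketch below. The weight values and the function name are illustrative assumptions, not something from the answer above:

import keras.backend as K

# Illustrative weights (assumptions): a background pixel predicted as
# foreground (false positive) costs twice as much as the reverse error.
W_BG = 2.0  # weight for pixels whose true label is background
W_FG = 1.0  # weight for pixels whose true label is foreground

def asymmetric_binary_crossentropy(y_true, y_pred):
    y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
    # Binary cross-entropy split into its two terms, each re-weighted:
    fg_term = W_FG * y_true * K.log(y_pred)                # missed foreground
    bg_term = W_BG * (1.0 - y_true) * K.log(1.0 - y_pred)  # false foreground
    return -K.mean(fg_term + bg_term)

You can then pass it to model.compile(loss=asymmetric_binary_crossentropy, optimizer=...) like any built-in loss.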
weights[class[i]] * loss(y_true[i], y_pred[i]), where class is a mapping from sample index to class and weights is a mapping from class to weight; the loss is thus re-weighted according to the class of each sample. – Disquisition
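A minimal sketch of that pattern for one-hot targets and a softmax output (the class weights here are hypothetical) might be:

import numpy as np
import keras.backend as K

# Hypothetical weights: index 0 = background, index 1 = foreground.
CLASS_WEIGHTS = K.constant(np.array([1.0, 0.5]))

def weighted_categorical_crossentropy(y_true, y_pred):
    y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
    # y_true is one-hot, so multiplying by the weight vector picks out
    # weights[class[i]] for each pixel before summing the cross-entropy.
    pixel_loss = -K.sum(y_true * K.log(y_pred) * CLASS_WEIGHTS, axis=-1)
    return K.mean(pixel_loss)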
Check out the jaccard distance (or IoU) loss function in keras-contrib:
This loss is useful when you have unbalanced numbers of pixels within an image, because it gives all classes equal weight. However, it is not the de facto standard for image segmentation. For example, assume you are trying to predict whether each pixel is cat, dog, or background, and you have 80% background pixels, 10% dog, and 10% cat. If the model predicts 100% background, should it be 80% right (as with categorical cross-entropy) or 30% right (as with this loss)?
Source: https://github.com/keras-team/keras-contrib/blob/master/keras_contrib/losses/jaccard.py
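The implementation behind that link is short; a sketch along the same lines (a soft IoU with a smoothing constant, assuming one-hot targets on the last axis) is:

from keras import backend as K

def jaccard_distance(y_true, y_pred, smooth=100):
    # Soft intersection and union; the smoothing constant avoids division
    # by zero and keeps gradients usable when a mask is empty.
    intersection = K.sum(K.abs(y_true * y_pred), axis=-1)
    union = K.sum(K.abs(y_true) + K.abs(y_pred), axis=-1) - intersection
    jac = (intersection + smooth) / (union + smooth)
    return (1 - jac) * smooth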