How to get results from custom loss function in Keras?

I want to implement a custom loss function in Python. It should work like this pseudocode:

aux = abs(real - prediction) / prediction
errors = []
if aux <= 0.1:
    errors.append(0)
elif 0.1 < aux <= 0.15:
    errors.append(5/3)
elif 0.15 < aux <= 0.2:
    errors.append(5)
else:
    errors.append(2000)
return sum(errors)

I started to define the metric like this:

from keras import backend as K

def custom_metric(y_true, y_pred):
    # element-wise relative error (K.abs takes no axis argument)
    res = K.abs((y_true - y_pred) / y_pred)
    ....

But I do not know how to get the value of res for the if and else branches. Also, I want to know what the function has to return.

Thanks

Shanda answered 27/4, 2018 at 11:25

Also, I want to know what the function has to return.

Custom metrics can be passed at the compilation step.

The function would need to take (y_true, y_pred) as arguments and return a single tensor value.
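
For example, a minimal sketch (the optimizer and base loss here are placeholders, not part of the original question):

model.compile(optimizer='adam',
              loss='mse',
              metrics=[custom_metric])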

But I do not know how to get the value of res for the if and else branches.

You can return the result from your custom_metric function.

def custom_metric(y_true, y_pred):
    # element-wise relative error (K.abs takes no axis argument)
    result = K.abs((y_true - y_pred) / y_pred)
    return result
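
Note that Keras averages this per-sample tensor over the batch when it reports the metric value.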

The second step is to use a Keras callback in order to find the sum of the errors.

The callback can be defined and passed to the fit method.

history = CustomLossHistory()
model.fit(x_train, y_train, callbacks=[history])  # x_train / y_train stand in for your data

The last step is to create the CustomLossHistory class in order to compute the sum of your expected errors list.

CustomLossHistory will inherit some default methods from keras.callbacks.Callback.

  • on_epoch_begin: called at the beginning of every epoch.
  • on_epoch_end: called at the end of every epoch.
  • on_batch_begin: called at the beginning of every batch.
  • on_batch_end: called at the end of every batch.
  • on_train_begin: called at the beginning of model training.
  • on_train_end: called at the end of model training.

You can read more in the Keras documentation.

But for this example we only need the on_train_begin and on_batch_end methods.

Implementation

import keras

class CustomLossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.errors = []

    def on_batch_end(self, batch, logs={}):
        # map the raw batch loss to its bucketed penalty
        loss = logs.get('loss')
        self.errors.append(self.loss_mapper(loss))

    def loss_mapper(self, loss):
        if loss <= 0.1:
            return 0
        elif 0.1 < loss <= 0.15:
            return 5 / 3
        elif 0.15 < loss <= 0.2:
            return 5
        else:
            return 2000

After your model is trained, you can access the errors with the following statement:

errors = history.errors
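
If you want the single number from the pseudocode, sum the list once training has finished:

total_error = sum(history.errors)  # total bucketed penalty over all batches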
Percale answered 27/4, 2018 at 11:36
That might be a valid answer from a pure programming perspective at first, but I still believe this can't work because the gradient will not be useful. I might be wrong though. – Speight
@Mihai Alexandru-Ionut but how can I save the results of the conditions and use the callback? Could you put an example? Thanks – Shanda
@AlexanderHarnisch, now I understand. I updated my answer. – Percale
@Aceconhielo, yes, I updated my answer and soon I will update it with a full solution example. – Percale
@Mihai Alexandru-Ionut first of all, thanks for your answer. I have a question. The goal of my loss function is to minimize the sum of my expected errors list. If I put the CustomLossHistory in the callbacks, does the neural network really try to minimize this sum during training? Every value in the list comes back as 2000. – Shanda
@Aceconhielo, yes, the neural network tries to minimize this sum using the optimizer. – Percale
@Aceconhielo, you can try using more epochs, because 0.15 is a low loss value; it is only reached after many epochs, once the model has learned well. – Percale
@Mihai Alexandru-Ionut ok, but is the sum of the errors computed automatically when the code runs the .fit() function, or do I have to compute the sum and pass it as a parameter somewhere? – Shanda
When you invoke the fit method, training begins. Using your custom metric, the loss is calculated and the error is passed backward through your network in the backpropagation step. Using the Keras callback you append every batch error to your list. Don't forget, it's the batch error. If you want the sum of errors for one epoch, that is the cost function. If you want the cost function, i.e. the sum of errors over one epoch, directly, you can override a different Keras method than on_batch_end, namely on_epoch_end. – Percale
@Mihai Alexandru-Ionut ok, my goal is to calculate the cost function then. If I understood your answer well, the cost version will be the same as CustomLossHistory but with on_batch_end replaced by on_epoch_end? The problem is that the loss is then nan when I train the network. – Shanda
Include print(result) in your custom metric function. Do you receive the expected values? – Percale
@Mihai Alexandru-Ionut never mind, fixed. But I have another question if you do not mind. The values that I want to predict are between 0 and 1, but my predicted output is sometimes < 0 or > 1. Do you know how I could fix that? – Shanda
Have you used normalization or scaling for the dataset? – Percale
Can you post another question with your current code so I can help you? – Percale
@Mihai Alexandru-Ionut I opened another question here with this doubt, thanks! #50103877 – Shanda

I'll take a leap here and say this won't work because it is not differentiable. The loss needs to be continuously differentiable so you can propagate a gradient through it.

If you want to make this work, you need to find a way to do it without the discontinuities. For example, you could try a weighted average over your 4 discrete values where the weights strongly prefer the closest value, as sketched below.
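
One way to sketch that idea (my own illustration; the smooth_bucket_loss name, the bucket centres, and the sharpness value are assumptions, not part of the original question):

import tensorflow as tf

# Rough centres of the four error buckets and their penalties (assumed values).
CENTRES = tf.constant([0.05, 0.125, 0.175, 0.6])
PENALTIES = tf.constant([0.0, 5.0 / 3.0, 5.0, 2000.0])

def smooth_bucket_loss(y_true, y_pred, sharpness=50.0):
    aux = tf.abs((y_true - y_pred) / y_pred)             # relative error
    dist = tf.abs(tf.expand_dims(aux, -1) - CENTRES)     # distance to each centre
    weights = tf.nn.softmax(-sharpness * dist, axis=-1)  # closest bucket dominates
    return tf.reduce_sum(weights * PENALTIES, axis=-1)   # smooth, differentiable penalty

Raising sharpness brings the surrogate closer to the original step function, at the cost of steeper gradients.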

Speight answered 27/4, 2018 at 11:30
My fault, I forgot to say that in the end I want to sum all the errors. I modified my question. – Shanda

Appending to self directly didn't work for me; appending to the params dict of self did the job instead. To answer the OP: it would be self.params['error'] = [], then add to the array as you see fit.

import tensorflow as tf

class CustomCallback(tf.keras.callbacks.Callback):

    def on_train_begin(self, logs=None):
        # store the error list on the shared params dict
        self.params['error'] = []

    def on_epoch_end(self, epoch, logs=None):
        # do something with self.params['error']
        pass

# (training data arguments omitted for brevity)
history = model.fit(callbacks=[CustomCallback()])

# When training ends:
error = history.params['error']
Grits answered 11/6, 2021 at 15:28
