Using Tensorflow Huber loss in Keras
I am trying to use Huber loss in a Keras model (writing a DQN), but I am getting bad results; I think I am doing something wrong. My code is below.

model = Sequential()
model.add(Dense(output_dim=64, activation='relu', input_dim=state_dim))
model.add(Dense(output_dim=number_of_actions, activation='linear'))
loss = tf.losses.huber_loss(delta=1.0)
model.compile(loss=loss, opt='sgd')
return model
Canterbury answered 15/12, 2017 at 22:10 Comment(0)
I came here with the exact same question. The accepted answer uses logcosh, which may have similar properties, but it isn't exactly Huber loss. Here's how I implemented Huber loss for Keras (note that I'm using the Keras bundled with TensorFlow 1.5).

import numpy as np
import tensorflow as tf

'''
 ' Huber loss.
 ' https://jaromiru.com/2017/05/27/on-using-huber-loss-in-deep-q-learning/
 ' https://en.wikipedia.org/wiki/Huber_loss
'''
def huber_loss(y_true, y_pred, clip_delta=1.0):
  error = y_true - y_pred
  cond  = tf.keras.backend.abs(error) < clip_delta

  squared_loss = 0.5 * tf.keras.backend.square(error)
  linear_loss  = clip_delta * (tf.keras.backend.abs(error) - 0.5 * clip_delta)

  return tf.where(cond, squared_loss, linear_loss)

'''
 ' Same as above but returns the mean loss.
'''
def huber_loss_mean(y_true, y_pred, clip_delta=1.0):
  return tf.keras.backend.mean(huber_loss(y_true, y_pred, clip_delta))

Use huber_loss if you want the elementwise loss, or huber_loss_mean if you want it reduced to the mean over the batch.
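As a quick sanity check of the piecewise definition above, the same elementwise Huber loss can be sketched in plain NumPy (huber_np is a hypothetical name, not part of the answer's code):

```python
import numpy as np

def huber_np(y_true, y_pred, clip_delta=1.0):
    # Elementwise Huber loss: quadratic for |error| < clip_delta, linear beyond it.
    error = np.abs(y_true - y_pred)
    squared_loss = 0.5 * np.square(error)
    linear_loss = clip_delta * (error - 0.5 * clip_delta)
    return np.where(error < clip_delta, squared_loss, linear_loss)

y_true = np.array([0.0, 0.0, 0.0])
y_pred = np.array([0.5, 1.0, 3.0])
print(huber_np(y_true, y_pred))  # elementwise: 0.125, 0.5, 2.5
```

Note that the two branches agree at |error| = clip_delta (both give 0.5 * clip_delta**2), which is what makes the loss continuously differentiable there.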

Characteristic answered 14/2, 2018 at 16:10 Comment(2)
hi @avejidah. Actually they are very similar. There was a PR open for Huber loss in Keras; you can see it here: github.com/keras-team/keras/pull/6410.Canterbury
Hey hakaishinbeerus. I agree that they are similar, but they are not the same. the-moliver and Danielhiversen point that out in the PR comments, and the loss was then renamed from huber to logcosh to accurately reflect the function Keras actually implements. AFAIK, Keras still does not have Huber loss, so for those interested in using it, my function should be correct.Characteristic
You can wrap Tensorflow's tf.losses.huber_loss in a custom Keras loss function and then pass it to your model.

The reason for the wrapper is that Keras will only pass y_true, y_pred to the loss function, and you likely want to also use some of the many parameters to tf.losses.huber_loss. So, you'll need some kind of closure like:

def get_huber_loss_fn(**huber_loss_kwargs):

    def custom_huber_loss(y_true, y_pred):
        return tf.losses.huber_loss(y_true, y_pred, **huber_loss_kwargs)

    return custom_huber_loss

# Later...
model.compile(
    loss=get_huber_loss_fn(delta=0.1)
    ...
)
Rinna answered 22/3, 2018 at 20:4 Comment(3)
Why not just specify loss=tf.losses.huber_loss directly?Bridie
If I remember correctly, then there was no good reason to not specify it directly.Bridie
@MikiP this can be useful if you want to modify the two tensors prior to the Huber calculation (for example by squaring them); this way you don't have to rewrite the entire function.Serafina
How about:

    loss=tf.keras.losses.Huber(delta=100.0)
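If this class is available in your TensorFlow version (tf.keras.losses.Huber exists from TF 1.14/2.x onward; the delta=100.0 above is just that answer's choice), here is a minimal sketch of what it computes, assuming TF 2.x eager mode and the default mean reduction:

```python
import tensorflow as tf

# Sketch (assumption: TF 2.x eager mode). The built-in Huber class computes the
# elementwise Huber loss and then averages it over the batch.
huber = tf.keras.losses.Huber(delta=1.0)
loss = float(huber([0.0, 0.0], [0.5, 3.0]))
print(loss)  # mean of 0.125 and 2.5 -> 1.3125
```

In model.compile you would pass the instance directly, e.g. loss=tf.keras.losses.Huber(delta=1.0).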
Archilochus answered 30/6, 2019 at 21:24 Comment(2)
I believe this is now deprecatedBurkhalter
It's not listed in the docs, but nothing in the in-code documentation says it's deprecated. This answer is equivalent to the higher-rated answers above but requires less code, so it's better!Simile
I was looking through the losses in Keras. Apparently logcosh has the same properties as Huber loss. More details on their similarity can be seen here.

Canterbury answered 16/12, 2017 at 15:51 Comment(0)
To anyone still wondering about this: in TensorFlow 2.0 you can do it the following way:

model.compile(optimizer=custom_optimizer,  # add your optimizer
              loss='huber')
Crowns answered 25/7 at 14:30 Comment(0)