Unsupervised loss function in Keras
Is there any way in Keras to specify a loss function which does not need to be passed target data?

I attempted to specify a loss function which omitted the y_true parameter like so:

def custom_loss(y_pred):

But I got the following error:

Traceback (most recent call last):
  File "siamese.py", line 234, in <module>
    model.compile(loss=custom_loss,optimizer=Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0))
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 911, in compile
    sample_weight, mask)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 436, in weighted
    score_array = fn(y_true, y_pred)
TypeError: custom_loss() takes exactly 1 argument (2 given)

I then tried to call fit() without specifying any target data:

 model.fit(x=[x_train,x_train_warped, affines], batch_size = bs, epochs=1)

But it looks like not passing any target data causes an error:

Traceback (most recent call last):
  File "siamese.py", line 264, in <module>
    model.fit(x=[x_train,x_train_warped, affines], batch_size = bs, epochs=1)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1435, in fit
    batch_size=batch_size)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1322, in _standardize_user_data
    in zip(y, sample_weights, class_weights, self._feed_sample_weight_modes)]
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 577, in _standardize_weights
    return np.ones((y.shape[0],), dtype=K.floatx())
AttributeError: 'NoneType' object has no attribute 'shape'

I could manually create dummy data in the same shape as my neural net's output but this seems extremely messy. Is there a simple way to specify an unsupervised loss function in Keras that I am missing?

Durst answered 26/6, 2017 at 13:40 Comment(4)
I think you are missing the point: what exactly would your unsupervised loss do? What exact computation? – Mayor
I am trying to compare the similarity of two different outputs from the neural net; the more similar they are, the lower the loss should be. To be more specific, I am attempting to re-implement the neural network described in this paper. – Durst
I think you should use the dummy data... yes, it's ugly and I don't like it either, but I can't see a solution. – Sheep
The second error relates to your input/output data; you need to use numpy.array. You can use x_train as a target. – Ancalin
I think the best solution is to customize the training loop instead of using the model.fit method.

A complete walkthrough is published in the TensorFlow tutorials.
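As a rough sketch of that approach (the model, data shapes, and reconstruction-style loss below are hypothetical stand-ins, not the asker's network): a custom loop with tf.GradientTape never needs target data at all, because you compute the loss directly from the inputs and the model's own outputs.

```python
import tensorflow as tf

# Hypothetical toy model whose loss depends only on the inputs
# and its own outputs (autoencoder-style reconstruction).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(4),
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

def unsupervised_loss(x, y_pred):
    # Example unsupervised objective: mean squared reconstruction
    # error between the input and the prediction -- no y_true involved.
    return tf.reduce_mean(tf.square(x - y_pred))

x_batch = tf.random.normal((16, 4))
for step in range(3):
    with tf.GradientTape() as tape:
        y_pred = model(x_batch, training=True)
        loss = unsupervised_loss(x_batch, y_pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```

Since fit() is never called, Keras never asks for targets and no dummy data is required.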

Ogee answered 24/3, 2020 at 22:22 Comment(0)
Write your loss function as if it had two arguments:

  1. y_true
  2. y_pred

If you don't have y_true, that's fine: you don't need to use it inside the function to compute the loss, but keep it as a placeholder in your function signature so Keras won't complain.

def custom_loss(y_true, y_pred):
    # do things with y_pred
    return loss
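A minimal end-to-end sketch of this pattern (the model and the loss here are illustrative, not the asker's network): the loss ignores y_true entirely, and fit() is fed zero-filled dummy targets with the right leading dimension just to satisfy Keras's input checks.

```python
import numpy as np
import tensorflow as tf

def custom_loss(y_true, y_pred):
    # y_true is a placeholder and is never used;
    # this toy loss just penalizes predictions far from zero.
    return tf.reduce_mean(tf.square(y_pred))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])
model.compile(loss=custom_loss, optimizer="adam")

x_train = np.random.rand(32, 4).astype("float32")
# Dummy targets matching the output shape; the loss never reads them.
dummy_y = np.zeros((32, 2), dtype="float32")
history = model.fit(x_train, dummy_y, batch_size=8, epochs=1, verbose=0)
```

The dummy array is the "messy" part the asker wanted to avoid, but with the two-argument signature it is a one-line cost.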

Adding custom arguments

You may also want to use another parameter, such as a margin, inside your loss function. Even then, the function you pass to Keras must accept exactly those two arguments, but there is a workaround: use a lambda function.

def custom_loss(y_pred, margin):
    # do things with y_pred
    return loss

but use it like

model.compile(loss=lambda y_true, y_pred: custom_loss(y_pred, margin), ...)
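Put together, a runnable sketch of the lambda pattern (the margin value, model, and hinge-style penalty below are illustrative assumptions): the lambda satisfies Keras's (y_true, y_pred) signature while dropping y_true and closing over margin.

```python
import numpy as np
import tensorflow as tf

def custom_loss(y_pred, margin):
    # Hypothetical hinge-style penalty: push predictions above `margin`.
    return tf.reduce_mean(tf.maximum(0.0, margin - y_pred))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(1),
])
margin = 1.0
# The lambda presents the (y_true, y_pred) interface Keras expects,
# discards y_true, and forwards the extra argument.
model.compile(loss=lambda y_true, y_pred: custom_loss(y_pred, margin),
              optimizer="adam")

x = np.random.rand(16, 3).astype("float32")
dummy_y = np.zeros((16, 1), dtype="float32")  # placeholder targets
history = model.fit(x, dummy_y, batch_size=4, epochs=1, verbose=0)
```

One caveat with lambdas as losses: they have no registered name, so reloading a saved model later requires passing the loss again via custom_objects or compile=False.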
Tymon answered 6/11, 2019 at 17:27 Comment(0)