I want to underline that you have two possibilities, depending on your problem:
[1] If the weights are equal for all your samples:

You can build a loss wrapper. Here is a dummy example:
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

n_sample = 200
X = np.random.uniform(0, 1, (n_sample, 10))
y = np.random.uniform(0, 1, (n_sample, 100))
W = np.random.uniform(0, 1, (100,)).astype('float32')  # one weight per output dimension

def custom_loss_wrapper(weights):
    # the fixed weights are captured by the closure and reused for every batch
    def loss(true, pred):
        sum_weights = tf.reduce_sum(weights) * tf.cast(tf.shape(pred)[0], tf.float32)
        resid = tf.sqrt(tf.reduce_sum(weights * tf.square(true - pred)))
        return resid / sum_weights
    return loss

inp = Input((10,))
x = Dense(256)(inp)
pred = Dense(100)(x)

model = Model(inp, pred)
model.compile('adam', loss=custom_loss_wrapper(W))
model.fit(X, y, epochs=3)
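As a quick sanity check (a minimal sketch on my side; it assumes you evaluate the whole set as a single batch, so that the batch size inside the loss equals n_sample), you can recompute the loss in NumPy and compare it with model.evaluate:

pred = model.predict(X)
manual = np.sqrt(np.sum(W * np.square(y - pred))) / (W.sum() * n_sample)
print(manual, model.evaluate(X, y, batch_size=n_sample, verbose=0))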
[2] If the weights are different between samples:

You should build your model using add_loss in order to dynamically take the weights of each sample into account. Here is a dummy example:
n_sample = 200
X = np.random.uniform(0, 1, (n_sample, 10))
y = np.random.uniform(0, 1, (n_sample, 100))
W = np.random.uniform(0, 1, (n_sample, 100))  # one weight per sample and per output

def custom_loss(true, pred, weights):
    sum_weights = tf.reduce_sum(weights)
    resid = tf.sqrt(tf.reduce_sum(weights * tf.square(true - pred)))
    return resid / sum_weights

inp = Input((10,))
true = Input((100,))     # the ground truth enters the graph as an input
weights = Input((100,))  # the per-sample weights enter the graph as an input
x = Dense(256)(inp)
pred = Dense(100)(x)

model = Model([inp, true, weights], pred)
model.add_loss(custom_loss(true, pred, weights))
model.compile('adam', loss=None)
model.fit([X, y, W], y=None, epochs=3)
When using add_loss, you should pass all the tensors involved in the loss as input layers and use them inside the loss for the computation.
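To verify the value of the added loss (again a minimal sketch, assuming a single evaluation batch so the reported loss is not averaged across batches), you can compare model.evaluate against a NumPy computation:

pred = model.predict([X, y, W])
manual = np.sqrt(np.sum(W * np.square(y - pred))) / W.sum()
print(manual, model.evaluate([X, y, W], y=None, batch_size=n_sample, verbose=0))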
At inference time you can compute predictions as usual, simply dropping the true and weights inputs:
final_model = Model(model.input[0], model.output)
final_model.predict(X)
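As a quick check (a sketch, not part of the original recipe): since true and weights only enter the graph through add_loss and never affect the forward pass, the trimmed model should produce exactly the same predictions as the full model fed with dummy targets and weights:

full_preds = model.predict([X, y, W])
np.testing.assert_allclose(full_preds, final_model.predict(X), rtol=1e-5)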