The usual way to do this is to append your VGG to the end of your model, making sure all of its layers have trainable=False before compiling.
Then you recalculate your Y_train.
Suppose you have these models:
mainModel - the one to which you want to apply the loss function
lossModel - the one that acts as part of the loss function (your VGG)
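If it helps to see concrete definitions, here is a minimal sketch of what these two models might look like. The architecture of mainModel, the 128x128 input shape, and the choice of block3_conv3 as the VGG16 feature layer are all assumptions for illustration:

from keras.applications.vgg16 import VGG16
from keras.layers import Input, Conv2D
from keras.models import Model

#mainModel: a toy image-to-image network (layer sizes are illustrative only)
inp = Input(shape=(128, 128, 3))
x = Conv2D(64, 3, padding='same', activation='relu')(inp)
out = Conv2D(3, 3, padding='same', activation='sigmoid')(x)
mainModel = Model(inp, out)

#lossModel: VGG16 cut at an intermediate feature layer (the layer choice is an assumption)
vgg = VGG16(include_top=False, input_shape=(128, 128, 3))
lossModel = Model(vgg.input, vgg.get_layer('block3_conv3').output)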
Create a new model by appending one to the other:
from keras.models import Model
lossOut = lossModel(mainModel.output) #pass the output of one model into the other
fullModel = Model(mainModel.input, lossOut) #a model for training that follows this path through the graph
This model shares the exact same weights as mainModel and lossModel, so training it will affect the other models as well.
Make sure lossModel is not trainable before compiling:
lossModel.trainable = False
for l in lossModel.layers:
    l.trainable = False
fullModel.compile(loss='mse',optimizer=....)
Now adjust your data for training:
fullYTrain = lossModel.predict(originalYTrain)
And finally do the training:
fullModel.fit(xTrain, fullYTrain, ....)
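Once training is done, you would typically run inference with mainModel alone, since it shares the trained weights with fullModel. A minimal sketch, assuming xTest is your test input:

predictions = mainModel.predict(xTest) #mainModel already holds the trained weights; the VGG part is not needed at inference time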