I have a set of fairly complicated models that I am training, and I am looking for a way to save and load their optimizer states. The "trainer models" consist of different combinations of several other "weight models", some of which have shared weights and some of which have frozen weights depending on the trainer, etc. The example is a bit too complicated to share, but in short, I am not able to use model.save('model_file.h5') and keras.models.load_model('model_file.h5') when stopping and restarting my training.

Using model.load_weights('weight_file.h5') works fine for testing my model once training has finished, but if I attempt to continue training the model using this method, the loss does not come anywhere close to returning to its last value. I have read that this is because the optimizer state is not saved by this method, which makes sense. However, I need a way to save and load the optimizer states of my trainer models. It seems that Keras once had model.optimizer.get_state() and model.optimizer.set_state() methods that would accomplish what I am after, but that no longer seems to be the case (at least for the Adam optimizer). Are there any other solutions with the current Keras?
Would calling model.optimizer.get_config(), saving this dictionary, and then setting each of those values on the trainer model optimizers before retraining accomplish this? – Abandon
get_config() only gets hyperparameters like lr, decay, etc. The optimizer's internal weights would not be returned by it. – Couscous
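To illustrate that distinction, here is a small sketch (Keras 2.x assumed; the toy model is hypothetical) showing that get_config() returns only hyperparameters, while the optimizer's internal state is exposed through its weight variables:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(1, input_shape=(3,))])
model.compile(optimizer='adam', loss='mse')
model.fit(np.random.rand(16, 3), np.random.rand(16, 1), epochs=1, verbose=0)

print(model.optimizer.get_config())
# hyperparameters only, e.g. {'lr': 0.001, 'beta_1': 0.9, 'beta_2': 0.999, ...}
print(len(model.optimizer.get_weights()))
# the iteration counter plus Adam's per-parameter m and v slots
```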
There is no get_state() on keras.__version__ 2.1.6, nor in master: github.com/keras-team/keras/blob/… It looks like they were removed: github.com/keras-team/keras/pull/437 – Gilliam
model.compile, then model.save_weights and model.load_weights seem to preserve the optimizer state with no problem. – Stemware
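One way to sanity-check that claim in a given setup (a hedged sketch; the toy model and file name are placeholders) is to compare the training loss just before saving with the loss on the first epoch after reloading; a large jump suggests the optimizer state was not actually restored:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x, y = np.random.rand(64, 5), np.random.rand(64, 1)

model = Sequential([Dense(8, input_shape=(5,)), Dense(1)])
model.compile(optimizer='adam', loss='mse')
model.fit(x, y, epochs=5, verbose=0)
print('loss before save:', model.evaluate(x, y, verbose=0))
model.save_weights('weights.h5')

# Fresh model with the same architecture and compile arguments
model2 = Sequential([Dense(8, input_shape=(5,)), Dense(1)])
model2.compile(optimizer='adam', loss='mse')
model2.load_weights('weights.h5')
hist = model2.fit(x, y, epochs=1, verbose=0)
print('loss after reload:', hist.history['loss'][0])
```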