Unable to load Keras model in Keras 2.4.3 (with Tensorflow 2.3.0) that was saved in Keras 2.1.0 (with Tensorflow 1.3.0)

I'm implementing a Keras model with a custom batch-renormalization layer, which has 4 weights (beta, gamma, running_mean, and running_std) and 3 state variables (r_max, d_max, and t):

    self.gamma = self.add_weight(shape=shape,  # NK - shape = shape
                                 initializer=self.gamma_init,
                                 regularizer=self.gamma_regularizer,
                                 name='{}_gamma'.format(self.name))
    self.beta = self.add_weight(shape=shape,  # NK - shape = shape
                                initializer=self.beta_init,
                                regularizer=self.beta_regularizer,
                                name='{}_beta'.format(self.name))
    self.running_mean = self.add_weight(shape=shape,  # NK - shape = shape
                                        initializer='zero',
                                        name='{}_running_mean'.format(self.name),
                                        trainable=False)
    # Note: running_std actually holds the running variance, not the running std.
    self.running_std = self.add_weight(shape=shape, initializer='one',
                                       name='{}_running_std'.format(self.name),
                                       trainable=False)
    self.r_max = K.variable(np.ones((1,)), name='{}_r_max'.format(self.name))
    self.d_max = K.variable(np.zeros((1,)), name='{}_d_max'.format(self.name))
    self.t = K.variable(np.zeros((1,)), name='{}_t'.format(self.name))

When I checkpoint the model, only gamma, beta, running_mean, and running_std are saved (as expected), but when I try to load the model, I get this error:

    Layer #1 (named "batch_renormalization_1" in the current model) was found to correspond to layer batch_renormalization_1 in the save file. However the new layer batch_renormalization_1 expects 7 weights, but the saved weights have 4 elements.

So it looks like the model is expecting all 7 weights to be part of the saved file, even though some of them are state variables.
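
For reference, the weight count can be checked by building the layer on its own and printing layer.weights; the class name and input shape below are only placeholders for the actual custom layer.

    # Placeholder names: BatchRenormalization stands in for the custom layer
    # class, and (None, 64) for an input shape it accepts.
    layer = BatchRenormalization()
    layer.build((None, 64))

    # Under Keras 2.4.3 / TF 2.3 this lists 7 variables (the error above
    # suggests the K.variable attributes are now tracked as layer weights too);
    # under Keras 2.1.0 only the 4 created via add_weight were saved.
    for w in layer.weights:
        print(w.name, w.shape)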

Any insights as to how to get around this?

EDIT: I realized that the problem is that the model was trained and saved with Keras 2.1.0 (Tensorflow 1.3.0 backend), and I only get the error when loading it with Keras 2.4.3 (Tensorflow 2.3.0 backend). I am able to load the model with Keras 2.1.0.

So the real question is - what changed in Keras/Tensorflow, and is there a way to load older models without receiving this error?

Lowrie answered 5/9, 2020 at 19:30 Comment(0)

You cannot load the model this way, because keras.models.load_model rebuilds each layer from the saved configuration and does not know about anything that has been self-customized.

To get around this, you should re-declare the model architecture in code and load the weights into it instead:

model = YourModelDeclaration()
model.load_weights("checkpoint/h5file")

I had the same problem with a self-written custom BatchNormalization layer, so I'm fairly sure this is the only way to load it.
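
If a plain load_weights still fails because of the 4-vs-7 weight-count mismatch, one possible workaround (a sketch only, not verified against this exact model; build_model and the file name are placeholders) is to load by name while skipping the mismatched layer, then copy the four saved arrays over manually:

    import h5py

    def _base(name):
        # "batch_renormalization_1/..._gamma:0" -> "..._gamma"
        return name.split("/")[-1].split(":")[0]

    model = build_model()  # placeholder for the actual architecture definition

    # Load every layer whose name and weight count match; the batch-renorm
    # layer is skipped because it now expects 7 weights but the file has 4.
    model.load_weights("old_model.h5", by_name=True, skip_mismatch=True)

    # Copy the four saved arrays into the layer by variable name.
    with h5py.File("old_model.h5", "r") as f:
        # Weights-only files keep layer groups at the top level; full-model
        # saves keep them under "model_weights".
        root = f["model_weights"] if "model_weights" in f else f
        group = root["batch_renormalization_1"]
        names = [n.decode() if isinstance(n, bytes) else n
                 for n in group.attrs["weight_names"]]
        saved = {_base(n): group[n][()] for n in names}

    layer = model.get_layer("batch_renormalization_1")
    for var in layer.weights:
        if _base(var.name) in saved:
            var.assign(saved[_base(var.name)])

The three K.variable state variables keep their freshly initialized values, which matches what the Keras 2.1.0 checkpoint contained anyway, since they were never saved.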

Synchronic answered 9/9, 2020 at 6:48 Comment(3)
Thanks for your response, but load_weights doesn't work either. After some digging, I found that the error is actually caused by saving and then attempting to load on different versions of Keras/Tensorflow. So the real question is whether there's a way to load models saved in older versions of Keras without running into this error.Lowrie
Obviously, you should not save only the weights in one version and then load them in a newer one. But it is strange that even saving the whole model can't save you when moving back and forth :DSynchronic
It seems like this could be a common issue. In this case, I'm attempting to use a model created by someone else, and they didn't provide detailed information on what version of Tensorflow/Keras they used. So it took a bit of guess-and-check to get things working. It seems to me like model loading should work across versions, or sharing models will be very difficult.Lowrie

In Keras, there are two ways to save the state of your model.

You can call the model.save() and model.save_weights() functions.

model.save() saves the entire model, including the architecture, the weights, and the optimizer state. In your case, the 4 weights and 3 state variables will all be saved by this method. You can simply use load_model("path.h5") to get your model back.
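
Since the model here contains a custom layer, load_model generally also needs the layer class passed via custom_objects. A minimal sketch (the class and file names are placeholders, and this alone does not fix the cross-version weight-count mismatch described in the question):

    from tensorflow import keras

    # Placeholder names: BatchRenormalization is whatever the custom layer
    # class is called, and "model.h5" is the saved model file.
    model = keras.models.load_model(
        "model.h5",
        custom_objects={"BatchRenormalization": BatchRenormalization},
    )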

The model.save_weights() function only saves the weights of the model and does not save the structure at all. The important thing to note here is that the Keras ModelCheckpoint callback calls model.save_weights() under the hood when save_weights_only=True (otherwise it saves the full model). If you wish to use such a checkpoint, you must instantiate your model structure, model = customModel(), and then load the weights into it with model.load_weights("checkpoint.h5").
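
A minimal sketch of that workflow, assuming a hypothetical customModel() that builds the architecture (the checkpoint path and training call are placeholders too):

    import tensorflow as tf

    # customModel() is a placeholder for the actual architecture definition.
    model = customModel()
    model.compile(optimizer="adam", loss="mse")

    # save_weights_only=True makes the callback call model.save_weights()
    # instead of model.save() at each checkpoint.
    ckpt = tf.keras.callbacks.ModelCheckpoint("checkpoint.h5",
                                              save_weights_only=True)
    # model.fit(x_train, y_train, epochs=..., callbacks=[ckpt])  # training elided

    # Later: rebuild the same architecture and load the weights into it.
    model = customModel()
    model.load_weights("checkpoint.h5")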

Waverly answered 10/9, 2020 at 11:01 Comment(2)
Thanks for your response, but load_weights doesn't work either. After some digging, I found that the error is actually caused by saving and then attempting to load on different versions of Keras/Tensorflow. So the real question is whether there's a way to load models saved in older versions of Keras without running into this error.Lowrie
From my understanding, you cannot move between Keras/tf versions when saving and loading models.Waverly
