Tensorboard AttributeError: 'ModelCheckpoint' object has no attribute 'on_train_batch_begin'

I'm currently using TensorBoard together with the callback below, as outlined in this SO post.

from keras.callbacks import ModelCheckpoint

CHECKPOINT_FILE_PATH = '/{}_checkpoint.h5'.format(MODEL_NAME)
checkpoint = ModelCheckpoint(CHECKPOINT_FILE_PATH, monitor='val_acc', verbose=1, save_best_only=True, mode='max', period=1)
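
The checkpoint is then collected into the callbacks_list that gets passed to model.fit further down, presumably along the lines of:

callbacks_list = [checkpoint]  # assumed: the list of callbacks handed to fit()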

When I run Keras' DenseNet model, I get the following error. I haven't had any issues running TensorBoard in this manner with any of my other models, which makes this error very strange. According to this GitHub post, the official solution is to use the official TensorBoard implementation; however, that requires upgrading to TensorFlow 2.0, which is not ideal for me. Does anyone know why I'm getting this error for this specific DenseNet, and is there a known workaround or fix?

AttributeError                            Traceback (most recent call last)
in ()
     26                     batch_size=32,
     27                     class_weight=class_weights_dict,
---> 28                     callbacks=callbacks_list
     29                    )
     30

2 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
    245       t_before_callbacks = time.time()
    246       for callback in self.callbacks:
--> 247         batch_hook = getattr(callback, hook_name)
    248         batch_hook(batch, logs)
    249         self._delta_ts[hook_name].append(time.time() - t_before_callbacks)

AttributeError: 'ModelCheckpoint' object has no attribute 'on_train_batch_begin'

The DenseNet model I'm running:

import numpy as np

from tensorflow.keras import layers, Sequential
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.densenet import preprocess_input, DenseNet121
from keras.optimizers import SGD, Adagrad
from keras.utils.np_utils import to_categorical

IMG_SIZE = 256
NUM_CLASSES = 5
NUM_EPOCHS = 100

x_train = np.asarray(x_train)
x_test = np.asarray(x_test)

y_train = to_categorical(y_train, NUM_CLASSES)
y_test = to_categorical(y_test, NUM_CLASSES)


x_train = x_train.reshape(x_train.shape[0], IMG_SIZE, IMG_SIZE, 3)
x_test = x_test.reshape(x_test.shape[0], IMG_SIZE, IMG_SIZE, 3)

densenet = DenseNet121(
    include_top=False,
    input_shape=(IMG_SIZE, IMG_SIZE, 3)
)

model = Sequential()
model.add(densenet)
model.add(layers.GlobalAveragePooling2D())
model.add(layers.Dense(NUM_CLASSES, activation='softmax'))
model.summary()

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

history = model.fit(x_train,
                    y_train,
                    epochs=NUM_EPOCHS,
                    validation_data=(x_test, y_test),
                    batch_size=32,
                    class_weight=class_weights_dict,
                    callbacks=callbacks_list
                   )
Duumvirate answered 20/7, 2019 at 8:8 Comment(0)

In your imports you are mixing keras and tf.keras, which are NOT compatible with each other; mixing the two is exactly how you end up with weird errors like this one.

So the simple solution is to pick either keras or tf.keras, make all of your imports from that package, and never mix it with the other.
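
For example, a minimal sketch of the question's imports rewritten to use only tf.keras (each of these modules exists under tensorflow.keras):

from tensorflow.keras import layers, Sequential
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.densenet import preprocess_input, DenseNet121
from tensorflow.keras.optimizers import SGD, Adagrad
from tensorflow.keras.utils import to_categorical   # replaces keras.utils.np_utils
from tensorflow.keras.callbacks import ModelCheckpoint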

Kremlin answered 20/7, 2019 at 8:57 Comment(0)

I replaced this line

from keras.callbacks import EarlyStopping, ModelCheckpoint

with this line

from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
Borer answered 19/6, 2020 at 17:30 Comment(1)
This is essentially the same guidance as the three existing answers from a year ago. – Postmark

Make all imports from either keras or tensorflow.keras

I hope this will sort it out!

Anonym answered 14/11, 2019 at 15:0 Comment(1)
This is exactly what the accepted answer says NOT to do. – Madiemadigan

Yes, the imports are mixed between keras and tensorflow.keras.

Try sticking to tensorflow.keras throughout, for example:

from tensorflow.keras.callbacks import EarlyStopping
Stumble answered 16/5, 2020 at 13:29 Comment(0)
