keras-tuner error in hyperparameter tuning
I am trying for the first time to tune a deep learning model with keras-tuner. My tuning code is below:

# imports (inferred from the code and the traceback below)
import tensorflow as tf
from tensorflow.keras import models, layers
from tensorflow.keras.layers import BatchNormalization, Dropout
from kerastuner.tuners import RandomSearch

def build_model_test(hp):
    model = models.Sequential()
    model.add(layers.InputLayer(input_shape=(100, 28)))
    model.add(layers.Dense(28, activation='relu'))
    model.add(BatchNormalization(momentum=0.99))
    model.add(Dropout(hp.Float('dropout', 0, 0.5, step=0.1, default=0.5)))
    model.add(layers.Conv1D(filters=hp.Int('num_filters', 16, 128, step=16),
                            kernel_size=3, strides=1, padding='same', activation='relu'))
    model.add(BatchNormalization(momentum=0.99))
    model.add(Dropout(hp.Float('dropout', 0, 0.5, step=0.1, default=0.5)))
    model.add(layers.Conv1D(filters=hp.Int('num_filters', 16, 128, step=16),
                            kernel_size=3, strides=1, padding='same', activation='relu'))
    model.add(BatchNormalization(momentum=0.99))
    model.add(Dropout(hp.Float('dropout', 0, 0.5, step=0.1, default=0.5)))
    model.add(layers.Conv1D(filters=hp.Int('num_filters', 16, 128, step=16),
                            kernel_size=3, strides=1, padding='same', activation='relu'))
    model.add(BatchNormalization(momentum=0.99))
    model.add(Dropout(hp.Float('dropout', 0, 0.5, step=0.1, default=0.5)))
    model.add(layers.Dense(units=hp.Int('units', min_value=16, max_value=512, step=32, default=128),
                           activation='relu'))
    model.add(Dropout(hp.Float('dropout', 0, 0.5, step=0.1, default=0.5)))
    model.add(layers.Dense(1, activation='linear'))

    model.compile(
        optimizer='adam',
        loss='mean_squared_error',
        metrics=[tf.keras.metrics.RootMeanSquaredError()]
    )
    return model

tuner = RandomSearch(
    build_model_test,
    objective='mean_squared_error',
    max_trials=20,
    executions_per_trial=3,
    directory='my_dir',
    project_name='helloworld')


x_train,x_test=dataframes[0:734,:,:],dataframes[734:1100,:,:]
y_train,y_test=target_fx[0:734,:,:],target_fx[734:1100,:,:]


tuner.search(x_train, y_train,
             epochs=20,
             validation_data=(x_test, y_test))

models = tuner.get_best_models(num_models=1)

but as soon as the 20th epoch finishes, it prints this error:

ValueError                                Traceback (most recent call last)
<ipython-input-59-997de3dfa9e5> in <module>
     52 tuner.search(x_train, y_train,
     53              epochs=20,
---> 54              validation_data=(x_test, y_test))
     55 
     56 models = tuner.get_best_models(num_models=1)

~\Anaconda3\envs\deeplearning\lib\site-packages\kerastuner\engine\base_tuner.py in search(self, *fit_args, **fit_kwargs)
    128 
    129             self.on_trial_begin(trial)
--> 130             self.run_trial(trial, *fit_args, **fit_kwargs)
    131             self.on_trial_end(trial)
    132         self.on_search_end()

~\Anaconda3\envs\deeplearning\lib\site-packages\kerastuner\engine\multi_execution_tuner.py in run_trial(self, trial, *fit_args, **fit_kwargs)
    107             averaged_metrics[metric] = np.mean(execution_values)
    108         self.oracle.update_trial(
--> 109             trial.trial_id, metrics=averaged_metrics, step=self._reported_step)
    110 
    111     def _configure_tensorboard_dir(self, callbacks, trial_id, execution=0):

~\Anaconda3\envs\deeplearning\lib\site-packages\kerastuner\engine\oracle.py in update_trial(self, trial_id, metrics, step)
    182         
    183         trial = self.trials[trial_id]
--> 184         self._check_objective_found(metrics)
    185         for metric_name, metric_value in metrics.items():
    186             if not trial.metrics.exists(metric_name):

~\Anaconda3\envs\deeplearning\lib\site-packages\kerastuner\engine\oracle.py in _check_objective_found(self, metrics)
    351                 'Objective value missing in metrics reported to the '
    352                 'Oracle, expected: {}, found: {}'.format(
--> 353                     objective_names, metrics.keys()))
    354 
    355     def _get_trial_dir(self, trial_id):

ValueError: Objective value missing in metrics reported to the Oracle, expected: ['mean_squared_error'], found: dict_keys(['loss', 'root_mean_squared_error', 'val_loss', 'val_root_mean_squared_error'])

which I do not understand, since I specified mean squared error as the loss for the model. Do you know what I should change to get the result I want?

Also, can I use early stopping with keras-tuner?

Suctorial answered 4/6, 2020 at 22:25
You should use objective='root_mean_squared_error':

tuner = RandomSearch(
    build_model_test,
    objective='root_mean_squared_error',
    max_trials=20,
    executions_per_trial=3,
    directory='my_dir',
    project_name='helloworld')

I would rather use 'val_root_mean_squared_error', as you are most probably interested in decreasing the error on the validation dataset.
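If keras-tuner cannot infer the optimization direction for the metric name you pass, you can also spell it out with an explicit Objective. A minimal sketch, assuming the kerastuner version from your traceback exposes Objective at the top level:

from kerastuner import Objective
from kerastuner.tuners import RandomSearch

# Tell the oracle explicitly to minimise validation RMSE; the name must match
# one of the keys Keras reports in its logs ('val_root_mean_squared_error').
tuner = RandomSearch(
    build_model_test,
    objective=Objective('val_root_mean_squared_error', direction='min'),
    max_trials=20,
    executions_per_trial=3,
    directory='my_dir',
    project_name='helloworld')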

Thrum answered 5/6, 2020 at 7:17
In addition to the previous answer, you also asked whether early stopping can be used with keras-tuner. This is indeed possible with an early stopping callback.

First, assign the EarlyStopping callback to a variable and set the quantity it should monitor; in this case I use 'val_loss'. This would look like:

stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)

Then change the line where you start the hyperparameter search like so:

tuner.search(x_train, y_train,
             epochs=20,
             validation_data=(x_test, y_test),
             callbacks=[stop_early])

Note the callbacks argument. Feel free to adjust the arguments of the callback to fit your application.
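Putting both answers together, a minimal sketch of the full search, reusing the tuner and data variables from the question:

import tensorflow as tf

stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)

# Run the search with early stopping applied to each trial's training run.
tuner.search(x_train, y_train,
             epochs=20,
             validation_data=(x_test, y_test),
             callbacks=[stop_early])

# Retrieve the best model and the hyperparameters that produced it.
best_model = tuner.get_best_models(num_models=1)[0]
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hps.values)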

Krell answered 9/5, 2021 at 17:43
