How to return history of validation loss in Keras
Using Anaconda Python 2.7 on Windows 10.

I am training a language model using the Keras example:

print('Build model...')
model = Sequential()
model.add(GRU(512, return_sequences=True, input_shape=(maxlen, len(chars))))
model.add(Dropout(0.2))
model.add(GRU(512, return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

def sample(a, temperature=1.0):
    # helper function to sample an index from a probability array
    a = np.log(a) / temperature
    a = np.exp(a) / np.sum(np.exp(a))
    return np.argmax(np.random.multinomial(1, a, 1))


# train the model, output generated text after each iteration
for iteration in range(1, 3):
    print()
    print('-' * 50)
    print('Iteration', iteration)
    model.fit(X, y, batch_size=128, nb_epoch=1)
    start_index = random.randint(0, len(text) - maxlen - 1)

    for diversity in [0.2, 0.5, 1.0, 1.2]:
        print()
        print('----- diversity:', diversity)

        generated = ''
        sentence = text[start_index: start_index + maxlen]
        generated += sentence
        print('----- Generating with seed: "' + sentence + '"')
        sys.stdout.write(generated)

        for i in range(400):
            x = np.zeros((1, maxlen, len(chars)))
            for t, char in enumerate(sentence):
                x[0, t, char_indices[char]] = 1.

            preds = model.predict(x, verbose=0)[0]
            next_index = sample(preds, diversity)
            next_char = indices_char[next_index]

            generated += next_char
            sentence = sentence[1:] + next_char

            sys.stdout.write(next_char)
            sys.stdout.flush()
        print()

According to the Keras documentation, the model.fit method returns a History callback, which has a history attribute containing the lists of successive losses and other metrics.

hist = model.fit(X, y, validation_split=0.2)
print(hist.history)

After training my model, if I run print(model.history) I get the error:

 AttributeError: 'Sequential' object has no attribute 'history'

How do I return my model history after training my model with the above code?

UPDATE

The issue was twofold.

First, the History callback had to be defined:

from keras.callbacks import History
history = History()

Second, it had to be passed to model.fit() via the callbacks option:

model.fit(X_train, Y_train, nb_epoch=5, batch_size=16, callbacks=[history])

But now if I print

print(history.history)

it returns

{}

even though I ran an iteration.

Suzerainty answered 30/4, 2016 at 8:45 Comment(3)
Could you specify if you run this code from console or do you run your script from command line (or IDE)? Do you have access to hist variable after training?Sesterce
I'm running it off Anaconda. I have found a solution that lets me access the hist variable. But it always returns an empty curly bracket.Suzerainty
is there a way to retrieve it after the model is fit. I.e. I trained the model but did not create a new variable model.fit(). Can I obtain the loss history somehow or do I have to repeat the whole training processDisrespect
47

It's been solved.

The losses are only saved to the History across the epochs of a single fit() call. I was running my own outer iterations instead of using the Keras built-in epochs option.

So instead of doing 4 iterations, I now have

model.fit(......, nb_epoch = 4)

Now it returns the loss for each epoch run:

print(hist.history)
{'loss': [1.4358016599558268, 1.399221191623641, 1.381293383180471, 1.3758836857303727]}
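The same pitfall is easy to reproduce without Keras: each call to fit() returns a fresh History, so an outer Python loop only ever keeps the last call's record. A stdlib-only sketch, where fake_fit is a hypothetical stand-in for model.fit (note that Keras 2+ spells the argument epochs rather than nb_epoch):

```python
import random

def fake_fit(epochs):
    """Hypothetical stand-in for model.fit: returns a fresh
    history dict with one loss value per epoch."""
    return {"loss": [round(1.5 - 0.1 * e + random.random() * 0.01, 4)
                     for e in range(epochs)]}

# Outer loop: each call replaces `hist`, so only the last
# single-epoch record survives -- the bug in the question.
for _ in range(4):
    hist = fake_fit(epochs=1)
print(len(hist["loss"]))   # -> 1

# Built-in epochs: one call, one history covering every epoch.
hist = fake_fit(epochs=4)
print(len(hist["loss"]))   # -> 4
```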
Suzerainty answered 30/4, 2016 at 15:0 Comment(1)
Note: calling model.predict(...) destroys all history.Shayla
61

Just an example, starting from

history = model.fit(X, Y, validation_split=0.33, nb_epoch=150, batch_size=10, verbose=0)

You can use

print(history.history.keys())

to list all data in history.

Then, you can print the history of validation loss like this:

print(history.history['val_loss'])
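Once you have that list, it can be post-processed like any Python list. For example, here is a stdlib-only sketch that finds the epoch with the lowest validation loss (the numbers are made up, standing in for a real history.history):

```python
# Made-up per-epoch values standing in for history.history.
history = {"val_loss": [0.62, 0.48, 0.41, 0.44, 0.46]}

val_loss = history["val_loss"]
# Index of the smallest validation loss.
best_epoch = min(range(len(val_loss)), key=val_loss.__getitem__)
print(best_epoch + 1, val_loss[best_epoch])  # -> 3 0.41 (epoch 3)
```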
Whitleather answered 26/9, 2017 at 8:58 Comment(3)
When I do this, I only get 'acc' and 'loss', I do not see 'val_loss'Pigment
@Pigment You would get both a "train_loss" and a "val_loss" if you had given the model both a training and a validation set to learn from: the training set would be used to fit the model, and the validation set could be used e.g. to evaluate the model on unseen data after each epoch and stop fitting if the validation loss ceases to decrease.Heartworm
12

The following simple code works great for me:

seqModel = model.fit(x_train, y_train,
                     batch_size=batch_size,
                     epochs=num_epochs,
                     validation_data=(x_test, y_test),
                     shuffle=True,
                     verbose=0,
                     callbacks=[TQDMNotebookCallback()])  # for visualization

Make sure you assign the return value of fit to a variable. Then you can access its history very easily:

# visualizing losses and accuracy
train_loss = seqModel.history['loss']
val_loss   = seqModel.history['val_loss']
train_acc  = seqModel.history['acc']
val_acc    = seqModel.history['val_acc']
xc         = range(num_epochs)

plt.figure()
plt.plot(xc, train_loss)
plt.plot(xc, val_loss)

Hope this helps. Source: https://keras.io/getting-started/faq/#how-can-i-record-the-training-validation-loss-accuracy-at-each-epoch

Afrikaans answered 2/5, 2018 at 14:52 Comment(1)
7

The dictionary with the histories of "acc", "loss", etc. is saved in the hist.history attribute.

Wallaroo answered 30/4, 2016 at 9:33 Comment(3)
If I type "hist" into the console it only gives me the code I've run this session.Suzerainty
And how about hist.history?Sesterce
Hi Marcin, I solved it. The issue was that the losses only save over epochs whilst I was running external iterations. So with each iteration my history clearedSuzerainty
5

I have also found that you can use verbose=2 to make Keras print the losses:

history = model.fit(X, Y, validation_split=0.33, nb_epoch=150, batch_size=10, verbose=2)

And that would print nice lines like this:

Epoch 1/1
 - 5s - loss: 0.6046 - acc: 0.9999 - val_loss: 0.4403 - val_acc: 0.9999

According to their documentation:

verbose: 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch.
Kauppi answered 17/3, 2018 at 7:18 Comment(0)
4

For plotting the loss directly, the following works:

import matplotlib.pyplot as plt
...    
model_ = model.fit(X, Y, epochs=..., verbose=1)
plt.plot(list(model_.history.values())[0], 'k-o')
Luckin answered 5/7, 2019 at 12:4 Comment(1)
3

Another option is CSVLogger: https://keras.io/callbacks/#csvlogger. It creates a CSV file, appending the result of each epoch. Even if you interrupt training, you get to see how it evolved.
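The resulting log is plain CSV with one row per epoch (columns such as epoch,loss,val_loss, depending on which metrics you track), so it can be read back with the stdlib csv module. A sketch, assuming that column layout:

```python
import csv
import io

# Stand-in for a file CSVLogger might have written; the column
# names are assumptions based on typical output.
log = io.StringIO("epoch,loss,val_loss\n"
                  "0,1.43,1.50\n"
                  "1,1.39,1.47\n"
                  "2,1.38,1.49\n")

rows = list(csv.DictReader(log))
val_loss = [float(r["val_loss"]) for r in rows]
print(val_loss)  # -> [1.5, 1.47, 1.49]
```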

Trifling answered 25/12, 2017 at 4:32 Comment(0)
2

Actually, you can also do it with the iteration approach, which is useful when you need to inspect the training results after each iteration rather than rely on the built-in epochs option.

history = []  # create an empty list to hold the losses
for iteration in range(1, 3):
    print()
    print('-' * 50)
    print('Iteration', iteration)
    result = model.fit(X, y, batch_size=128, nb_epoch=1)  # obtain the loss from this round of training
    history.append(result.history['loss'])  # append the loss to the list
    start_index = random.randint(0, len(text) - maxlen - 1)
print(history)

This way allows you to get the loss you want while maintaining your iteration method.
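Note that each result.history['loss'] is itself a list (one entry per epoch of that fit() call), so the history list above ends up nested; flattening it gives a single loss curve. A stdlib sketch with made-up values:

```python
from itertools import chain

# Each inner list stands in for result.history['loss'] from one
# single-epoch fit() call in the loop above.
history = [[1.44], [1.40], [1.38]]

flat = list(chain.from_iterable(history))
print(flat)  # -> [1.44, 1.4, 1.38]
```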

Percipient answered 17/5, 2018 at 5:3 Comment(1)
2

Thanks to Alloush: the following parameter must be included in model.fit():

validation_data = (x_test, y_test)

If it is not defined, val_acc and val_loss will not exist in the output.

Unwilling answered 17/10, 2020 at 9:27 Comment(3)
Welcome to SO! When you are about to answer an old question (this one is over 4 years old) that already has an accepted answer (this is the case here) please ask yourself: Do I really have a substantial improvement to offer? If not, consider refraining from answering.Tman
Respectfully, @Timus, code changes significantly over 4 years, and previous solutions that may have worked fine back in 2016 are not guaranteed to work in 2020 on different versions of Tensorflow. So answering an old question in such a way that it works with the latest version of a framework, I would argue, actually does offer a substantial improvement.Heartsome
@Heartsome I didn't judge the offered solution, downvoting never crossed my mind (I don't have the knowledge)! I just wanted to point out that the answer should actually offer something new.Tman
1

For those who, like me, still get an error:

Convert model.fit_generator() to model.fit()

Peden answered 14/4, 2020 at 8:22 Comment(0)
0

You can get the loss and metrics as below: the returned history object holds a dictionary, through which you can access the model loss (val_loss) and accuracy (val_accuracy):

model_hist=model.fit(train_data,train_lbl,epochs=my_epoch,batch_size=sel_batch_size,validation_data=val_data)

acc=model_hist.history['accuracy']

val_acc=model_hist.history['val_accuracy']

loss=model_hist.history['loss']

val_loss=model_hist.history['val_loss']

Don't forget that to get val_loss or val_accuracy you must specify validation data in the fit function.

Lemaster answered 3/3, 2022 at 21:2 Comment(4)
How is this different from the code that the asker included? Can you explain why this should work while it didn't for the asker?Vidovik
@Vidovik I edited code for more clarity: in first part of question the questioner accessed history in a wrong way, and in the update part questioner did not include validation_data in "fit" function which cause the val_loss be NULL. you can try the mentioned solution to check that it works.Lemaster
when you train the model you can save history in a variable like below: model_hist=my_model.fit(.....) next you can use model_hist every where you wantLemaster
0
I had the same problem with

history = model.fit(partial_train_data, partial_train_targets,
                    validation_data=(val_data, val_targets),
                    epochs=num_epochs, batch_size=1, verbose=0)
mae_history = history.history['val_mean_absolute_error']

The following worked for me instead:

mae_history = history.history['val_mae']
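The underlying issue is that Keras renamed its metric keys between versions (e.g. 'val_mean_absolute_error' became 'val_mae'). A stdlib-only sketch of a defensive lookup that tolerates either spelling (the dict is made up, standing in for history.history):

```python
# Made-up dict standing in for history.history; newer Keras
# versions abbreviate the metric key to 'val_mae'.
history = {"val_mae": [2.9, 2.6, 2.4]}

# Try the new key first, then fall back to the old spelling.
mae_history = (history.get("val_mae")
               or history.get("val_mean_absolute_error"))
print(mae_history)  # -> [2.9, 2.6, 2.4]
```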
Homologate answered 12/9, 2022 at 19:32 Comment(0)
0

Even in settings that rely on model.predict() (as in DQN), the loss and accuracy can still be obtained from the history returned by model.fit():

history = model.fit(inputs, targets)
print(history.params, history.history.keys())
train_loss = history.history['loss']
train_accuracy = history.history['accuracy']
print("train_loss:", train_loss)
print("train_accuracy", train_accuracy)
Patron answered 1/3 at 15:3 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.