I have been in a similar situation to yours.
You can use a softmax activation function in the output layer together with categorical_crossentropy. To check other metrics such as precision, recall, and F1 score, you can use the sklearn library as follows:
import numpy as np
from sklearn.metrics import classification_report

y_pred = model.predict(x_test, batch_size=64, verbose=1)
y_pred_bool = np.argmax(y_pred, axis=1)  # convert predicted probabilities to class indices
# if y_test is one-hot encoded, convert it the same way: y_test = np.argmax(y_test, axis=1)
print(classification_report(y_test, y_pred_bool))
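For context, the kind of model this assumes ends in a softmax output layer, roughly like the sketch below (using the tf.keras API; the hidden layer size, input_shape, and num_classes are just placeholders for your own data):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

num_classes = 10  # placeholder: number of target classes
model = Sequential([
    Dense(64, activation='relu', input_shape=(20,)),  # placeholder hidden layer
    Dense(num_classes, activation='softmax'),         # softmax output pairs with categorical_crossentropy
])
You would then compile it as shown below.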
As for the training stage, as far as I know, there is the accuracy metric, which you can add as follows:
model.compile(loss='categorical_crossentropy', metrics=['acc'], optimizer='adam')
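With 'acc' in the metrics list, model.evaluate will then report accuracy alongside the loss on held-out data, roughly like this:
# returns [loss, accuracy] because 'acc' was passed in metrics
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print(f'test loss: {test_loss:.4f}, test accuracy: {test_acc:.4f}')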
If it helps, you can also plot the loss and accuracy history of your training stage using matplotlib, as follows:
import matplotlib.pyplot as plt

# 'checkpoint' is assumed to be a callback you defined earlier (see the sketch after the plots)
hist = model.fit(x_train, y_train, batch_size=24, epochs=1000, verbose=2,
                 callbacks=[checkpoint],
                 validation_data=(x_valid, y_valid))
# Plot training & validation accuracy values
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
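In case it's useful, the checkpoint callback passed to fit above could be defined along these lines; the file path and the monitored metric here are just examples:
from tensorflow.keras.callbacks import ModelCheckpoint

# example: keep only the weights with the best validation accuracy
checkpoint = ModelCheckpoint('best_model.h5',   # example path
                             monitor='val_acc', # matches the 'acc' metric name used above
                             save_best_only=True,
                             verbose=1)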