Why is the accuracy for my Keras model always 0 when training?

I have built a simple Keras network:

import numpy as np

from keras.models import Sequential
from keras.layers import Dense, Activation

data = np.genfromtxt("./kerastests/mydata.csv", delimiter=';')
x_target = data[:, 29]
x_training = np.delete(data, 6, axis=1)
x_training = np.delete(x_training, 28, axis=1)

model = Sequential()
model.add(Dense(20, activation='relu', input_dim=x_training.shape[1]))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))

model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
model.fit(x_training, x_target)

From my source data, I have removed two columns, as you can see. One held dates as strings (the dataset already has separate columns for the day, the month, and the year, so I don't need it), and the other is the column I use as the target for the model.

When I train this model I get this output:

Epoch 1/10
 32/816 [>.............................] - ETA: 23s - loss: 13541942.0000 - acc: 0.0000e+00
800/816 [============================>.] - ETA: 0s - loss: 11575466.0400 - acc: 0.0000e+00 
816/816 [==============================] - 1s - loss: 11536905.2353 - acc: 0.0000e+00     
Epoch 2/10
 32/816 [>.............................] - ETA: 0s - loss: 6794785.0000 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5381360.4314 - acc: 0.0000e+00     
Epoch 3/10
 32/816 [>.............................] - ETA: 0s - loss: 6235184.0000 - acc: 0.0000e+00
800/816 [============================>.] - ETA: 0s - loss: 5199512.8700 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5192977.4216 - acc: 0.0000e+00     
Epoch 4/10
 32/816 [>.............................] - ETA: 0s - loss: 4680165.5000 - acc: 0.0000e+00
736/816 [==========================>...] - ETA: 0s - loss: 5050110.3043 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5168771.5490 - acc: 0.0000e+00     
Epoch 5/10
 32/816 [>.............................] - ETA: 0s - loss: 5932391.0000 - acc: 0.0000e+00
768/816 [===========================>..] - ETA: 0s - loss: 5198882.9167 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5159585.9020 - acc: 0.0000e+00     
Epoch 6/10
 32/816 [>.............................] - ETA: 0s - loss: 4488318.0000 - acc: 0.0000e+00
768/816 [===========================>..] - ETA: 0s - loss: 5144843.8333 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5151492.1765 - acc: 0.0000e+00     
Epoch 7/10
 32/816 [>.............................] - ETA: 0s - loss: 6920405.0000 - acc: 0.0000e+00
800/816 [============================>.] - ETA: 0s - loss: 5139358.5000 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5169839.2941 - acc: 0.0000e+00     
Epoch 8/10
 32/816 [>.............................] - ETA: 0s - loss: 3973038.7500 - acc: 0.0000e+00
672/816 [=======================>......] - ETA: 0s - loss: 5183285.3690 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5141417.0000 - acc: 0.0000e+00     
Epoch 9/10
 32/816 [>.............................] - ETA: 0s - loss: 4969548.5000 - acc: 0.0000e+00
768/816 [===========================>..] - ETA: 0s - loss: 5126550.1667 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5136524.5098 - acc: 0.0000e+00     
Epoch 10/10
 32/816 [>.............................] - ETA: 0s - loss: 6334703.5000 - acc: 0.0000e+00
768/816 [===========================>..] - ETA: 0s - loss: 5197778.8229 - acc: 0.0000e+00
816/816 [==============================] - 0s - loss: 5141391.2059 - acc: 0.0000e+00    

Why is this happening? My data is a time series. I know that people don't usually use Dense layers for time series, but this is just a test. What really puzzles me is that the accuracy is always 0. And, in other tests, the loss even gets to a NaN value.

Could anybody help here?

Linkous answered 11/8, 2017 at 10:08
This issue should be in every ML framework's FAQ, and a warning message should be implemented. – Euphonious
Related: What function defines accuracy in Keras when the loss is mean squared error (MSE)? – Spoor

Your model seems to correspond to a regression model, for the following reasons:

  • You are using linear (the default) as the activation function in the output layer (and relu in the layer before it).

  • Your loss is loss='mean_squared_error'.

However, the metric that you use, metrics=['accuracy'], corresponds to a classification problem. If you want to do regression, remove metrics=['accuracy']. That is, use

model.compile(optimizer='adam', loss='mean_squared_error')
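For completeness, here is a minimal sketch (my addition, not part of the original answer, reusing x_training and x_target from the question) of the model recompiled for regression, with a proper regression metric such as mean absolute error in place of accuracy:

from keras.models import Sequential
from keras.layers import Dense

# Rebuild the question's model; the linear output layer suits regression.
model = Sequential()
model.add(Dense(20, activation='relu', input_dim=x_training.shape[1]))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))

# Use a regression metric ('mae' is the Keras alias for mean_absolute_error).
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae'])
model.fit(x_training, x_target, epochs=10)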

Here is a list of Keras metrics for regression and classification (taken from this blog post):

Keras Regression Metrics

• Mean Squared Error: mean_squared_error, MSE or mse

• Mean Absolute Error: mean_absolute_error, MAE, mae

• Mean Absolute Percentage Error: mean_absolute_percentage_error, MAPE, mape

• Cosine Proximity: cosine_proximity, cosine

Keras Classification Metrics

• Binary Accuracy: binary_accuracy, acc

• Categorical Accuracy: categorical_accuracy, acc

• Sparse Categorical Accuracy: sparse_categorical_accuracy

• Top k Categorical Accuracy: top_k_categorical_accuracy (requires you specify a k parameter)

• Sparse Top k Categorical Accuracy: sparse_top_k_categorical_accuracy (requires you specify a k parameter)
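As an illustration of the last two entries, here is a sketch of my own (assuming tf.keras 2.x): the string aliases above can be passed directly, while the top-k metrics take their k parameter through a metric object:

from tensorflow import keras

# Hypothetical 10-class classifier over 20 features.
clf = keras.Sequential([
    keras.layers.Dense(10, activation='softmax', input_shape=(20,)),
])
clf.compile(optimizer='adam',
            loss='sparse_categorical_crossentropy',
            metrics=['sparse_categorical_accuracy',
                     keras.metrics.SparseTopKCategoricalAccuracy(k=3)])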

Tana answered 11/8, 2017 at 10:14
Why do you say my output layer is using relu? From the sources, it seems that if you don't specify anything, there's no activation function. Am I missing something? On the other hand, can I only use 'accuracy' with a classification model? I'm a bit mixed up at this point :) – Linkous
@Linkous the default activation function does indeed appear to be linear. I updated my answer. However, that doesn't change the fact that, as currently written, the model corresponds to a regression problem. Regarding your second question, 'accuracy' can indeed be used only for classification (it measures the percentage of correct labels). – Tana
Here are a couple of links that I found helpful when trying to understand regression models while working with Keras. The explanations in this GitHub issue are extremely useful to someone who is just getting started. Also, here is an interesting SO question. – Adlai

Add the following to get metrics:

   # Note: compile() returns None; the History object is returned by fit().
   model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mean_squared_error'])
   # OR
   model.compile(optimizer='adam', loss='mean_absolute_error', metrics=['mean_absolute_error'])

   history = model.fit(x_training, x_target)
   history.history.keys()
   history.history
Admiration answered 22/3, 2019 at 21:40
What is the point of adding the loss function as a metric a second time? – Scribner
The metric allows you to view the accuracies. Without it, you can only view the loss. – Flog
As implied by @Scribner, this is just redundant and meaningless for viewing purposes (both loss and metrics are reported, as can easily be seen from the output posted by the OP above). This does not answer the question (the cause of which is adequately described in the accepted answer above). – Spoor

I would like to point out something very important that has unfortunately been neglected: mean_squared_error is not an invalid loss function for classification.

The mathematical properties of cross-entropy, in conjunction with the assumptions behind mean_squared_error (neither of which I will expand upon here), make the latter inappropriate, or at least worse than cross-entropy, for training on classification problems.
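A small numeric sketch of my own (not part of the original answer) of one such property: with a sigmoid output that is confidently wrong, the MSE gradient with respect to the pre-activation nearly vanishes, while the cross-entropy gradient stays large:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y = 1.0          # true label
z = -5.0         # pre-activation of a confidently wrong prediction
p = sigmoid(z)   # ~0.0067

# d(MSE)/dz = 2 * (p - y) * p * (1 - p): squashed by the factor p * (1 - p)
grad_mse = 2 * (p - y) * p * (1 - p)   # ~ -0.013 (learning stalls)

# d(cross-entropy)/dz = p - y: stays large when the prediction is wrong
grad_ce = p - y                        # ~ -0.99 (learning proceeds)

print(grad_mse, grad_ce)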

Nevlin answered 26/1, 2020 at 21:42

Another possible cause: missing values in the input data.

While trying to solve the Titanic problem on Kaggle, I forgot to fill in the missing data in the DataFrame, so the missing values stayed as NaN.

The model produced similar output:

Epoch 1/50
891/891 [==============================] - 3s 3ms/step - loss: 9.8239 - acc: 0.0000e+00
Epoch 2/50
891/891 [==============================] - 1s 2ms/step - loss: 9.8231 - acc: 0.0000e+00
Epoch 3/50
891/891 [==============================] - 1s 1ms/step - loss: 9.8231 - acc: 0.0000e+00
Epoch 4/50
891/891 [==============================] - 1s 1ms/step - loss: 9.8231 - acc: 0.0000e+00
Epoch 5/50
891/891 [==============================] - 1s 1ms/step - loss: 9.8231 - acc: 0.0000e+00

Make sure you prepare your data before feeding it to the model.

In my case, I had to make the following changes:

dataset[['Age']] = dataset[['Age']].fillna(value=dataset[['Age']].mean())
dataset[['Fare']] = dataset[['Fare']].fillna(value=dataset[['Fare']].mean())
dataset[['Embarked']] = dataset[['Embarked']].fillna(value=dataset['Embarked'].value_counts().idxmax())
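As a quick sanity check (a sketch of my own, not part of the original answer; dataset is the DataFrame above, and x_train is a hypothetical name for whatever array you pass to fit()), verify that no NaNs remain before training:

import numpy as np

print(dataset.isna().sum())          # per-column count of missing values
assert not np.isnan(x_train).any()   # x_train: hypothetical array passed to fit()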
Zwinglian answered 23/4, 2020 at 16:32
This does not address the issue, which, as pointed out in the accepted answer, comes from using accuracy in a regression setting, where it is not even defined (Titanic is a classification problem). – Spoor

It seems that you are compiling the model in regression mode. However, the accuracy metric is used for classification:

accuracy = (TP + TN) / (TP + TN + FP + FN)

As the equation shows, computing accuracy requires counting correctly and wrongly predicted classes. As a result, this metric is not suitable for regression.

You can see the complete list of classification and regression metrics at https://www.tensorflow.org/api_docs/python/tf/keras/metrics

The best way to solve this problem is either not to use metrics at all, or to use specific regression metrics, i.e. metrics=[<regression metric>]. For example:

model.compile(optimizer='adam', loss='mean_squared_error')
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mean_absolute_error', 'msle'])
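A short usage sketch of my own (reusing x_training and x_target from the question): after compiling with regression metrics, fit() reports them per epoch and returns them in its History object:

model.compile(optimizer='adam', loss='mean_squared_error',
              metrics=['mean_absolute_error', 'msle'])
history = model.fit(x_training, x_target, epochs=10)
print(history.history.keys())  # e.g. dict_keys(['loss', 'mean_absolute_error', 'msle'])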
Rimarimas answered 26/6 at 15:37
Reminder: Answers generated by artificial intelligence tools are not allowed on Stack Overflow. Learn more. – Undermine
But this is not produced by artificial intelligence, I wrote it myself :( – Rimarimas
Well, no worries then. – Undermine
