CNN with Keras, accuracy not improving

I have recently started with machine learning and I am learning CNNs. I planned to write an application for car damage severity detection, with the help of this Keras blog and this GitHub repo.

This is what the car dataset looks like:

F:\WORKSPACE\ML\CAR_DAMAGE_DETECTOR\DATASET\DATA3A
├───training (979 Images for all 3 categories of training set)
│   ├───01-minor
│   ├───02-moderate
│   └───03-severe
└───validation (171 Images for all 3 categories of validation set)
    ├───01-minor
    ├───02-moderate
    └───03-severe

The following code gives me only 32% accuracy.

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K


# dimensions of our images.
img_width, img_height = 150, 150

train_data_dir = 'dataset/data3a/training'
validation_data_dir = 'dataset/data3a/validation'
nb_train_samples = 979
nb_validation_samples = 171
epochs = 10
batch_size = 16

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

model.save_weights('first_try.h5')

I tried:

  • Increasing the epochs to 10, 20, 50.
  • Increasing the number of images in the dataset (all validation images added to the training set).
  • Updating the filter size in the Conv2D layers.
  • Adding a couple of extra Conv2D and MaxPooling layers.
  • Trying different optimizers such as Adam, SGD, etc.
  • Updating the kernel size to (1, 1) and (5, 5) instead of (3, 3).
  • Changing the image dimensions from (150, 150) to (256, 256) and (64, 64).

But no luck: every time I get an accuracy of around 32% or less, never more. Any idea what I'm missing?

As we can see in the GitHub repo, it gives 72% accuracy for the same dataset (training: 979, validation: 171). Why is it not working for me?

I tried the code from the GitHub link on my machine, but it hung while training the dataset (I waited for more than 8 hours), so I changed the approach, but still no luck so far.

Here's the Pastebin containing the output of my training epochs.

Tedmann asked 28/4, 2018 at 18:05
I am not sure that it is directly causing your issue, but I think you want to use a softmax activation in the final layer and categorical cross-entropy instead of binary cross-entropy as your loss function. The options you have set are for binary (two-class) problems, and you have three classes. – Giblet
@Giblet - Worked like a charm. That was exactly the issue; I could not spot it as I am still learning CNNs. Please post it as an answer and I will accept it. Thanks again :) – Tedmann

The issue is caused by a mismatch between the number of output classes (three) and your choice of final-layer activation (sigmoid) and loss function (binary cross-entropy).

The sigmoid function 'squashes' real values into the range [0, 1], but it is designed for binary (two-class) problems only. For multiple classes you need something like the softmax function. Softmax is a generalised version of sigmoid (the two are equivalent when you have two classes).
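
For intuition, here is a minimal NumPy sketch (my own illustration, not from the original answer) showing that a two-class softmax reduces to the sigmoid, while three classes need a 3-way softmax:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - np.max(z))  # subtract max for numerical stability
    return e / e.sum()

z = 2.0
# A two-class softmax over logits [z, 0] gives the same probability
# for the first class as sigmoid(z).
print(softmax(np.array([z, 0.0]))[0])  # ~0.8808
print(sigmoid(z))                      # ~0.8808

# With three damage classes you need a 3-way softmax over three logits:
print(softmax(np.array([1.2, 0.3, -0.5])))  # three probabilities summing to 1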

The loss function also needs to be updated to one that can handle multiple classes; categorical cross-entropy will work in this case.
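
As a rough illustration (again my own sketch, not from the original answer), categorical cross-entropy compares the predicted probability vector against a one-hot target, so only the probability assigned to the true class contributes to the loss:

import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    # y_true is one-hot, y_pred is a probability vector (e.g. from softmax)
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -np.sum(y_true * np.log(y_pred))

y_true = np.array([0.0, 1.0, 0.0])   # true class: 02-moderate
good = np.array([0.1, 0.8, 0.1])     # mostly correct prediction
bad = np.array([0.7, 0.2, 0.1])      # mostly wrong prediction

print(categorical_crossentropy(y_true, good))  # ~0.22 (low loss)
print(categorical_crossentropy(y_true, bad))   # ~1.61 (high loss)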

In terms of code, if you modify the model definition and compilation code to the version below, it should work.

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(3))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

Finally, you need to specify class_mode='categorical' in your data generators. That ensures the output targets are formatted as a categorical 3-column matrix with a one in the column corresponding to the correct class and zeros elsewhere; this is the format the categorical_crossentropy loss function expects.
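
For completeness, the generator calls from the question would then look roughly like this (same paths and sizes as in the question; only class_mode changes):

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')  # one-hot (3-column) targets instead of binary

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')

# Optional sanity check: mapping from folder names to class indices
print(train_generator.class_indices)
# e.g. {'01-minor': 0, '02-moderate': 1, '03-severe': 2}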

Giblet answered 29/4, 2018 at 5:04
As per the other answer... don't you need a model.add(Dense(3)) at the end of the model for what you have written to make sense? – Lalise
@Lalise - Thanks for spotting that; I've edited the answer. – Giblet

Minor correction:

model.add(Dense(1))

Should be:

model.add(Dense(3))

It has to match the number of classes in the output.

Gazzo answered 9/10, 2018 at 18:10