Neural network gives different results for each execution
This is the exact code I'm running with Keras and TensorFlow as the backend. For each run of the same program, the training results are different. Sometimes it reaches 100% accuracy at the 400th epoch and sometimes at the 200th.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# XOR truth table: inputs and expected outputs
training_data = np.array([[0,0],[0,1],[1,0],[1,1]], "float32")
target_data = np.array([[0],[1],[1],[0]], "float32")

# Two-layer network: 4 hidden ReLU units, one sigmoid output
model = Sequential()
model.add(Dense(4, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='mean_squared_error',
              optimizer='adam',
              metrics=['binary_accuracy'])

model.fit(training_data, target_data, epochs=500, verbose=2)


Epoch 403/500
0s - loss: 0.2256 - binary_accuracy: 0.7500

So why does the result change with each execution when the training data is fixed? I would greatly appreciate an explanation.

Marielamariele asked 30/8, 2017 at 6:18

The training set is fixed, but the initial weights of the neural network are set to random values in a small range, so each time you train the network you get slightly different results.
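
As a quick illustration (a minimal sketch that reuses the architecture from the question; standalone Keras imports are assumed), building the model twice prints two different weight matrices for the first layer:

from keras.models import Sequential
from keras.layers import Dense

def build_model():
    # Same architecture as in the question
    m = Sequential()
    m.add(Dense(4, input_dim=2, activation='relu'))
    m.add(Dense(1, activation='sigmoid'))
    return m

# Each freshly built model starts from its own random weights,
# so the first-layer kernels printed below differ on every run.
print(build_model().get_weights()[0])
print(build_model().get_weights()[0])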

If you want reproducible results, you can fix the NumPy random seed with numpy.random.seed, so the same initial weights are used on every run, but be aware that always starting from the same weights can bias your results.
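
For example, a minimal sketch of fixing the seeds before building and training the model (assuming a TensorFlow 2.x backend; the seed value 42 is arbitrary):

import random
import numpy as np
import tensorflow as tf

# Fix every relevant random seed before building the model.
random.seed(42)          # Python's built-in RNG
np.random.seed(42)       # NumPy RNG used by the Keras weight initializers
tf.random.set_seed(42)   # TensorFlow's own RNG (TF 2.x, as noted in the comment below)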

Slier answered 30/8, 2017 at 6:46
Comment: For TF 2.0.0 I needed to do tf.random.set_seed(1). (Altheta)
