Neural Network training with PyBrain won't converge [closed]

I have the following code, from the PyBrain tutorial:

from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure.modules import TanhLayer

ds = SupervisedDataSet(2, 1)
ds.addSample((0,0), (0,))
ds.addSample((0,1), (1,))
ds.addSample((1,0), (1,))
ds.addSample((1,1), (0,))

net     = buildNetwork(2, 3, 1, bias=True, hiddenclass=TanhLayer)
trainer = BackpropTrainer(net, ds)

for inp, tar in ds:
     print [net.activate(inp), tar]

errors  = trainer.trainUntilConvergence()

for inp, tar in ds:
     print [net.activate(inp), tar]

However, the result is a poorly trained network. Looking at the error output, the network does get trained properly at first, but the 'continueEpochs' argument then makes it train some more, and the network ends up performing worse again. So the network converges, but there is no way to get back the best-trained network. The PyBrain documentation implies that the best-trained network is returned, yet the method returns a tuple of errors.

When setting continueEpochs to 0 I get an error (ValueError: max() arg is an empty sequence), so continueEpochs must be larger than 0.

Is PyBrain actually maintained? It seems there is a big gap between the documentation and the code.

Olivann answered 21/8, 2012 at 7:53 Comment(1)
Ugh, the GitHub source shows more examples, solved in a completely different way than the documentation. – Olivann

After some more digging I found that the example in PyBrain's tutorial is completely out of place.

When we look at the method signature in the source code we find:

def trainUntilConvergence(self, dataset=None, maxEpochs=None, verbose=None, continueEpochs=10, validationProportion=0.25):

This means that 25% of the training set is used for validation. That is a perfectly valid method when training a network on real data, but not when you have the complete range of possibilities at your disposal, namely the 4-row, 2-in-1-out XOR solution set. When you train an XOR set and remove one of the rows for validation, the immediate consequence is a very sparse training set in which one of the possible combinations is omitted, which automatically means the corresponding weights are never trained.

Normally, when you set aside 25% of your data for validation, you assume that those 25% cover 'more or less' the same region of the solution space that the network has already encountered during training. In this case that is not true: the held-out 25% covers a part of the solution space that is completely unknown to the network, precisely because it was removed for validation.

So the trainer was training the network correctly, but omitting 25% of the XOR problem results in a badly trained network.

A different quickstart example on the PyBrain website would be very handy, because this one is just plain wrong for this specific XOR case. You might wonder whether they tried the example themselves, because it just outputs random, badly trained networks.
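
As a minimal sketch of one way around this (the 1000-epoch budget below is an illustrative guess, not a tuned value): skip the validation split entirely and train on all four rows with trainEpochs, which the same trainer provides.

from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure.modules import TanhLayer

# Full XOR truth table; every row stays in the training set.
ds = SupervisedDataSet(2, 1)
ds.addSample((0,0), (0,))
ds.addSample((0,1), (1,))
ds.addSample((1,0), (1,))
ds.addSample((1,1), (0,))

net = buildNetwork(2, 3, 1, bias=True, hiddenclass=TanhLayer)
trainer = BackpropTrainer(net, ds)

# trainEpochs trains on the whole dataset each epoch, so nothing is held out.
trainer.trainEpochs(1000)

for inp, tar in ds:
    print [net.activate(inp), tar]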

Olivann answered 21/8, 2012 at 8:23 Comment(3)
Thanks for this! That example had me so confused. The docs say XOR is a classic neural network example, but then the sample code gave terrible answers for me. – Kanishakanji
It is a classic neural network example because it shows that, using a combination of non-linear functions (sigmoids), you can train a network to learn binary logic. However, their tutorial is terrible, since they don't seem to grasp the concept themselves. – Olivann
I recommend you fork the GitHub project, turn the example into what you think it should be, and then make a pull request. – Incorrupt

I took the excellent Machine Learning class on Coursera, taught by Andrew Ng, and one part of the class covered training a small neural net to recognize XOR. So I was a bit troubled when the PyBrain example, based on parts of the quickstart, did not converge.

I think there are many reasons, including the one above about the minimal dataset being split into training and validation. At one point in the course Andrew said, "It's not the person with the best algorithm that wins, it's the one with the most data." And he went on to explain that the explosion in data availability in the 2000s is part of the reason for the resurgence in AI, now called Machine Learning.

So with all that in mind I found that

  1. The validation set can have 4 samples, because validation comes after the training phase.
  2. The network only needs 2 nodes in the hidden layer, as I learned in the class.
  3. The learning rate needs to be pretty small in this case, like 0.005, or else the training will sometimes skip over the answer (this is an important point from the class that I confirmed by playing with the numbers).
  4. The smaller the learning rate, the smaller maxEpochs can be. A small learning rate means that convergence takes smaller steps along the gradient toward the minimum. If it's bigger, you need a bigger maxEpochs so that the trainer waits longer before deciding it has hit a minimum.
  5. You need bias=True in the network (which adds a constant-1 node to the input and hidden layers). Read the answers to this question about bias.
  6. Finally, and most importantly, you need a big training set. A set of 1000 samples converged on the right answer about 75% of the time. I suspect this has to do with the minimization algorithm. Smaller numbers failed frequently.

So here's some code that works:

from pybrain.datasets import SupervisedDataSet

dataModel = [
    [(0,0), (0,)],
    [(0,1), (1,)],
    [(1,0), (1,)],
    [(1,1), (0,)],
]

ds = SupervisedDataSet(2, 1)
for input, target in dataModel:
    ds.addSample(input, target)

# create a large random data set
import random
random.seed()
trainingSet = SupervisedDataSet(2, 1)
for ri in range(0, 1000):
    input, target = dataModel[random.getrandbits(2)]
    trainingSet.addSample(input, target)

from pybrain.tools.shortcuts import buildNetwork
net = buildNetwork(2, 2, 1, bias=True)

from pybrain.supervised.trainers import BackpropTrainer
trainer = BackpropTrainer(net, ds, learningrate = 0.001, momentum = 0.99)
trainer.trainUntilConvergence(verbose=True,
                              trainingData=trainingSet,
                              validationData=ds,
                              maxEpochs=10)

print '0,0->', net.activate([0,0])
print '0,1->', net.activate([0,1])
print '1,0->', net.activate([1,0])
print '1,1->', net.activate([1,1])
Loya answered 10/12, 2013 at 4:30 Comment(0)

# net and ds are the network and dataset defined in the question's code
trainer = BackpropTrainer(net, ds, learningrate=0.9, momentum=0.0, weightdecay=0.0, verbose=True)
trainer.trainEpochs(epochs=1000)

Trained this way, the network converges. If the learning rate is too small (e.g. 0.01), it gets stuck in a local minimum. In my tests, learning rates in the range 0.3-30 converged.
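
To see the effect, here is a rough sketch of a learning-rate sweep (the tested rates and the 1000-epoch budget are illustrative choices, not values taken from the answer above):

from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.tools.shortcuts import buildNetwork

# The XOR dataset, and a fresh network per learning rate.
ds = SupervisedDataSet(2, 1)
for inp, tar in [((0,0), (0,)), ((0,1), (1,)), ((1,0), (1,)), ((1,1), (0,))]:
    ds.addSample(inp, tar)

for rate in [0.01, 0.3, 0.9, 3.0]:
    net = buildNetwork(2, 3, 1, bias=True)
    trainer = BackpropTrainer(net, ds, learningrate=rate, momentum=0.0, weightdecay=0.0)
    for _ in range(1000):
        err = trainer.train()   # one epoch over the full dataset; returns the epoch error
    print 'learningrate', rate, '-> final epoch error', err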

Vulcanize answered 23/3, 2014 at 14:42 Comment(1)
This still does not address the fact that the example is not valid, since it uses a subset for validation and omits that subset during training, under the false assumption that the remaining set covers most of the solution space. – Olivann

The following seems to consistently give the right results:

from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure import TanhLayer
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

#net = buildNetwork(2, 3, 1, bias=True, hiddenclass=TanhLayer)
net = buildNetwork(2, 3, 1, bias=True)

ds = SupervisedDataSet(2, 1)
# Add the four XOR patterns six times each (24 samples in total), so that the
# random 25% validation split made by trainUntilConvergence is very likely to
# leave every pattern represented in the training portion.
for _ in range(6):
    ds.addSample((0, 0), (0,))
    ds.addSample((0, 1), (1,))
    ds.addSample((1, 0), (1,))
    ds.addSample((1, 1), (0,))

trainer = BackpropTrainer(net, ds, learningrate=0.001, momentum=0.99)

trainer.trainUntilConvergence(verbose=True)

print net.activate([0,0])
print net.activate([0,1])
print net.activate([1,0])
print net.activate([1,1])
Dieback answered 4/2, 2016 at 19:16 Comment(1)
Yes, but you add the same samples multiple times, which means that when the training algorithm separates the training and validation sets there is a high chance that the training set is complete. However, there is also a chance that it is not... Although you've worked around the issue, it doesn't change the fact that the example on the website is blatantly wrong. – Olivann
