How to utilize Hebbian learning?

I want to upgrade my evolution simulator to use Hebbian learning, like this one. I basically want small creatures to be able to learn how to find food. I achieved that with basic feedforward networks, but I'm stuck on understanding how to do it with Hebbian learning. The basic principle of Hebbian learning is that if two neurons fire together, they wire together.

So, the weights are updated like this:

weight_change = learning_rate * input * output

The information I've found on how this can be useful is pretty scarce, and I don't get it.

In my current version of the simulator, the weights between an action and an input (movement, eyes) are increased when a creature eats a piece of food, and I fail to see how that can translate into this new model. There is simply no way to tell whether it did something right or wrong here, because the only parameters are input and output! Basically, if one input activates movement in one direction, the weight would just keep on increasing, whether or not the creature is eating anything!
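
Here is a minimal sketch of what I mean (the numbers are made up):

learning_rate = 0.1
weight = 0.5
x = 1.0                               # this input neuron keeps firing
for step in range(5):
    y = weight * x                    # so the output fires too
    weight += learning_rate * x * y   # Hebbian update: fire together, wire together
    print(step, weight)
# the weight grows every step, whether or not the creature ate anything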

Am I applying Hebbian learning in the wrong way? Just for reference, I'm using Python.

Macdonald answered 17/5, 2012 at 17:44

Hebb's law is a brilliant insight for associative learning, but it's only part of the picture. And you are right: implemented as you have done and left unchecked, a weight will just keep on increasing. The key is to add in some form of normalisation or limiting process. This is illustrated quite well on the wiki page for Oja's rule. What I suggest you do is add a post-synaptic divisive normalisation step, which means that you divide each weight by the sum of all the weights converging on the same post-synaptic neuron (i.e. the sum of all weights converging on a neuron is fixed at 1).
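
A minimal sketch of that normalisation step in Python (assuming non-negative float weights; all names are illustrative):

import numpy as np

def hebbian_step(weights, x, y, learning_rate=0.1):
    # weights: (n_outputs, n_inputs); x: pre-synaptic activity; y: post-synaptic activity
    weights += learning_rate * np.outer(y, x)      # Hebbian term: fire together, wire together
    weights /= weights.sum(axis=1, keepdims=True)  # each neuron's incoming weights now sum to 1
    return weights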

What you want to do can be done by building a network that utilises Hebbian learning. I'm not quite sure what you are passing in as input to your system, or how you've set things up. But you could look at LISSOM, which is a Hebbian extension of SOM (the self-organising map).

In a layer of this kind, typically all the neurons are interconnected. You pass in the input vector and allow the activity in the network to settle over some number of settling steps; then you update the weights. You do this during the training phase, at the end of which associated items in the input space will tend to form grouped activity patches in the output map.
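
A rough sketch of that settle-then-update loop in Python (the lateral weight matrix W_lat, the rectification, and the step count are illustrative simplifications, not LISSOM's actual equations):

import numpy as np

def settle_and_update(W_in, W_lat, x, settle_steps=10, learning_rate=0.05):
    y = np.maximum(W_in @ x, 0.0)                  # initial feedforward activity
    for _ in range(settle_steps):                  # let lateral activity settle
        y = np.maximum(W_in @ x + W_lat @ y, 0.0)
    W_in += learning_rate * np.outer(y, x)         # Hebbian update after settling
    W_in /= W_in.sum(axis=1, keepdims=True)        # divisive normalisation, as above
    return W_in, y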

It's also worth noting that the brain is massively interconnected and highly recurrent (i.e. there is feedforward, feedback, and lateral interconnectivity, microcircuits, and a lot of other stuff too...).

Fafnir answered 20/5, 2012 at 19:51

Although Hebbian learning, as a general concept, forms the basis for many learning algorithms, including backpropagation, the simple linear formula which you use is very limited. Not only do weights rise without bound, even when the network has learned all the patterns, but the network can perfectly learn only mutually orthogonal patterns.

Linear Hebbian learning is not even biologically plausible. Biological neural networks are much bigger than yours and are highly non-linear, both the neurons and the synapses between them. In big, non-linear networks, the chances that your patterns are close to orthogonal are higher.

So, if you insist on using a neural network, I suggest adding hidden layers of neurons and introducing non-linearities, both in the weights, e.g. as fraxel proposed, and in the firing of neurons; here you might use a sigmoid function, like tanh (yes, using negative values for "non-firing" is good, since it can lead to reducing weights). In its generalized form, the Hebbian rule can be expressed as

weight_change = learning_rate * f1(input, weight) * f2(output, target_output)

where f1 and f2 are some functions. In your case, there is no target_output, so f2 is free to ignore it.
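
For instance, a minimal sketch in Python with f1(input, weight) = input and tanh as the output non-linearity (all names here are illustrative):

import numpy as np

def generalized_step(weights, x, learning_rate=0.01):
    y = np.tanh(weights @ x)                   # f2: non-linear firing, can go negative
    weights += learning_rate * np.outer(y, x)  # f1(input, weight) = input; negative y shrinks weights
    return weights, y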

In order to have neurons in your hidden layers fire, and thus to get a connection between input and output, you can initialize the weights to random values.

But is a neural network really necessary, or even suitable for your problem? Have you considered simple correlation? I mean, Hebb derived his rule to explain how learning might function in biological systems, not as the best possible machine learning algorithm.

Buitenzorg answered 23/5, 2012 at 21:27

I'm not very well acquainted with this type of neural network, but it looks like you're expecting it to work like a supervised update method, while it is unsupervised. This means you can't teach it what is right; it will only learn what is different, by association. That is, it will eventually associate actions with particular clusters of inputs. In your situation, where you want it to improve its decision-making through feedback, I don't think Hebbian updates alone will suffice. You could combine it with some sort of backpropagation though.
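
Alternatively, you could modulate the Hebbian update with a scalar reward signal, a so-called reward-modulated (three-factor) Hebbian rule, which avoids backpropagation entirely. A minimal sketch, with illustrative names:

import numpy as np

def reward_modulated_step(weights, x, y, reward, learning_rate=0.1):
    # reward is a scalar: e.g. +1 when the creature just ate food, 0 otherwise,
    # so correlations are only reinforced when they led to a good outcome
    weights += learning_rate * reward * np.outer(y, x)
    return weights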

Afterheat answered 20/5, 2012 at 1:51
I want the networks not to be feedforward, since brains don't work like that, and I think that loops might be cool. So, no feedback, you say... Could you elaborate how exactly that would work? Because it still seems broken: it will associate input A with output B just because the starting weights happen to be set like that, and then it will increase the connection between them indefinitely, it seems. – Macdonald

You can try my code:

package modelhebb;

/**
 * Simple Hebbian learning of the bipolar AND function:
 * the target is +1 only when both inputs are +1.
 *
 * @author Raka
 */
public class ModelHebb {
    public static void main(String[] args) {
        // the four training patterns (bipolar inputs)
        int[][] xinput = {
            {1, 1},
            {1, -1},
            {-1, 1},
            {-1, -1}
        };
        // bipolar AND targets
        int[] xtarget = {1, -1, -1, -1};
        // weights and bias start at zero
        int[] xweight = new int[xinput[0].length];
        int bias = 0;
        System.out.println("\t Iteration \t");
        for (int i = 0; i < xtarget.length; i++) {
            for (int j = 0; j < xinput[i].length; j++) {
                // Hebbian update: w_j += input_j * target
                xweight[j] += xinput[i][j] * xtarget[i];
                System.out.print("W" + j + ": " + xweight[j] + "\t");
            }
            // bias update: b += target
            bias += xtarget[i];
            System.out.println("Bias: " + bias);
        }
    }
}
Haematothermal answered 16/3, 2017 at 7:38
