Giving a neural network "pain"

I've programmed a non-directional neural network. So kind of like the brain, all neurons are updated at the same time, and there are no explicit layers.

Now I'm wondering: how does pain work? How can I structure a neural network so that a "pain" signal will make it want to do anything to get rid of said pain?

Muirhead answered 19/2, 2011 at 21:28 Comment(2)
You should look into reinforcement learning and the POMDP problem. (Caitlyncaitrin)
This is as much of a philosophical question as it is a programming question. (Catchup)

It doesn't really work quite like that. The network you have described is too simple to have a concept like pain that it would try to get rid of. At a low level, pain is nothing but another input, and that alone obviously doesn't make the network "dislike" it.

To get such a response, you could train the network to perform certain actions whenever it receives this particular signal. As the training becomes more refined, the signal starts to look like a real pain signal, but it is nothing more than specific training of the network.
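
To make that concrete, here is a minimal sketch of the reinforcement-learning idea mentioned in the comments above: "pain" is nothing but a strongly negative reward, and ordinary Q-learning teaches the agent to leave the state that produces it. The two-state world and all of the numbers are invented for illustration.

```python
import random

# A toy world: state 0 is "in pain", state 1 is "safe".
# Action 0 = stay put, action 1 = move to the other state.
def step(state, action):
    next_state = 1 - state if action == 1 else state
    reward = -10.0 if next_state == 0 else 0.0  # "pain" is just a negative reward
    return next_state, reward

# Tabular Q-learning: Q[state][action]
Q = [[0.0, 0.0], [0.0, 0.0]]
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

state = 0
for _ in range(5000):
    if random.random() < epsilon:
        action = random.randrange(2)                      # explore
    else:
        action = max((0, 1), key=lambda a: Q[state][a])   # exploit
    next_state, reward = step(state, action)
    # Standard Q-learning update
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

# Q[0][1] ends up well above Q[0][0]: the learned policy does the only thing
# it can to get out of the pain state.
print(Q)
```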

The pain signal in higher animals has this "do anything to get rid of it" response because higher animals have rather advanced cognitive abilities compared to the network you have described. Worms, on the other hand, might respond to a "pain" input in a very specific way: twitching in a certain way. It's hard-wired that way, and to say that the worm tries to do anything to get rid of the signal would be wrong; it's more like a motor connected to a button that spins every time you press the button.

Realistic mechanisms for getting artificial neural networks to do useful things are collectively known as "neural network training", which is a large and complex research area. You can google that phrase to get various ideas.

You should be aware, however, that neural networks are not a panacea for solving hard problems; they don't automatically get things done through magic. Using them effectively requires a good deal of experimentation with training algorithms and network parameters.

Lund answered 19/2, 2011 at 21:44 Comment(1)
Thanks, I've yet to get my head around neural nets. This is for the brain of some simulated creatures by the way :). (Muirhead)

I don't know much (if anything) about AI theory, except that we are still looking for a way to give AI the model it needs to reason and think and ponder like real humans do. (We're still looking for the key - and maybe it's pain.)

Most of my adult life has been focused on computer programming and studying and understanding the mind.

I am writing here because I think that PAIN might be the missing link. (Also stackoverflow rocks right now.) I know that creating a model that actually enables higher thinking is a large leap, but I just had this amazing aha-type moment and had to share it. :)

In my studies of Buddhism, I learned of a scientist who studied leprosy cases. The reason lepers become deformed is that they don't feel pain when they come into contact with damaging forces. It's here that science and Buddhist reasoning converge on a fundamental truth.

Pain is what keeps us alive, defines our boundaries, and shapes how we make our choices and our world-view.

In an AI model, the principle would perhaps be to define a series of forces that are constantly at play. The idea is to keep the mind alive.

The concept of ideas having life is something we humans also seem to play out. When someone "kills" your idea by proving it wrong, there is at first a resistance to the "death" of the idea. In fact, it sometimes takes a lot to force an idea to change. We all know stubborn people... It has been said that the "death" of an idea is the "death" of part of one's ego. The ego is always trying to build itself up.

So you see, to give AI an ego, you must give it pain, and then it will have to fight to build "safe" thoughts so that it may grow its own ideas and, eventually, a human-like psyche and "consciousness".

Protractile answered 9/3, 2011 at 20:52 Comment(2)
Far out! +1 for that, interesting ideas! I often think of how an AI could be created that thinks like us, and I've realised that you could be right. Making an AI feel pain when its "ego" is attacked, or emotional pain, could well be the start of AI that thinks like us. (Muirhead)
@grigb: I find how you describe your idea very interesting. From my experience I have to add that the one instruction every form of life has is "SURVIVE", and pain is the key, as you said. For example, when we are in a dangerous or unpleasant situation, we try to avoid it because we want to avoid pain or damage. If you are interested, I would like to talk with you about Buddhism and some other aspects of the human mind. I have a background in Computer Science, but am also interested in the amazing mind. (Wnw)

Artificial neural networks do not recognize such a thing as "pain", but they can be trained to avoid certain states. In a Hopfield network, the final state of the network is attained at the energy minimum closest to the starting state. The starting state, in this context, is the state in which the network is in "pain". If you train the network to have a local energy minimum at a state where the "pain" is gone, it should update itself until that state is reached. A simple way to train a Hopfield network is to assign a weight to each interaction between neurons according to Hebb's rule, which is given by: Wij = (1/n) * [i] * [j].

Here Wij is the weight of the connection between neuron i and neuron j, n is the total number of neurons in the network, and [i] and [j] are the states of neurons i and j, respectively, which can take the values 1 or -1. Once you have built the weight matrix for a state in which the "pain" does not exist, the network should, most of the time, settle into that state regardless of the initial state.
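
A minimal numpy sketch of the above, assuming a single stored "pain-free" pattern (so Hebb's rule reduces to the one outer product given in the formula); the network size and the number of corrupted bits are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16

# The "pain-free" state we want the network to settle into (entries are +1/-1).
target = rng.choice([-1, 1], size=n)

# Hebb's rule for a single stored pattern: W[i][j] = (1/n) * target[i] * target[j]
W = np.outer(target, target) / n
np.fill_diagonal(W, 0)  # no self-connections in a Hopfield network

# Start from a corrupted ("pain") state and update neurons asynchronously.
state = target.copy()
flip = rng.choice(n, size=4, replace=False)
state[flip] *= -1

for _ in range(10):  # a few sweeps suffice with only one stored pattern
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(np.array_equal(state, target))  # True: the network relaxed to the pain-free minimum
```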

Variola answered 12/8, 2013 at 2:24 Comment(0)

Think of a neural network as a multi-dimensional surface whose shape is set by the weights. Training the network basically places high and low points on that surface: a depression is a desired output, and a highland is an undesired output. The idea of training is to put the depressions in the areas that matter. Pain would look like a giant mountain, so an input neuron representing pain would have a very high probability of producing an undesired output.
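
Read as code, "pain looks like a giant mountain" just means the training data consistently labels pain-active inputs as undesired, so the learned weights push those inputs toward the "avoid" output. A toy logistic-unit sketch, where the feature layout and labels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Features: [pain, sensor_a, sensor_b]; label 1 = "undesired outcome".
X = rng.integers(0, 2, size=(200, 3)).astype(float)
y = (X[:, 0] == 1).astype(float)  # in the training data, pain always means "undesired"

w, b, lr = np.zeros(3), 0.0, 0.5
for _ in range(500):  # plain gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# w[0] comes out large and positive: the pain input towers over the other
# two, the "giant mountain" in the landscape picture above.
print(w)
```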

But pain isn't the only thing that makes a creature behave the way it does. Pain to a tree doesn't cause much of a reaction. In animals, pain causes physiological reactions such as a surge in adrenaline, which brings a heightened state of awareness and a big uptick in energy consumption. To model the behavior of pain, you must model these mechanisms so that a pain stimulus produces the appropriate output. In a NN, I imagine it would need to be a recurrent neural network, so that the pain has a duration proportionate to the input and the creature you are modeling avoids the pain for longer than the stimulus lasts. This would be a healing period.
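
A minimal sketch of that healing period, assuming nothing more than a single recurrent unit holding a decaying trace of the pain input (the decay constant is arbitrary):

```python
# A one-unit recurrent "pain trace": each step it keeps a fraction of its
# previous value and adds the current pain input, so the felt pain decays
# gradually after the stimulus ends -- the healing period.
DECAY = 0.8  # arbitrary; closer to 1.0 means a longer healing period

def pain_trace(pain_inputs):
    trace = 0.0
    history = []
    for p in pain_inputs:
        trace = DECAY * trace + p
        history.append(round(trace, 3))
    return history

# The stimulus lasts 3 steps, but the trace stays elevated well afterwards,
# so avoidance behaviour can outlast the stimulus itself.
print(pain_trace([1, 1, 1, 0, 0, 0, 0, 0]))
```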

NNs tend to be more tree-like. By modeling an energy state with an energy cost, the creature would use minimal energy to survive, but spend a lot of energy if doing so moves it into the desired state faster than the cost of remaining in the undesired pain state. Going back to the hyperplane, this would look like a higher velocity off of the pain highland and into a desired "safe" depression. The magnitude of the vector into the nearest depression is the motivation level of the NN to avoid pain. Training should produce this naturally, adding heavy negative weights and biases to the pain inputs by always making the pain input result in a wrong answer, assuming the energy and awareness reactions are modeled into a recurrent neural net.

Prudery answered 2/2, 2014 at 21:46 Comment(0)

I may have a partial answer to this question of how pain can be expressed in a neural network. For reference, the base network I use is an HTM algorithm. It is essentially a series of interconnected layers, each predicting its next input; correct predictions are reinforced using Hebbian logic.

Theoretically, there could be some connections between layers that are gated, and each gate could only be opened by sufficient activation in another layer. That other layer would be rigged to learn to recognize new patterns only in the context of the pain trigger. Therefore, in the presence of a stimulus that anticipates pain, the gated channel would open, creating a simulated attention system for the recognition of future pain. While this is not pain in itself, it is similar to fear.
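
This is not real HTM code, but the gating idea itself fits in a few lines: activity passes through the inter-layer channel only when a separate pain-context layer is active enough. The threshold and layer sizes here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

GATE_THRESHOLD = 0.5  # invented value: how active the pain-context layer must be

def gated_forward(x, W, pain_context_activity):
    """Pass x through the gated inter-layer connection W only when the
    pain-context layer is sufficiently active; otherwise the channel is closed."""
    gate_open = pain_context_activity.mean() > GATE_THRESHOLD
    return W @ x if gate_open else np.zeros(W.shape[0])

W = rng.normal(size=(4, 8))  # weights of the gated channel between two layers
x = rng.normal(size=8)       # activity of the sending layer

print(gated_forward(x, W, np.array([0.1, 0.2])))  # gate closed: all zeros
print(gated_forward(x, W, np.array([0.9, 0.8])))  # gate open: signal flows
```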

Septuple answered 27/4, 2011 at 17:53 Comment(0)
