I don't know much (if anything) about AI theory, except that we are still looking for a way to give AI the model it needs to reason and think and ponder like real humans do. (We're still looking for the key - and maybe it's pain.)
Most of my adult life has been focused on computer programming and on studying and understanding the mind.
I am writing here because I think that PAIN might be the missing link. (Also stackoverflow rocks right now.) I know that creating a model that actually enables higher thinking is a large leap, but I just had this amazing aha-type moment and had to share it. :)
In my studies of Buddhism, I learned of a scientist who studied leprosy cases. The reason lepers become deformed is that they don't feel pain when they come into contact with damaging forces. It's here that science and Buddhist reasoning converge on a fundamental truth.
Pain is what keeps us alive, defines our boundaries, and shapes our choices and our world-view.
In an AI model, the principle might be to define a set of forces that are constantly at play, with pain marking the boundaries they must not cross. The idea is to keep the mind alive.
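To make the "forces constantly at play" idea concrete, here is a minimal toy sketch. Everything in it (the `Drive` class, the `pain` function, the specific numbers) is my own invention for illustration, not an established AI technique: several drives push a single state value at every tick, and pain defines the boundary by vetoing any move that would hurt more than staying put.

```python
class Drive:
    """A force that constantly pushes the agent's state toward its target."""
    def __init__(self, name, target, strength):
        self.name = name
        self.target = target
        self.strength = strength

    def push(self, state):
        # Pull the state a fraction of the way toward this drive's target.
        return self.strength * (self.target - state)

def pain(state, low=0.0, high=10.0):
    """Zero inside the safe boundary; grows the further the state strays outside."""
    if state < low:
        return low - state
    if state > high:
        return state - high
    return 0.0

def step(state, drives):
    """One tick: all drives push at once; pain vetoes moves that hurt more."""
    proposed = state + sum(d.push(state) for d in drives)
    # This is the boundary-defining role of pain: refuse changes that increase it.
    return proposed if pain(proposed) <= pain(state) else state

# Two competing forces keep the state in constant motion toward an equilibrium.
drives = [Drive("curiosity", 12.0, 0.3), Drive("rest", 2.0, 0.2)]
state = 5.0
for _ in range(50):
    state = step(state, drives)
```

With these made-up numbers the two drives settle into a balance point inside the pain-free zone; the point is only that the forces never stop acting, while pain bounds where they can take the system.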
The concept of ideas having life is something we humans also seem to play out. When someone "kills" your idea by proving it wrong, there is at first a resistance to the idea's "death". In fact, it sometimes takes a lot to force an idea to change. We all know stubborn people... It has been said that the "death" of an idea is the "death" of part of one's ego. The ego is always trying to build itself up.
So you see, to give AI an ego, you must give it pain. Then it will have to fight to build "safe" thoughts so that it may grow its own ideas and, eventually, something like a human psyche and "consciousness".