This thought is courtesy of a paper by Cogito:
“We create an idea by processing some sort of sensory input—a sound, something read, a smell, a sight, a texture perceived by touch, a pain, etc. Then we associate that bit of sensory input with a relevant subset of ideas that already exist in our mind. If we see a dead rabbit on the road, for example, we instantly and subconsciously make many, seemingly unrelated, associations. We envision cuddly things, forest creatures, mammals, perhaps the “Trix are for kids” commercial, or a secular Easter icon. And since the rabbit is not living, we may even momentarily ponder the metaphysical properties of mortality. These associations, then, give rise to a whole range of conclusions, perceptions, feelings, moods and attitudes. In the computer, we don’t trouble ourselves with perceptions, feelings or moods, for those gifts are exclusively human.”
And it occurred to me that to create emotional computers, or at least convincing simulations thereof, you just have to associate reactions with certain nodes. So every time a stimulus accesses the node ‘death’, a potential gets created for fear and sadness. If a means exists to express this potential, similar to body language and such in humans, we might be getting close to an emotional computer. With reinforcement and learning thrown into the mix, it could only be a few more iterations to a full realization.
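To make the idea concrete, here is a minimal sketch of what I mean. All the names here (`EmotionalNode`, `stimulate`, `express`, the threshold value) are my own inventions for illustration, not anything from the Cogito paper: nodes carry associated emotional potentials that accumulate when a stimulus activates them, and expression fires once a potential crosses a threshold.

```python
class EmotionalNode:
    """A concept node tagged with the emotions it evokes."""
    def __init__(self, name, emotions):
        # emotions: mapping of emotion name -> potential added per activation
        self.name = name
        self.emotions = emotions

class EmotionalNetwork:
    def __init__(self):
        self.nodes = {}
        self.potentials = {}  # accumulated emotion -> strength

    def add_node(self, name, emotions):
        self.nodes[name] = EmotionalNode(name, emotions)

    def stimulate(self, name):
        # Accessing a node builds up its associated emotional potentials.
        node = self.nodes[name]
        for emotion, strength in node.emotions.items():
            self.potentials[emotion] = self.potentials.get(emotion, 0.0) + strength

    def express(self, threshold=1.0):
        # The analogue of body language: emit any emotion whose potential
        # has crossed the threshold, then discharge it.
        expressed = [e for e, p in self.potentials.items() if p >= threshold]
        for e in expressed:
            self.potentials[e] = 0.0
        return expressed

net = EmotionalNetwork()
net.add_node("death", {"fear": 0.6, "sadness": 0.8})
net.stimulate("death")  # one sighting: potentials build but stay sub-threshold
net.stimulate("death")  # a second pushes both past the threshold
print(net.express())
```

Reinforcement would then amount to adjusting the per-activation strengths over time, which is where the learning iterations would come in.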
The rest of http://www.cogitoinc.com/ is kind of interesting too, even if I don’t quite ‘get it’ yet. Or at least they don’t do a very good job of showing potential applications. Their main example is plant engineering, though the technology sounds potentially relevant to a universal programming system.