Synthesizing Emotions in Machines
One of our research goals stands somewhat apart from the rest: synthesizing emotions in machines -- and with this, building machines that not only appear to "have" emotions, but actually do have internal mechanisms analogous to human or animal emotions. In a machine (or software agent, or virtual creature) that "has" emotions, the synthesis model decides which emotional state the machine (or agent, or creature) should be in. That emotional state then influences subsequent behavior.
Some forms of synthesis act by reasoning about emotion generation. For example: if a person has a big exam tomorrow and has encountered several delays today, then he or she might feel stressed and particularly intolerant of certain behaviors, such as interruptions unrelated to preparing for the exam. A synthesis model of this kind reasons about circumstances (exam, delays) and suggests which emotion(s) are likely to be present (stress, annoyance).
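The exam/delays example above can be sketched as a small rule-based synthesis model. This is a minimal illustration, not the group's actual model: the rule conditions, the context keys, and the emotion labels are all hypothetical.

```python
# A minimal rule-based emotion synthesis sketch (hypothetical rules).
# Each rule maps observed circumstances to the emotions it suggests.
RULES = [
    # Big exam tomorrow plus repeated delays today -> stress and annoyance.
    (lambda ctx: ctx["big_exam_tomorrow"] and ctx["delays_today"] >= 2,
     ["stress", "annoyance"]),
    # Big exam tomorrow alone -> milder anxiety.
    (lambda ctx: ctx["big_exam_tomorrow"],
     ["mild anxiety"]),
]

def synthesize(ctx):
    """Return the emotions suggested by the first matching rule."""
    for condition, emotions in RULES:
        if condition(ctx):
            return emotions
    return ["neutral"]

print(synthesize({"big_exam_tomorrow": True, "delays_today": 3}))
# ['stress', 'annoyance']
```

The synthesized state could then be used to bias the agent's behavior, e.g. deferring non-urgent interruptions while "stress" is active.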
The ability to synthesize emotions via reasoning about them, i.e., knowing that certain conditions tend to produce certain affective states, is also important for emotion recognition. Recognition is often considered the "analysis" part of modeling something -- analyzing what emotion is present. Synthesis is the inverse of analysis -- constructing the emotion. The two can operate in a system of checks and balances: recognition can proceed by synthesizing several possible cases, then asking which case most closely resembles what is perceived. This approach to recognition is sometimes called "analysis by synthesis."
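The analysis-by-synthesis loop described above can be sketched as follows: a forward model synthesizes the observable features each candidate emotion would produce, and recognition picks the candidate whose synthesized features best match the observation. The emotion labels, feature choices, and numeric signatures here are purely illustrative assumptions.

```python
import math

# Hypothetical forward model: illustrative feature signatures per emotion.
# Features: (heart-rate change in bpm, speech-rate change in syllables/sec).
SIGNATURES = {
    "stress":     (15.0,  1.2),
    "calm":       (-2.0, -0.3),
    "excitement": (12.0,  1.5),
}

def synthesize_features(emotion):
    """Synthesis: predict the observable features an emotion would produce."""
    return SIGNATURES[emotion]

def recognize(observed):
    """Analysis by synthesis: synthesize features for each candidate emotion
    and return the one closest (Euclidean distance) to the observation."""
    return min(SIGNATURES, key=lambda e: math.dist(synthesize_features(e), observed))

print(recognize((14.0, 1.1)))
# stress
```

Real systems would use a far richer forward model, but the structure is the same: recognition is a search over what synthesis can generate.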
Synthesis models can also operate without explicit reasoning. We are exploring the need for machines to "have" emotions in a bodily sense. The importance of this follows from the work of Damasio and others, who have studied patients who essentially do not have "enough emotions" and consequently suffer from impaired rational decision making. The nature of their impairment is oddly similar to that of today's Boolean decision-making machines, and of AI's brittle expert systems. Recent findings suggest that in humans, emotions are essential for flexible and rational decision making. Our hypothesis is that emotional mechanisms will likewise be essential for machines to achieve flexible and rational decision making, as well as truly creative thought and a variety of other human-like cognitive capabilities.