"Affective Understanding": Modeling and Responding to User Affect
Once the Sensing and Recognition
modules have made their best attempt to translate user signals into patterns that signify
the user's emotional responses, the system may now be said to be primitively
aware of the user's immediate emotional state. But what can be done with
this information? How will applications be able to make sense of this moment-to-moment
update on the user's emotional state, and make use of it? The Affective
Understanding module will use, process, and store this information
to build and maintain a model of the user's emotional life at several
levels of granularity, from quick, specific combinations of affective
responses to meta-patterns of moods and other emotional responses. This
module will communicate knowledge from this model to the other modules
in the system.
The Affective Understanding module will eventually be able to incorporate
contextual information about the user and his/her environment, to generate
appropriate responses to the user that incorporate the user's emotional
state, the user's cognitive abilities, and his/her environmental situation.
Features of the module
The Affective Understanding module may:
- Absorb information, by receiving a constant data stream on the
user's current emotional state from the Recognition module.
- Remember the information, by keeping track of the user's emotional
responses via storage in short-, medium-, and long-term memory buffers.
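Such tiered storage might be sketched as follows. The (emotion, intensity) sample format, the buffer lengths, and the consolidation rule are all illustrative assumptions rather than part of the design:

```python
from collections import deque

class AffectMemory:
    """Tiered store for the Recognition module's affect stream.
    Buffer lengths are arbitrary illustrative choices."""

    def __init__(self):
        self.short_term = deque(maxlen=60)   # raw samples, e.g. the last minute
        self.medium_term = deque(maxlen=24)  # summaries, e.g. the last day
        self.long_term = []                  # unbounded archive of summaries

    def absorb(self, sample):
        """Store one (emotion, intensity) reading from Recognition."""
        self.short_term.append(sample)

    def consolidate(self):
        """Summarize the short-term buffer into the longer-term stores."""
        if not self.short_term:
            return None
        # Summary: the most frequent emotion and its mean intensity.
        emotions = {}
        for emotion, intensity in self.short_term:
            emotions.setdefault(emotion, []).append(intensity)
        dominant = max(emotions, key=lambda e: len(emotions[e]))
        mean = sum(emotions[dominant]) / len(emotions[dominant])
        summary = (dominant, mean)
        self.short_term.clear()
        self.medium_term.append(summary)
        self.long_term.append(summary)
        return summary

mem = AffectMemory()
for reading in [("frustration", 0.4), ("frustration", 0.6), ("calm", 0.2)]:
    mem.absorb(reading)
print(mem.consolidate())  # ('frustration', 0.5)
```

A real module would consolidate on a schedule and down-sample the medium-term buffer into long-term storage the same way; the sketch shows only the single short-to-medium step.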
- Model the user's current mood, by detecting meta-patterns in
the user's emotional responses over time, comparing these patterns to the
user's previously-defined moods, and possibly canonical, universal or archetypal
definitions of human moods.
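A minimal sketch of such mood matching, assuming a mood can be described as a typical distribution over recent emotional responses; the profiles and the distance measure here are invented for illustration:

```python
# Match the user's recent emotional responses against stored mood
# profiles. The profiles and the similarity measure are illustrative
# assumptions; a real module would learn these per user.

MOOD_PROFILES = {
    # mood -> typical share of each emotion among recent responses
    "irritable": {"anger": 0.5, "frustration": 0.4, "calm": 0.1},
    "serene":    {"anger": 0.0, "frustration": 0.1, "calm": 0.9},
}

def mood_of(recent_emotions):
    """Pick the profile closest (L1 distance) to the observed mix."""
    total = len(recent_emotions)
    observed = {e: recent_emotions.count(e) / total
                for e in set(recent_emotions)}
    def distance(profile):
        keys = set(profile) | set(observed)
        return sum(abs(profile.get(k, 0.0) - observed.get(k, 0.0))
                   for k in keys)
    return min(MOOD_PROFILES, key=lambda m: distance(MOOD_PROFILES[m]))

print(mood_of(["anger", "frustration", "anger", "calm"]))  # irritable
```

The user-specific profiles mentioned in the text would replace or extend the canonical table above as the system learns.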
- Model the user's emotional life. Recognize patterns in the way
the user's emotional states change over time, to generate a model of
the user's emotional life: the typical types of emotional state
that the user experiences, mood variation, degrees of intensity (e.g.
mildly put off vs. enraged), and pattern combinations (e.g. a tendency
toward anger followed by depression).
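One simple way to capture pattern combinations such as "anger followed by depression" is to tally transitions between successive emotional states; a sketch, with the state labels chosen purely for illustration:

```python
from collections import Counter

def transition_tendencies(state_history):
    """Count consecutive state pairs to expose patterns such as a
    tendency toward anger followed by depression."""
    pairs = Counter(zip(state_history, state_history[1:]))
    total = sum(pairs.values())
    # Share of all observed transitions taken by each state pair.
    return {pair: count / total for pair, count in pairs.items()}

history = ["anger", "depression", "calm", "anger", "depression"]
model = transition_tendencies(history)
print(model[("anger", "depression")])  # 0.5
```

Longer-range patterns (mood variation over days or weeks) would need the same idea applied to the summaries in the longer-term memory buffers rather than to raw states.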
- Apply the user affect model. This model may help the Affective
Understanding module by informing the actions that this module decides
to take--actions that use the Applications
and Interface modules to customize the interaction
between user and system, to predict user responses to system behavior,
and to eventually make predictions about the user's interaction with environmental
stimuli. The Affective Understanding module's actions may be in the form
of selecting an application to open, for example an application which might
interactively query the user regarding an emotional state. Or the module
might supply such an application with needed information, for instance
enabling the application to offer the user assistance that the user might
want or need. Alternatively, the Affective Understanding module might automatically
respond with a pre-user-approved action for the given situation, such as
playing uplifting music when the user is feeling depressed. The Affective
Understanding module might even open an application that can carry on a
therapeutically-motivated conversation with its user via a discreet speech
interface, perhaps discussing the user's feelings with the user after sensing
that the user is alone and extremely upset.
- Update the user affect model. This model must be inherently
dynamic in order to reflect the user's changing response patterns over
time. To this end, the system will be sensitive to changes in the user's
meta-patterns as it begins to receive new kinds of data from the Recognition
module. Similarly, the Understanding module's learning agents will receive
feedback from both the Applications and Interface modules that will inform changes to
the user model. This feedback may consist of indications of levels of user
satisfaction--whether the user liked or disliked the system's behavior.
This feedback may come either as direct feedback from the user via the
interface, or indirectly by way of inference from how an application was
used (e.g. the way that application X was used and then terminated may
indicate that the user was frustrated with it). These user responses will
help to modify the Understanding module's model of the user and, therefore,
the recommendations for system behavior that the Understanding module makes
to the rest of the system.
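The feedback loop described above might be sketched as a set of per-mood preference scores, nudged toward or away from each system behavior as satisfaction signals arrive. The learning rate, score range, and behavior names below are illustrative assumptions:

```python
class BehaviorModel:
    """Preference weights for system behaviors in each mood, nudged by
    user-satisfaction feedback (direct or inferred). The learning rate
    and the [-1, 1] score range are illustrative assumptions."""

    def __init__(self, learning_rate=0.2):
        self.rate = learning_rate
        self.scores = {}   # (mood, behavior) -> preference in [-1, 1]

    def feedback(self, mood, behavior, satisfied):
        """Move the score toward +1 on approval, -1 on disapproval."""
        key = (mood, behavior)
        target = 1.0 if satisfied else -1.0
        old = self.scores.get(key, 0.0)
        self.scores[key] = old + self.rate * (target - old)

    def recommend(self, mood, behaviors):
        """Suggest the behavior with the best learned score."""
        return max(behaviors,
                   key=lambda b: self.scores.get((mood, b), 0.0))

model = BehaviorModel()
model.feedback("frustrated", "offer_help", satisfied=True)
model.feedback("frustrated", "play_music", satisfied=False)
print(model.recommend("frustrated", ["offer_help", "play_music"]))
# offer_help
```

Direct feedback via the interface and inferred feedback from application usage would both arrive through the same `feedback` call, differing only in how confidently `satisfied` is set.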
- Build and maintain a user-editable taxonomy of user preferences,
for use in specific circumstances when interacting with the user. For example,
instructions not to attempt to communicate with the user while s/he is
extremely agitated, or requests for specific applications during certain
moods, e.g. "Start playing melancholy music when I've been depressed
for x number of hours, and then start playing upbeat music after this duration."
This taxonomy may be eventually incorporated into the user
model; however, ultimately, the user's wishes should be able to
override any modeled preference.
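Such a taxonomy might be stored as a list of user-editable rules, each pairing a condition on the modeled state with an instruction to the system. The field names and the example rules below are illustrative assumptions:

```python
# A minimal sketch of a user-editable preference taxonomy. Each rule
# names a condition on the modeled state and the action to take; the
# user's explicit rules always win over modeled preferences.

preferences = [
    {"when": {"mood": "agitated"},
     "action": "suppress_communication"},
    {"when": {"mood": "depressed", "min_hours": 2},
     "action": "play_melancholy_music"},
]

def applicable_actions(mood, hours_in_mood):
    """Return the actions whose conditions match the current state."""
    actions = []
    for rule in preferences:
        cond = rule["when"]
        if cond["mood"] != mood:
            continue
        if hours_in_mood < cond.get("min_hours", 0):
            continue
        actions.append(rule["action"])
    return actions

print(applicable_actions("depressed", 3))  # ['play_melancholy_music']
print(applicable_actions("agitated", 0))   # ['suppress_communication']
```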
- Feature two-way communication with the system's Recognition
module. Not only will the Recognition module constantly send updates
to the Understanding module, but
the Understanding module will also send messages to the Recognition
module. These messages may include alerting the Recognition module to "look
out" for subsequent emotional responses that the Understanding module's
model of the user's meta-patterns predicts. Other kinds of Understanding-module-to-Recognition-module
messages may include assisting the Recognition module in fine-tuning its
recognition engine by suggesting new combinations of affect response patterns
that the user seems to be displaying. These novel combinations may in turn
reveal patterns in the user's affect that would be beyond the scope
of the Recognition engine to find on its own.
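The two-way traffic described here might be sketched as a small message protocol. The message types ("expect" and "suggest_pattern") and their fields are assumptions, since the text specifies only the two directions of communication:

```python
# Sketch of the Understanding-to-Recognition message traffic. Message
# names and fields are illustrative assumptions.

class RecognitionModule:
    def __init__(self):
        self.watch_list = set()   # emotions to "look out" for next
        self.patterns = set()     # known affect-response combinations

    def receive(self, message):
        if message["type"] == "expect":
            # Understanding predicts this response may come next.
            self.watch_list.add(message["emotion"])
        elif message["type"] == "suggest_pattern":
            # Understanding proposes a new response combination to
            # fine-tune the recognition engine.
            self.patterns.add(message["pattern"])

class UnderstandingModule:
    def __init__(self, recognition):
        self.recognition = recognition

    def predict_next(self, emotion):
        self.recognition.receive({"type": "expect", "emotion": emotion})

    def propose_pattern(self, pattern):
        self.recognition.receive({"type": "suggest_pattern",
                                  "pattern": pattern})

rec = RecognitionModule()
und = UnderstandingModule(rec)
und.predict_next("depression")                 # meta-pattern prediction
und.propose_pattern(("anger", "withdrawal"))   # suggested new combination
```

The constant Recognition-to-Understanding updates flow the other way, into the Understanding module's absorb step described earlier.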
- Eventually build and maintain a more complete model of the user's
behavior. The more accurate a model of the user's cognitive abilities
and processes can be built, the better the system will be at predicting
the user's behavior and providing accurate information to the other modules
within the Affective Computing system. To this end, the field of Artificial
Intelligence may be able to inform such a model. For example, the Affective
Understanding module may be able to communicate with an outboard, independent
system such as Lenat's Cyc (a vast knowledge base that attempts
to encode common-sense knowledge). The Cyc system might be able to
provide a functional model of the user's behavior, rationale, and
motivations to the Affective Understanding module.
- Eventually model the user's context. The more information the
system has about the user's outside environment, the more effective the
interaction will be, and the greater the benefit to the user. A system that
knows that the user is in a conversation with someone else may not wish
to interrupt the user to discuss the user's current affective response.
Similarly, a system that can tell that the user has not slept in several
days, is ill or starving, or is under deadline pressure will certainly be
able to communicate with much more sensitivity to the user. Other research
on context modeling at the Media Lab includes the Smart
Rooms work, such as the Artificial
Life Interactive Video Environment (ALIVE).
- Provide a basis for the generation of synthetic
system affect. A system that can display emotional responses of
its own is a vast, distinct area of research. The Affective Understanding
module described here may be able to inform the design of such systems.
And, once built, such a system could be integrated into the Affective Understanding
module to great effect. For example, a system that is able to display authentic
empathy in its interaction with the user might prove even more effective
in an Active Listening application than did our system that shows
artificial empathy (looks like empathy to the user, but the machine doesn't
really feel anything).
- Ensure confidentiality and security. The Understanding module
will build and maintain a working model and record of the user's emotional
life; eventually, this model may also record other salient, contextual
aspects of the user's life. Therefore, perhaps more so than any other part
of the affective computing system, the affective understanding module will
house information that must be kept confidential. Part of our
policy is to place the highest priority on maintaining the user's
privacy and control over the release of any such information. The Affective
Understanding module must therefore employ strong
security measures, which must be supported by the rest of the system.