
Highlighted Projects:

Affective Response to Haptic Signals
This study examines humans' affective responses to superimposed sinusoidal signals. These signals can be perceived either through sound, in the case of electronically synthesized musical notes, or through vibrotactile stimulation, in the case of vibrations produced by vibrotactile actuators. This study is concerned with the perception of superimposed vibrations, whereby two or more sinusoidal signals are perceived simultaneously, producing a perceptual impression substantially different from that of each signal alone, owing to interactions between the perceived sinusoidal vibrations that give rise to a unified percept of a sinusoidal chord. The theory of interval affect was derived from systematic analyses of Indian, Chinese, Greek, and Arabic music theory and tradition, and proposes a universal organization of affective response to intervals using a multidimensional system. We hypothesize that this interval affect system is multi-modal and will transfer to the vibrotactile domain.

An EEG and Motion-Capture Based Expressive Music Interface for Affective Neurofeedback
This project examines how the expression granted by new musical interfaces can be harnessed to create positive changes in health and wellbeing. We are conducting experiments to measure EEG dynamics and physical movements performed by participants who are using software designed to invite physical and musical expression of the basic emotions. The present demonstration of this system incorporates an expressive gesture sonification system using a Leap Motion device, paired with an ambient music engine controlled by EEG-based affective indices. Our intention is to better understand affective engagement, by creating both a new musical interface to invite it, and a method to measure and monitor it. We are exploring the use of this device and protocol in therapeutic settings in which mood recognition and regulation are a primary goal.

Automated Tongue Analysis
A common practice in Traditional Chinese Medicine (TCM) is visual examination of the patient's tongue. This study will examine ways to make this process more objective and to test its efficacy for understanding stress- and health-related changes in people over time. We start by developing an app that makes it comfortable and easy for people to collect tongue data in daily life together with other stress- and health-related information. We will obtain assessments from expert practitioners of TCM, and also use state-of-the-art pattern analysis and machine learning to create algorithms able to provide better insights for health and the prevention of sickness.

Automatic Stress Recognition in Real-Life Settings
Technologies that automatically recognize stress are extremely important for preventing chronic psychological stress and the pathophysiological risks associated with it. The introduction of comfortable, wearable biosensors has created new opportunities to measure stress in real-life environments, but there is great variability in how people experience stress and how they express it physiologically. In this project, we modify the loss function of Support Vector Machines to encode a person's tendency to feel more or less stressed, and to give more importance to the training samples of the most similar subjects. These changes are validated in a case study in which skin conductance was monitored in nine call center employees during one week of their regular work. Employees in this type of setting usually handle high volumes of calls every day, and they frequently interact with angry and frustrated customers, which leads to high stress levels.
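
A minimal sketch of the general idea, under stated assumptions: rather than reproducing the project's modified SVM loss, it uses scikit-learn's per-sample weights as a stand-in, with a hypothetical subject_similarity function supplying the person-specific weighting.

```python
# Sketch: person-weighted SVM for stress recognition (illustrative only).
# `subject_similarity` is a hypothetical function scoring how similar a
# training subject is to the target subject; the project itself modifies
# the SVM loss directly, which per-sample weighting only approximates.
import numpy as np
from sklearn.svm import SVC

def train_person_weighted_svm(X, y, subject_ids, target_subject, subject_similarity):
    """Weight each training sample by its subject's similarity to the target person."""
    weights = np.array([subject_similarity(s, target_subject) for s in subject_ids])
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X, y, sample_weight=weights)  # scikit-learn's per-sample weights
    return clf

# Hypothetical usage: similarity might be derived from baseline physiology.
# clf = train_person_weighted_svm(X, y, subject_ids, "subject9", subject_similarity)
```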

Autonomic Nervous System Activity in Epilepsy
We are performing long-term measurements of autonomic nervous system (ANS) activity on patients with epilepsy. In certain cases, autonomic symptoms are known to precede seizures. Usually in our data, the autonomic changes start when the seizure shows in the EEG, and they can be measured with a wristband (much easier to wear every day than an EEG). We found that the larger the signal we measure on the wrist, the longer the duration of cortical brain-wave suppression following the seizure. The duration of the latter is a strong candidate for a biomarker for SUDEP (Sudden Unexpected Death in Epilepsy), and we are working with scientists and doctors to better understand this. In addition, bilateral changes in ANS activity may provide valuable information regarding seizure focus localization and semiology.

BioGlass: Physiological Parameter Estimation Using a Head-Mounted Wearable Device
What if you could see what calms you down or increases your stress as you go through your day? What if you could see clearly what is causing these changes for your child or another loved one? People could become better at accurately interpreting and communicating their feelings, and better at understanding the needs of those they love. This work explores the possibility of using sensors embedded in Google Glass, a head-mounted-wearable device, to robustly measure physiological signals of the wearer.

BioInsights: Extracting Personal Data from Wearable Motion Sensors
Wearable devices are increasingly in long-term close contact with the body, giving them the potential to capture sensitive, unexpected, and surprising personal data. For instance, we have recently demonstrated that motion sensors embedded in a head-mounted wearable device like Google Glass can capture the heart rate and respiration rate from subtle motions of the head. We are examining additional signatures of information that can be read from motion sensors in wearable devices: for example, can a person's identity be validated from their subtle physiological motions, especially those related to cardiorespiratory activity? How robust are these motion signatures for identifying a wearer, even across changes in posture, stress, and activity?

BioPhone: Physiology Monitoring from Peripheral Smartphone Motions
The large-scale adoption of smartphones during recent years has created many opportunities to improve health monitoring and care delivery. This project explores whether motion sensors available in off-the-shelf smartphones can capture physiological parameters of a person during stationary postures, even while being carried in a bag or a pocket.

BioWatch: Estimation of Heart and Breathing Rates from Wrist Motions
Most wrist-wearable smart watches and fitness bands include motion sensors; however, their use is limited to estimating physical activities such as tracking the number of steps when walking or jogging. This project explores how we can process subtle motion information from the wrist to measure cardiac and respiratory activity. In particular we study the following research questions: How can we use the currently available motion sensors within wrist-worn devices to accurately estimate heart rate and breathing rate? How do the wrist-worn estimates compare to traditional sensors and to state-of-the-art wearable physiological sensors? Does combining measurements from motion and traditional methods improve performance? How well do the proposed methods perform in daily life situations to provide unobtrusive physiological assessments?
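
A minimal sketch of the kind of processing involved, assuming a generic band-pass-and-periodogram pipeline; the band limits and sampling rate below are illustrative assumptions, not the project's published parameters.

```python
# Sketch: estimating heart and breathing rates from a wrist motion trace.
import numpy as np
from scipy.signal import butter, filtfilt, periodogram

FS = 100.0  # assumed sampling rate (Hz)

def dominant_rate(signal, low_hz, high_hz, fs=FS):
    """Band-pass the signal and return the dominant frequency in cycles/minute."""
    b, a = butter(3, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    freqs, power = periodogram(filtered, fs=fs)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic trace: a 1.2 Hz "cardiac" tone plus a 0.3 Hz "respiratory" tone.
t = np.arange(0, 60, 1 / FS)
sig = (0.02 * np.sin(2 * np.pi * 1.2 * t)
       + 0.004 * np.sin(2 * np.pi * 0.3 * t)
       + 0.005 * np.random.randn(t.size))
print(dominant_rate(sig, 0.7, 2.5))   # ~72 bpm (cardiac band)
print(dominant_rate(sig, 0.13, 0.5))  # ~18 breaths/min (respiratory band)
```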

Building the Just-Right-Challenge in Games and Toys
Working with the LEGO Group and Hasbro, we looked at the emotional experience of playing with games and LEGO bricks. We measured participants' skin conductance as they learned to play with these new toys. By marking the stressful moments we were able to see what moments in learning should be redesigned. Our findings suggest that framing is key: how can we help children recognize their achievements? We also saw how children are excited to take on new responsibilities but are then quickly discouraged when they aren't given the resources to succeed. Our hope for this work is that by using skin conductance sensors, we can help companies better understand the unique perspective of children and build experiences fit for them.

EDA Explorer
Electrodermal Activity (EDA) is a physiological indicator of stress and strong emotion. While an increasing number of wearable devices can collect EDA, analyzing the data to obtain reliable estimates of stress and emotion remains a difficult problem. We have built a graphical tool that allows anyone to upload their EDA data and analyze it. Using a highly accurate machine learning algorithm, we can automatically detect noise within the data. We can also detect skin conductance responses, which are spikes in the signal indicating a "fight or flight" response. Users can visualize these results and download files containing features calculated on the data to be used in their own analysis. Those interested in machine learning can also view and label their data to train a machine learning classifier. We are currently adding active learning, so the site can intelligently select the fewest possible samples for the user to label.
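
For intuition, here is a minimal sketch of SCR peak detection using SciPy; EDA Explorer's actual detectors are trained machine-learning models, which this simple thresholding does not reproduce, and the thresholds below are illustrative assumptions.

```python
# Sketch: detecting skin conductance responses (SCRs) as peaks in an EDA trace.
import numpy as np
from scipy.signal import find_peaks

def detect_scrs(eda, fs, min_amplitude=0.01, min_separation_s=1.0):
    """Return indices of candidate SCR peaks in a skin conductance signal (µS)."""
    peaks, _ = find_peaks(eda,
                          prominence=min_amplitude,             # minimum rise, in µS
                          distance=int(min_separation_s * fs))  # refractory gap
    return peaks

fs = 8.0  # assumed EDA sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
# Synthetic trace: tonic level plus one response-like bump plus noise.
eda = 2.0 + 0.05 * np.exp(-((t - 20) ** 2) / 2) + 0.005 * np.random.randn(t.size)
print(detect_scrs(eda, fs))
```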

Fathom: Probabilistic Graphical Models to Help Mental Health Counselors
We explore advanced machine learning and reflective user interfaces to scale the national Crisis Text Line. We are using state-of-the-art probabilistic graphical topic models and visualizations to help a mental health counselor extract patterns of mental health issues experienced by participants, and bring large-scale data science to understanding the distribution of mental health issues in the United States.
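
As a toy illustration of the class of models involved, the sketch below fits a standard probabilistic topic model (LDA) to a few synthetic, counseling-flavored snippets with scikit-learn; the project's actual graphical models and visualizations are more sophisticated, and no real crisis-line data appears here.

```python
# Sketch: topic extraction with LDA, a standard probabilistic topic model.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [  # synthetic stand-in texts, not real counseling data
    "feeling anxious about school and exams",
    "fight with family, feeling alone at home",
    "exams are stressful and I cannot sleep",
    "alone again after an argument with family",
]
vectorizer = CountVectorizer(stop_words="english").fit(docs)
X = vectorizer.transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")  # most probable words per topic
```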

FEEL: A Cloud System for Frequent Event and Biophysiological Signal Labeling
The wide availability of low-cost, wearable, biophysiological sensors enables us to measure how the environment and our experiences impact our physiology. This creates a new challenge: in order to interpret the collected longitudinal data, we require the matching contextual information as well. Collecting weeks, months, and years of continuous biophysiological data makes it infeasible to rely solely on our memory to provide that contextual information. Many view maintaining journals as burdensome, which may result in low compliance and unusable data. We present an architecture and implementation of a system for the acquisition, processing, and visualization of biophysiological signals and contextual information.

Got Sleep?
Got Sleep? is an Android application that helps people become aware of their sleep-related behavioral patterns and offers tips on how to change their behaviors to improve their sleep. The application evaluates people's sleep habits before they start using the app, tracks day and night behaviors, and provides feedback about what kinds of behavior changes they should make and whether improvement has been achieved.

IDA: Inexpensive Networked Digital Stethoscope
Complex and expensive medical devices are mainly used in medical facilities by health professionals. IDA is an attempt to disrupt this paradigm and introduce a new type of device: easy to use, low cost, and open source. It is a digital stethoscope that can be connected to the Internet for streaming physiological data to remote clinicians. Designed to be fabricated anywhere in the world with minimal equipment, it can be operated by individuals without medical training.

Large-Scale Pulse Analysis
This study aims to bring objective measurement to the multiple "pulse" and "pulse-like" measures made by practitioners of Traditional Chinese Medicine (TCM). The measures are traditionally made by manually palpating the patient's inner wrist in multiple places, and relating the sensed responses to various medical conditions. Our project brings several new kinds of objective measurement to this practice, compares their efficacy, and examines the connection of the measured data to various other measures of health and stress. Our approach includes the possibility of building a smartwatch application that can analyze stress and health information from the point of view of TCM.

Lensing: Cardiolinguistics for Atypical Angina
Conversations between two individuals, whether between doctor and patient, mental health therapist and client, or two people romantically involved with each other, are complex. Each participant contributes to the conversation using her or his own "lens." This project involves advanced probabilistic graphical models to statistically extract and model these dual lenses across large datasets of real-world conversations, with applications that can improve crisis and psychotherapy counseling and patient-cardiologist consultations. We're working with top psychologists, cardiologists, and crisis counseling centers in the United States.

Mapping the Stress of Medical Visits
Receiving a shot or discussing health problems can be stressful, but it does not always have to be. We measure participants' skin conductance as they use medical devices or visit hospitals and note the times when stress occurs. We then prototype possible solutions and record how the emotional experience changes. We hope work like this will help bring the medical community closer to its customers.

Measuring Arousal During Therapy for Children with Autism and ADHD
Physiological arousal is an important part of occupational therapy for children with autism and ADHD, but therapists do not have a way to objectively measure how therapy affects arousal. We hypothesize that when children participate in guided activities within an occupational therapy setting, informative changes in electrodermal activity (EDA) can be detected using iCalm. iCalm is a small, wireless sensor that measures EDA and motion, worn on the wrist or above the ankle. Statistical analysis describing how equipment affects EDA was inconclusive, suggesting that many factors play a role in how a child's EDA changes. Case studies provided examples of how occupational therapy affected children's EDA. This is the first study of the effects of occupational therapy's in situ activities using continuous physiological measures. The results suggest that careful case study analyses of the relation between therapeutic activities and physiological arousal may inform clinical practice.

Mobile Health Interventions for Drug Addiction and PTSD
We are developing a mobile phone-based platform to assist people with chronic diseases, panic-anxiety disorders, or addictions. Making use of wearable, wireless biosensors, the mobile phone uses pattern analysis and machine learning algorithms to detect specific physiological states and perform automatic interventions in the form of text/images plus sound files and social networking elements. We are currently working with the Veterans Administration drug rehabilitation program involving veterans with PTSD.

Mobisensus: Predicting Your Stress/Mood from Mobile Sensor Data
Can we recognize stress, mood, and health conditions from wearable sensors and mobile-phone usage data? We analyze long-term, multimodal physiological, behavioral, and social data (electrodermal activity, skin temperature, accelerometer, phone usage, social network patterns) collected in daily life with wearable sensors and mobile phones to extract biomarkers related to health conditions, interpret inter-individual differences, and develop systems to keep people healthy.

Modulating Peripheral and Cortical Arousal Using a Musical Motor Response Task
We are conducting EEG studies to identify the musical features and musical interaction patterns that universally impact measures of arousal. We hypothesize that we can induce states of high and low arousal using electrodermal activity (EDA) biofeedback, and that these states will produce correlated differences in concurrently recorded skin conductance and EEG data, establishing a connection between peripherally recorded physiological arousal and cortical arousal as revealed in EEG. We also hypothesize that manipulation of musical features of a computer-generated musical stimulus track will produce changes in peripheral and cortical arousal. These musical stimuli and programmed interactions may be incorporated into music technology therapy, designed to reduce arousal or increase learning capability by increasing attention. We aim to provide a framework for the neural basis of emotion-cognition integration in learning, which may shed light on education and suggest applications that improve learning through emotion regulation.

Objective Assessment of Depression and Its Improvement
Current methods to assess depression and ultimately select an appropriate treatment have many limitations. They are usually based on clinician-rated scales developed in the 1960s, whose main drawbacks are lack of objectivity, being symptom-based rather than preventative, and requiring accurate communication. This work explores new technology to assess depression, including its increase or decrease, in an automatic, more objective, pre-symptomatic, and cost-effective way, using wearable sensors and smartphones for 24/7 monitoring of personal parameters such as physiological data, voice characteristics, sleep, and social interaction. We aim to enable early diagnosis and prevention of depression, assessment of depression for people who cannot communicate, better assignment of treatment, early detection of treatment remission and response, and anticipation of post-treatment relapse or recovery.

Panoply
Panoply is a crowdsourcing application for mental health and emotional wellbeing. The platform offers a novel approach to computer-based psychotherapy, one that is optimized for accessibility, engagement, and therapeutic efficacy. A three-week randomized-controlled trial with 166 participants compared Panoply to an active control task (online expressive writing). Panoply conferred greater or equal benefits for nearly every therapeutic outcome measure. Panoply also significantly outperformed the control task on all measures of engagement.

PongCam
PongCam is a wellbeing project that enables Media Lab ping pong players to save videos of their best ping pong shots on YouTube. The device constantly captures footage of the ping pong table, storing the most recent footage in a buffer. After a good shot, a player can hit a big red button and the last 30 seconds of footage will be uploaded to the PongCam highlights reel on YouTube. We observe how devices of this sort promote mental and physical wellbeing in the Lab.

Predicting Students' Wellbeing from Physiology, Phone, Mobility, and Behavioral Data
The goal of this project is to apply machine learning methods to model the wellbeing of MIT undergraduate students. Extensive data is obtained from the SNAPSHOT study, which monitors students on a 24/7 basis, collecting their location, smartphone logs, sleep schedule, phone and SMS communications, academics, social networks, and even physiological markers like skin conductance, skin temperature, and acceleration. We extract features from this data and apply a variety of machine learning algorithms including Multiple Kernel Learning, Gaussian Mixture Models, and Transfer Learning, among others. Interesting findings include: when participants visit novel locations they tend to be happier; when they use their phones or stay indoors for long periods they tend to be unhappy; and when several dimensions of wellbeing (including stress, happiness, health, and energy) are learned together, classification accuracy improves.
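
As one concrete illustration of the multimodal modeling involved, the sketch below combines kernels from two stand-in feature modalities with a fixed weight, in the spirit of Multiple Kernel Learning; a real MKL method would learn the kernel weights, and all data here is synthetic.

```python
# Sketch: fixed-weight multiple-kernel combination over two modalities
# (e.g., physiology and phone usage), trained with a precomputed-kernel SVM.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_phys = rng.normal(size=(100, 5))    # stand-in physiological features
X_phone = rng.normal(size=(100, 3))   # stand-in phone-usage features
y = rng.integers(0, 2, size=100)      # stand-in happy/unhappy labels

w = 0.6  # fixed modality weight; MKL would learn this from data
K = w * rbf_kernel(X_phys) + (1 - w) * rbf_kernel(X_phone)

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))  # training accuracy on the synthetic data
```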

Real-Time Assessment of Suicidal Thoughts and Behaviors
Depression correlated with anxiety is one of the key factors leading to suicidal behavior, which is among the leading causes of death worldwide. Despite the scope and seriousness of suicidal thoughts and behaviors, we know surprisingly little about what suicidal thoughts look like in nature (e.g., How frequent, intense, and persistent are they among those who have them? What cognitive, affective/physiological, behavioral, and social factors trigger their occurrence?). The reason for this lack of information is that researchers have historically used retrospective self-report to measure suicidal thoughts, and have lacked the tools to measure them as they naturally occur. In this work we explore the use of wearable devices and smartphones to identify behavioral, affective, and physiological predictors of suicidal thoughts and behaviors.

SmileTracker
SmileTracker is a system designed to capture naturally occurring instances of positive emotion during the course of normal interaction with a computer. A facial expression recognition algorithm is applied to images captured with the user's webcam. When the user smiles, both a photo and a screenshot are recorded and saved to the user's profile for later review. Based on positive psychology research, we hypothesize that the act of reviewing content that led to smiles will improve positive affect, and consequently, overall wellbeing.

In this project, we apply what we have learned from the SNAPSHOT study to the problem of changing behavior. We explore the design of user-centered tools that can harness the experience of collecting and reflecting on personal data to promote healthy behaviors, including stress management and sleep regularity. We will draw on commonly used theories of behavior change as the inspiration for distinct conceptual designs for a behavior change application based on the SNAPSHOT study. This approach will enable us to compare the types of visualization strategies that are most meaningful and useful for acting on each theory.

SNAPSHOT Study
The SNAPSHOT study seeks to measure Sleep, Networks, Affect, Performance, Stress, and Health using Objective Techniques. It is an NIH-funded collaborative research project between the Affective Computing group, the Macro Connections group, and Harvard Medical School's Brigham & Women's Hospital. Since fall 2013, we have been running this study each semester to collect one month of data from 50 socially connected MIT undergraduate students. We have collected data from about 170 participants, totaling over 5,000 days of data. We measure physiological, behavioral, environmental, and social data using mobile phones, wearable sensors, surveys, and lab studies. We investigate how daily behaviors and social connectivity influence sleep behaviors, health, and outcomes such as mood, stress, and academic performance. Using this multimodal data, we are developing models to predict onsets of sadness and stress. This study will provide insights into behavioral choices for wellbeing and performance.

StoryScape
Stories, language, and art are at the heart of StoryScape. While StoryScape began as a tool to meet the challenging language-learning needs of children diagnosed with autism, it has become much more. StoryScape was created to be the first truly open and customizable platform for creating animated, interactive storybooks that can interact with the physical world. An Android app is available for making your own stories.

The Challenge
Individuals who work in sedentary occupations are at increased risk of a number of serious health consequences. This project involves both a tool and an experiment aimed at decreasing sedentary activity and promoting social connections among members of the MIT Media Lab. Our system will ask participants to sign up for short physical challenges (ping pong, foosball, walking) and pair them with a partner to perform the activity. Participants' overall activity levels will be monitored with an activity tracker during the course of the study to assess the effectiveness of the system.

Tributary
The proliferation of smartphones and wearable sensors is creating very large data sets that may contain useful information. However, the magnitude of generated data creates new challenges as well. Processing and analyzing these large data sets in an efficient manner requires computational tools. Many of the traditional analytics tools are not optimized for dealing with large datasets. Tributary is a parallel engine for searching and analyzing sensor data. The system utilizes large clusters of commodity machines to enable in-memory processing of sensor time-series signals, making it possible to search through billions of samples in seconds. Users can access a rich library of statistics and digital signal processing functions or write their own in a variety of languages.

Unlocking Sleep
Despite a vast body of knowledge about the importance of sleep, our daily schedules are often planned around work and social events, not healthy sleep. While we're prompted throughout the day by devices and people to plan and think about our schedules in terms of things to do, sleep is rarely considered until we're tired and it's late. This project proposes a way that our everyday use of technology can help improve sleep habits. Smartphone unlock screens are an unobtrusive way of prompting user reflection throughout the day by posing "microquestions" as users unlock their phones. The questions are easily answered with a single swipe. Since we unlock our phones 50 to 200 times per day, microquestions can collect information with minimal intrusiveness to the user's daily life. Can these swipe-questions help users mentally plan their day around sleep, and trigger healthier sleep behaviors?

Valinor: Mathematical Models to Understand and Predict Self-Harm
We are developing statistical tools for understanding, modeling, and predicting self-harm by using advanced probabilistic graphical models and fail-soft machine learning in collaboration with Harvard University and Microsoft Research.

Wavelet-Based Motion Artifact Removal for Electrodermal Activity
Electrodermal activity (EDA) recording is a powerful, widely used tool for monitoring psychological or physiological arousal. However, analysis of EDA is hampered by its sensitivity to motion artifacts. We propose a method for removing motion artifacts from EDA, measured as skin conductance (SC), using a stationary wavelet transform (SWT). We modeled the wavelet coefficients as a Gaussian mixture distribution corresponding to the underlying skin conductance level (SCL) and skin conductance responses (SCRs). The goodness-of-fit of the model was validated on ambulatory SC data. We evaluated the proposed method in comparison with three previous approaches. Our method achieved a greater reduction of artifacts while retaining motion-artifact-free data.
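
A simplified sketch of an SWT-based pipeline, assuming PyWavelets: instead of the Gaussian mixture model over wavelet coefficients described above, it merely thresholds large detail coefficients, which conveys the flavor but not the published method.

```python
# Sketch: stationary-wavelet-transform artifact suppression for skin conductance.
import numpy as np
import pywt

def suppress_artifacts(sc, wavelet="haar", level=3, k=3.0):
    """Zero out detail coefficients more than k robust-SDs from the median."""
    coeffs = pywt.swt(sc, wavelet, level=level)  # len(sc) must divide by 2**level
    cleaned = []
    for cA, cD in coeffs:
        sigma = 1.4826 * np.median(np.abs(cD - np.median(cD)))  # robust scale
        cD = np.where(np.abs(cD) > k * sigma, 0.0, cD)          # crude threshold
        cleaned.append((cA, cD))
    return pywt.iswt(cleaned, wavelet)

sc = np.cumsum(0.01 * np.random.randn(512)) + 5.0  # synthetic SC drift (µS)
sc[200:205] += 2.0                                 # injected motion artifact
print(np.max(np.abs(suppress_artifacts(sc) - sc)))  # largest correction applied
```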

Prior Projects:

AboutFace
AboutFace is a user-dependent system that is able to learn patterns and discriminate the different facial movements characterizing confusion and interest. The system uses a piezoelectric sensor to detect eyebrow movements and begins with a training session to calibrate the unique values for each user. After the training session, the system uses these levels to develop an expression profile for the individual user. The system has many potential uses, ranging from computer and video-mediated conversations to interactions with computer agents. This system is an alternative to using camera-based computer vision analysis to detect faces and expressions. Additionally, when communicating with other people, users of this system also have the option of conveying their expressions anonymously by wearing a pair of glasses that conceals their expressions and the sensing device.

Adaptive, Wireless, Signal Detection and Decoding
In this project, we propose a new Bayesian receiver for signal detection in flat-fading channels. First, the detection problem is formulated as an inference problem in a hybrid dynamic system that has both continuous and discrete variables. Then, an expectation propagation algorithm is proposed to address the inference problem. As an extension of belief propagation, expectation propagation efficiently approximates a Bayesian estimation by iteratively propagating information between different nodes in the dynamic system and projecting exact messages into the exponential family. Compared to sequential Monte Carlo filters and smoothers, the new method has much lower computational complexity since it makes analytically deterministic approximations instead of Monte Carlo approximations. Our simulations demonstrate that the new receiver achieves accurate detection without the aid of any training symbols or decision feedback. Future work involves joint decoding and channel estimation, where convolutional codes are used to protect signals from noise corruption. Initial results are promising.
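
For orientation, the generic update that expectation propagation performs can be stated as follows (this is the standard textbook form of EP, not the project's receiver-specific derivation; here f_i are the exact factors and F is the chosen exponential family):

```latex
% Core EP step: replace factor f_i with an exponential-family approximation
% \tilde{f}_i by moment-matching the "tilted" distribution.
\begin{align*}
  q^{\backslash i}(\theta) &\propto \frac{q(\theta)}{\tilde{f}_i(\theta)}
    && \text{(cavity: remove the current approximation of factor } i\text{)} \\
  q^{\mathrm{new}}(\theta) &= \arg\min_{q' \in \mathcal{F}}
    \mathrm{KL}\!\left( \tfrac{1}{Z_i}\, f_i(\theta)\, q^{\backslash i}(\theta)
    \,\middle\|\, q'(\theta) \right)
    && \text{(project onto the exponential family } \mathcal{F}\text{)} \\
  \tilde{f}_i(\theta) &\propto \frac{q^{\mathrm{new}}(\theta)}{q^{\backslash i}(\theta)}
    && \text{(update the factor approximation)}
\end{align*}
```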

Affect as Index
Affect as Index is a tool that takes group physiological data as input, aggregates it across different demographic dimensions, and attaches it to media content. Users can review videotaped or prerecorded events by clicking on points of interest in a physiological graph. This software addresses two challenges: the difficulty of expressing and sharing emotions with others, and the laborious task of monitoring interpersonal interactions within natural settings. For the former, groups interested in discussing shared and dissimilar emotions evoked during experiences can use this tool to place context around their dialogue. For the latter, "meaningful moments" observed within natural interactions can be marked and superimposed on the physiological data collected. In this way, affect and observations of affect can be used to index group-level significant moments that occur within volumes of video data.

Affect in Speech: Assembling a Database
The aim of this project is to build a database of natural speech showing a range of affective variability. It is an extension of our ongoing research focused on building models for automatic detection of affect in speech. At a very basic level, training such systems requires a large corpus of speech containing a range of emotional vocal variation. A traditional approach to this research has been to assemble databases where actors have provided the affective variation on demand. However, this method often results in unnatural-sounding speech and/or exaggerated expressions. We have developed a prototype of an interactive system that guides a user through a question and answer session. Without any rehearsals or scripts, the user navigates, through touch and spoken language, an interface guided by embodied conversational agents that prompt the user to speak about an emotional experience. Some of the issues we are addressing include the design of the text and character behavior (including speech and gesture) so as to obtain a convincing and disclosing interaction with the user.

Affective Carpet
The "Affective Carpet" is a soft, deformable surface made of cloth and foam, which detects continuous pressure with excellent sensitivity and resolution. It is being used as an interface for projects in affective expression, including as a controller to measure a musical performer's direction and intensity in leaning and weight-shifting patterns.

Affective Mirror
The Affective Mirror is an attempt to build a fully automated system that intelligently responds to a person's affective state in real time. Current work is focused on building an agent that realistically mirrors a person's facial expression and posture. The agent detects affective cues through a facial-feature tracker and a posture-recognition system developed in the Affective Computing group; based on what affect a person is displaying, such as interest, boredom, frustration, or confusion, the system responds with matching facial affect and/or posture. This project is designed to be integrated into the Learning Companion Project, as part of an early phase of showing rapport-building behaviors between the computer agent and the human learner.

Affective Social Quest
ASQ investigates ways to teach social-emotion skills to children interactively with toys. One of the first goals is to help autistic children recognize expressions of emotion in social situations. The system uses four "dwarfs" expressing sadness, happiness, surprise, and anger; each communicates wirelessly with the system, which detects which plush doll the child has selected. The computer plays short, entertaining video clips displaying examples of the four emotions and cues the child to pick the dwarf that most closely matches the displayed emotion. Future work includes improving the ability of the system to recognize direct displays of emotion by the child.

Affective Tangibles
People naturally express frustration through the use of their motor skills. The purpose of the Affective Tangibles project is to develop physical objects that can be grasped, squeezed, thrown, or otherwise manipulated via a natural display of affect. Constructed tangibles include a PressureMouse, affective pinwheels that are mapped to skin conductance, and a voodoo doll that can be shaken to express frustration. We have found that people often increase the intensity of muscle movements when experiencing frustrating interactions.

Affective Tigger
The Affective Tigger is a plush toy designed to recognize and react to certain emotional behaviors of its playmate. For example, the toy enters a state of "happy," moving its ears upward and emitting a happy vocalization, when it recognizes that the child has postured the toy upright and is bouncing it along the floor. Tigger has five such states, each involving recognizing and responding with an emotional behavior. The resulting behavior allows Tigger to serve as an affective mirror for the child's expression. This work involved designing the toy and evaluating play sessions with dozens of children. The toy was shown to successfully communicate some aspects of emotion, and to prompt behaviors that are interesting to researchers trying to learn about the development of human emotional skills such as empathy.

Affective-Cognitive Framework for Machine Learning and Decision Making
Recent findings in affective neuroscience and psychology indicate that human affect and emotional experience play a significant and useful role in human learning and decision-making. Most machine-learning and decision-making models, however, are based on old, purely cognitive models, and are slow, brittle, and awkward to adapt. We aim to redress many of these classic problems by developing new models that integrate affect with cognition. Ultimately, such improvements will allow machines to make smarter and more human-like decisions for better human-machine interaction.

Affective-Cognitive Product Evaluation and Prediction of Customer Decisions
Companies would like more new products to be successful in the marketplace, but current evaluation methods such as focus groups do not accurately predict customer decisions. We are developing new technology-assisted methods to try to improve the customer-evaluation process and better predict customer decisions. The new methods involve multi-modal affective measures (such as facial expression and skin conductance) together with behavioral measures, anticipatory-motivational measures, and self-report cognitive measures. These measures are combined into a novel computational model, the form of which is motivated by findings in affective neuroscience and human behavior. The model is being trained and tested with customer product evaluations and marketplace outcomes from real product launches.

AffQuake
AffQuake is an attempt to incorporate signals that relate to a player's affect into id Software's Quake II in a way that alters game play. Several modifications have been made that cause the player's avatar within Quake to alter its behaviors depending upon one of these signals. In StartleQuake, when a player becomes startled, his or her avatar also becomes startled and jumps back. Quake also changes the size of the player's avatar in relation to the user's response, representing player excitement by average skin conductance level and growing the avatar's size when this level is high. A taller avatar means the player can see further; however, it also makes him or her an easier target.

Ambient Displays for Social Support and Diabetes Management
We design and evaluate an ambient blood glucose level visualization and feedback system that uses an Ambient Orb for diabetes self-care and social support. The social support is provided by a friend or family member of an individual with diabetes. This research study was carried out with adult patients at Joslin Diabetes Center.

Analysis of Autonomic Sleep Patterns
We are examining autonomic sleep patterns using a wrist-worn biosensor that enables comfortable measurement of skin conductance, skin temperature, and motion. The skin conductance reflects sympathetic arousal. We are looking at sleep patterns in healthy groups, in groups with autism, and in groups with sleep disorders. We are looking especially at sleep quality and at performance on learning and memory tasks.

Auditory Desensitization Games
Persons on the autism spectrum often report hypersensitivity to sound. Efforts have been made to manage this condition, but there is wide room for improvement. One approach—exposure therapy—has promise, and a recent study showed that it helped several individuals diagnosed with autism overcome their sound sensitivities. In this project, we borrow principles from exposure therapy, and use fun, engaging games to help individuals gradually get used to sounds that they might ordinarily find frightening or painful.

Automatic Facial Expression Analysis
Recognizing non-verbal cues, which constitute a large percentage of our communication, is a prime facet of building emotionally intelligent systems. Facial expressions and movements such as a smile or a nod are used either to fulfill a semantic function, to communicate emotions, or as conversational cues. We are developing an automatic tool, using computer vision and various machine-learning techniques, that can detect the different facial movements and head gestures of people while they interact naturally with the computer. Past work on this project determined techniques to track upper facial features (eyes and eyebrows) and detect facial actions corresponding to those features (eyes squinting or widening, eyebrows raised). The ongoing project is expanding its scope to track and detect facial actions corresponding to the lower facial features. Further, we hope to integrate the facial expression analysis module with other sensors developed by the Affective Computing group to reliably detect and recognize different emotions.

Autonomic Nervous System Activity in Sleep
We are characterizing changes in autonomic nervous system (ANS) activity during sleep. This can potentially provide insight into circadian rhythms as well as identification of various sleep stages. Furthermore, we are investigating differences between ANS activity in neurotypicals and in people with sleep disorders or electrical status epilepticus of sleep (ESES).

Bayesian Spectral Estimation
This project developed efficient versions of Bayesian techniques for a variety of inference problems, including curve fitting, mixture-density estimation, principal-components analysis (PCA), automatic relevance determination, and spectral analysis. One of the surprising methods that resulted is a new Bayesian spectral analysis tool for nonstationary and unevenly sampled signals, such as electrocardiogram (EKG) signals, where there is a sample with each new (irregularly spaced) R wave. The new method outperforms other methods such as Burg, MUSIC, and Welch, and compares favorably to the multitaper method without requiring any windowing. The ability to use unevenly spaced data helps avoid problems with aliasing. The method runs in real time on either evenly or unevenly sampled data.
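
For comparison, the classical Lomb-Scargle periodogram is a standard baseline for spectra of unevenly sampled data; the sketch below shows the kind of input and output involved, and is not the group's Bayesian estimator.

```python
# Sketch: spectral estimate of an unevenly sampled signal (e.g., an R-R
# interval series) using SciPy's Lomb-Scargle periodogram.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 60, 300))  # irregular sample times (s)
y = np.sin(2 * np.pi * 0.25 * t) + 0.3 * rng.normal(size=t.size)

freqs_hz = np.linspace(0.01, 1.0, 500)
power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs_hz)  # expects rad/s
print(f"peak at {freqs_hz[np.argmax(power)]:.2f} Hz")       # ~0.25 Hz
```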

BioMod
BioMod is an integrated interface for users of mobile and wearable devices, monitoring various physiological signals such as the electrocardiogram, with the intention of providing useful and comfortable feedback about medically important information. The first version of this system includes new software for monitoring stress and its impact on heart functioning, and the ability to wirelessly communicate this information over a Motorola cell phone. One application under development is the monitoring of stress in patients who desire to stop smoking: the system will alert an "on-call" trained behavior-change assistant when the smoker is exhibiting physiological patterns indicative of stress or likely relapse, offering an opportunity for encouraging intervention at a point of weakness. Challenges in this project include the development of an interface that is easy and efficient to use on the go, is sensitive to user feelings about the nature of the information being communicated, and accurately recognizes the patterns of physiological signals related to the conditions of interest.

Car Phone Stress
We are building a system that can watch for certain signs of stress in drivers, specifically stress related to talking on the car phone, as may be caused by increased mental workload. To gather data for training and testing our system, subjects were asked to 'drive' in a simulator past several curves while keeping their speed close to a predetermined constant value. In some cases they were simultaneously asked to listen to random numbers from speech-synthesis software and to perform simple mathematical tasks over a telephone headset. Several measures drawn from the subjects' driving behavior were examined as possible indicators of the subjects' performance and of their mental workload. When subjects were instructed (by a visible sign) to brake, most braked within 0.7-1.4 seconds after the sign came into view. However, in a significant number of incidents, subjects never braked or braked 1.5-3.5 seconds after the message; almost all of these incidents occurred when subjects were on the phone. On average, we found that drivers on the phone braked 10 percent more slowly than when not on the phone; additionally, the variance in their braking time was four times higher, suggesting that although delayed driver reactions were infrequent, when delays happened they could be large and potentially dangerous. Furthermore, their infrequency could create a false sense of security. In future experiments, subjects' physiological data will be analyzed jointly with measures of workload, stress, and performance.

Cardiac PAF Detection and Prediction
PAF (paroxysmal atrial fibrillation) is a dangerous form of cardiac arrhythmia that poses severe health risks, sometimes leading to heart attacks, the recognized number-one killer in the developed world. The technical challenges for detecting and predicting PAF include accurate sensing, speedy analysis, and a workable classification system. To address these issues, electrocardiogram (ECG) data from the PhysioNet Online Database will be analyzed using new spectrum estimation techniques to develop a program able to predict, as well as recognize, the onset of specific cardiac arrhythmias such as PAF. The system could then be incorporated into wearable/mobile medical devices, allowing for interventions before cardiac episodes occur, and potentially saving many lives.

Cardiocam
Cardiocam is a low-cost, non-contact technology for measurement of physiological signals such as heart rate and breathing rate using a basic digital imaging device such as a webcam. The ability to perform remote measurements of vital signs is promising for enhancing the delivery of primary healthcare.
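
A minimal sketch of the idea: the published Cardiocam work separates pulse signals from all three color channels, while this simplified version estimates pulse rate from the green-channel trace of a face region alone; all parameters below are illustrative assumptions.

```python
# Sketch: contact-free pulse estimation from per-frame mean green values
# of a webcam face region (simplified single-channel variant).
import numpy as np
from scipy.signal import butter, filtfilt, periodogram

def heart_rate_from_green_trace(green_means, fps):
    """Estimate pulse rate (bpm) from mean green intensities of a face ROI."""
    x = green_means - np.mean(green_means)
    b, a = butter(3, [0.7 / (fps / 2), 3.0 / (fps / 2)], btype="band")  # 42-180 bpm
    x = filtfilt(b, a, x)
    freqs, power = periodogram(x, fs=fps)
    return 60.0 * freqs[np.argmax(power)]

fps = 30.0
t = np.arange(0, 30, 1 / fps)
trace = 0.5 * np.sin(2 * np.pi * 1.1 * t) + 0.2 * np.random.randn(t.size)  # synthetic
print(heart_rate_from_green_trace(trace, fps))  # ~66 bpm
```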

Causal Learning and Autism
In collaboration with the Early Childhood Cognition Center at MIT BCS, we are developing sensor-enabled toys and infant affect sensors with the goal of understanding how children on the autism spectrum use patterns of evidence to learn causal relationships, and the extent to which this is state-dependent. We investigate in what respects, if any, their causal learning differs from that of typically developing children. The results of this research will inform the design of new object-based technologies for language and communication learning.

Conductive Chat
While instant messaging clients are frequently and widely used for interpersonal communication, they lack the richness of face-to-face conversations. Without the benefit of eye contact and other non-verbal "back-channel feedback," text-based chat users frequently resort to typing "emoticons" and extraneous punctuation in an attempt to incorporate contextual affect information in the text communication. Conductive Chat is an instant messenger client that integrates users' changing skin conductivity levels into their typewritten dialogue. Skin conductivity level (also referred to as galvanic skin response) is frequently used as a measure of emotional arousal, and high levels are correlated with cognitive states such as high stress, excitement, and attentiveness. On an expressive level, Conductive Chat communicates information about each user's arousal in a consistent, intuitive manner, without needing explicit controls or explanations. On a communication-theory level, this new communication channel allows for more "media rich" conversations without requiring more work from the users.

Customer Measurement Using Bluetooth
We are exploring innovative use of cell-phone Bluetooth technologies for consumer research and customer measurement. We have developed a small, portable, Bluetooth base station that can monitor consumer activity in a retail space and also enable new interactive services. This Bluetooth hub also serves as a network gateway for other wireless sensors in the local area.

Customized Computer-Mediated Interventions
Individuals diagnosed with autism spectrum disorder (ASD) often have intense, focused interests. These interests, when harnessed properly, can help motivate an individual to persist in a task that might otherwise be too challenging or bothersome. For example, past research has shown that embedding focused interests into educational curricula can increase task adherence and task performance in individuals with ASD. However, providing this degree of customization is often time-consuming and costly and, in the case of computer-mediated interventions, high-level computer-programming skills are often required. We have recently designed new software to solve this problem. Specifically, we have built an algorithm that will: (1) retrieve user-specified images from the Google database; (2) strip them of their background; and (3) embed them seamlessly into Flash-based computer programs.

Detection and Analysis of Driver Stress
Driving is an ideal test bed for detecting stress in natural situations. Four types of physiological signals (electrocardiogram, electromyogram, respiration, and skin conductivity related to autonomic nervous system activation) were collected in a natural driving situation under various driving conditions. The occurrence of natural stressors was designed into the driving task and validated using driver self-report, real-time third-party observations, and independently coded video records of road conditions and facial expression. Features reflecting heart-rate variability derived from adaptive Bayesian spectrum estimation, the rate of skin-conductivity orienting responses, and the spectral characteristics of respiration were extracted from the data. Initial pattern-recognition results show separation for the three types of driving states: rest, city, and highway, and some discrimination within states for cases in which the state precedes or follows a difficult turn-around or toll situation. These results yielded 89-96 percent accuracy in recognizing stress level. We are currently investigating new, advanced means of modeling the driver data.
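
For concreteness, the sketch below computes a few features of the kind described above, time-domain heart-rate variability from R-R intervals and a skin-conductivity orienting-response rate, under illustrative parameter choices that are not the study's exact feature set.

```python
# Sketch: simple driver-stress features from physiological signals.
import numpy as np
from scipy.signal import find_peaks

def hrv_features(rr_ms):
    """Basic time-domain HRV features from R-R intervals in milliseconds."""
    diffs = np.diff(rr_ms)
    return {
        "mean_hr_bpm": 60000.0 / np.mean(rr_ms),
        "sdnn_ms": np.std(rr_ms),                  # overall variability
        "rmssd_ms": np.sqrt(np.mean(diffs ** 2)),  # short-term variability
    }

def scr_rate_per_min(eda, fs, prominence=0.02):
    """Rate of skin-conductivity orienting responses (peaks per minute)."""
    peaks, _ = find_peaks(eda, prominence=prominence)
    return len(peaks) / (len(eda) / fs / 60.0)

rr = 800 + 40 * np.random.randn(120)        # synthetic R-R intervals (ms)
eda = 2.0 + 0.05 * np.random.randn(2400)    # synthetic 5-min EDA trace at 8 Hz
print(hrv_features(rr))
print(scr_rate_per_min(eda, fs=8.0))
```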

Digging into Brand Perception with Psychophysiology
What do customers really think about your company or brand? Using skin conductance sensors, we measure what excites and frustrates customers when discussing topics relevant to your brand. For example, with the National Campaign to Prevent Teenage Pregnancy, we saw conversations about empowerment and abortion upset conservative families. However, talking about the importance of strong families excited and engaged them. Rather than rely on self-reports, physiological measurements allow us to pinpoint the words and concepts that affect your customers. We hope work like this will help companies better reflect on how their actions and messaging affect their customers' opinions in more detailed and accurate ways.

EmoteMail
EmoteMail is an email client that is augmented to convey aspects of the writer's state during composition to the recipient. The client captures facial expressions and typing speed and introduces them as design elements. These contextual cues provide extra information that can help the recipient decode the tone of the email. Moreover, the contextual information is gathered and automatically embedded as the sender composes the email, allowing an additional channel of expression.

Emotion and Memory
Have you ever wondered what makes an ad memorable? We have performed a comprehensive review of the literature concerning advertising, memory, and emotion. A summary of the results is available.

Emotion Bottles
The Emotion Bottles are tangibly enticing objects that embody three emotions: angry, happy, and sad. When a bottle is opened, a vocal output is generated as if the emotion that was stored within the bottle is released. The bottles are placed near each other and represent a person in three possible emotional states. Varying degrees of these emotions are "bottled up" inside. The three bottles were chosen to maintain the simplicity of exploring the combination of distinct emotional states (eight possibilities). While not completely representative of the possible emotional state of a person, the bottles explore the interface in accessing emotions, the interaction between conflicting emotions, and the meaning of transition between clear emotional states as a person empathizes with or projects their feelings onto the bottles.

Emotion Communication in Autism
People who have difficulty communicating verbally (such as many people with autism) sometimes send nonverbal messages that do not match what is happening inside them. For example, a child might appear calm and receptive to learning—but have a heart rate over 120 bpm and be about to meltdown or shutdown. This mismatch can lead to misunderstandings such as "he became aggressive for no reason." We are creating new technologies to address this fundamental communication problem and enable the first long-term, ultra-dense longitudinal data analysis of emotion-related physiological signals. We hope to equip individuals with personalized tools to understand the influences of their physiological state on their own behavior (e.g., "which state helps me best maintain my attention and focus for learning?"). Data from daily life will also advance basic scientific understanding of the role of autonomic nervous system regulation in autism.

Emotion Prototyping: Redesigning the Customer Experience
You can test whether a website is usable by making wire frames, but how do you know if that site, product, or store is emotionally engaging? We build quick, iterative environments where emotions can be tested and improved. Emphasis is on setting up the right motivation (users always have to buy what they pick), pressures (can you buy the laptop in 10 minutes?), and environment (competitors’ products better be on the shelf too). Once we see where customers are stressed or miss the fun part, we change the space on a daily, iterative cycle. Within two to three weeks, we can tell how to structure a new offering for a great experience. Seldom do the emotions we hope to create happen on the first try; emotion prototyping delivers the experience we want. We hope to better understand the benefits of emotion prototyping, especially while using the skin conductance sensor.

Emotional DJ
The technology in this project changes facial expressions in videos without the system knowing anything in particular about the person's face ahead of time. There are a few reasons to create something like this: first, it provides an artistic tool with which to alter photos or videos; second, it could be set up to let people open-endedly explore their facial communication and expressiveness by playing with a real-time video of their own current face; finally, E-DJ demonstrates an unexpected way in which we can't always trust the video information we love to consume.

Emotional-Social Intelligence Toolkit
Social-emotional communication difficulties lie at the core of autism spectrum disorders, making interpersonal interactions overwhelming, frustrating, and stressful. We are developing the world's first wearable affective technologies to help the growing number of individuals diagnosed with autism—approximately 1 in 150 children in the United States—learn about nonverbal communication in a natural, social context. We are also developing technologies that build on the nonverbal communication that individuals are already using to express themselves, to help families, educators, and other persons who deal with autism spectrum disorders to better understand these alternative means of nonverbal communication.

Enhanced Sensory Perception
As the population ages, acuity in one or more sensory channels often diminishes or may be totally lost. Augmenting or compensating for loss in the perceptual system by taking advantage of sensory data outside the normal human range and mapping it to meaningful perceptual information has the potential of giving an ordinary person enhanced sensory perception (ESP). Sensory deficiency is not restricted to any particular segment of the population, however. For example, we tend to be myopic about ourselves, and thus can benefit from psychological mirrors in the form of trainers or therapists who can assess and guide our physical and/or mental development. In this spirit, "Reflective Biometrics" is a novel approach to analyzing and interpreting biometric sensory information for self monitoring and examination. It is self-examination via technology as a mirror. Biometric technologies in service of the individual can serve as reflectors that enhance our self-awareness, self-understanding, and health, and they can facilitate our interaction with computers and with each other by augmenting our perceptual system.

Evaluation Tool for Recognition of Social-Emotional Expressions from Facial-Head Movements
To help people improve their reading of faces during natural conversations, we developed a video tool to evaluate this skill. We collected over 100 videos of conversations between pairs of both autistic and neurotypical people, each wearing a Self-Cam. The videos were manually segmented into chunks of 7-20 seconds according to expressive content, labeled, and sorted by difficulty—all tasks we plan to automate using technologies under development. Next, we built a rating interface including videos of self, peers, familiar adults, strangers, and unknown actors, allowing for performance comparisons across conditions of familiarity and expression. We obtained reliable identification (by coders) of categories of smiling, happy, interested, thinking, and unsure in the segmented videos. The tool was finally used to assess recognition of these five categories for eight neurotypical and five autistic people. Results show some autistics approaching the abilities of neurotypicals while several score just above random.

Exploring Temporal Patterns of Smile
A smile is a multi-purpose expression. We smile to express rapport, polite disagreement, delight, sarcasm, and often, even frustration. Is it possible to develop computational models to distinguish among smiling instances when delighted, frustrated, or just being polite? In our ongoing work, we demonstrate that it is useful to explore how the patterns of smile evolve through time, and that while a smile may occur in positive and in negative situations, its dynamics may help to disambiguate the underlying state.

Externalization Toolkit
We propose a set of customizable, easy-to-understand, and low-cost physiological toolkits in order to enable people to visualize and utilize autonomic arousal information. In particular, we aim for the toolkits to be usable in one of the most challenging usability conditions: helping individuals diagnosed with autism. This toolkit includes: wearable, wireless, heart-rate and skin-conductance sensors; pendant-like and hand-held physiological indicators hidden or embedded into certain toys or tools; and a customized software interface that allows caregivers and parents to establish a general understanding of an individual's arousal profile from daily life and to set up physiological alarms for events of interest. We are evaluating the ability of this externalization toolkit to help individuals on the autism spectrum to better communicate their internal states to trusted teachers and family members.

EyeJacking: See What I See
While modern communication technologies mean that we can connect to more people, these connections lack the affective subtleties inherent in situated interactions. EyeJacking is an application for sharing experiences, in which one or more persons "eyejack" a person's visual field to share what he or she sees. Using a wearable camera/microphone system, remote interaction partners can share an experience first-hand and play an active role in shaping it. We explore the application of EyeJacking as a tool for situated learning for individuals on the autism spectrum, where parents, caregivers, or peers could "eyejack" and tag the world remotely. We also explore the application of EyeJacking to leverage the power of the masses to bootstrap people-sense abilities in robots.

FaceSense: Affective-Cognitive State Inference from Facial Video
People express and communicate their mental states—such as emotions, thoughts, and desires—through facial expressions, vocal nuances, gestures, and other non-verbal channels. We have developed a computational model that enables real-time analysis, tagging, and inference of cognitive-affective mental states from facial video. This framework combines bottom-up, vision-based processing of the face (e.g., a head nod or smile) with top-down predictions of mental-state models (e.g., interest and confusion) to interpret the meaning underlying head and facial signals over time. Our system tags facial expressions, head gestures, and affective-cognitive states at multiple spatial and temporal granularities in real time and offline, in both natural human-human and human-computer interaction contexts. A version of this system is being made available commercially by Media Lab spin-off Affectiva, indexing emotion from faces. Applications range from measuring people's experiences to training tools for people on the autism spectrum and people with nonverbal learning disabilities.
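
To make the bottom-up/top-down combination concrete, here is a minimal sketch of one way such a fusion can work: a Bayesian filter whose transition model encodes the top-down expectation that mental states persist, and whose likelihoods score bottom-up facial events. The states, events, and all probabilities are hypothetical illustrations, not FaceSense's actual models.

```python
import numpy as np

STATES = ["interested", "confused"]            # hypothetical mental states
EVENTS = ["head_nod", "smile", "brow_furrow"]  # hypothetical facial events

# Bottom-up model, P(event | state); numbers are illustrative only.
likelihood = np.array([
    [0.5, 0.4, 0.1],   # interested
    [0.2, 0.1, 0.7],   # confused
])
# Top-down model, P(state_t | state_{t-1}); mental states tend to persist.
transition = np.array([
    [0.9, 0.1],
    [0.1, 0.9],
])

def filter_step(belief, event_idx):
    """One Bayesian filtering step: predict with the transition model, then
    reweight by how well the observed facial event fits each state."""
    predicted = transition.T @ belief
    updated = predicted * likelihood[:, event_idx]
    return updated / updated.sum()

belief = np.array([0.5, 0.5])
for ev in ["brow_furrow", "brow_furrow", "head_nod"]:
    belief = filter_step(belief, EVENTS.index(ev))
    print(dict(zip(STATES, belief.round(3))))
```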

Facial Expression Analysis Over the Web
This work builds on our earlier work with FaceSense, created to help automate the understanding of facial expressions, both cognitive and affective. The FaceSense system has now been made available commercially by Media Lab spinoff Affectiva as Affdex. In this work we present the first project analyzing facial expressions at scale over the Internet. The interface analyzes the participants' smile intensity as they watch popular commercials. They can compare their responses to an aggregate from the larger population. The system also allows us to crowd-source data for training expression recognition systems and to gain better understanding of facial expressions under natural at-home viewing conditions instead of in traditional lab settings.

Fostering Affect Awareness and Regulation in Learning
Sometimes learners have to focus while experiencing strong emotions (e.g., family problems). They may also face challenges in persevering when encountering repeated failures in problem solving. The ability to know what one is feeling (e.g., worried, frustrated) and rise above it and handle the situation productively involves meta-affective skills. With such skills, a learner feeling "I can't do this; I want to quit," might instead think, "I am frustrated, but this is OK—it happens to experts. I should look for a different way to solve this." This research develops theory and technology to help learners develop meta-affective skills. Two recent achievements are the development of (1) a technology with machine common-sense emotion reasoning that enables teenage girls to reflect on emotions in stories they have constructed and improve their affect awareness; and (2) a technology to help students become stronger learners even when they feel like quitting.

Frame It
Frame It is an interactive, blended, tangible-digital puzzle game intended as a play-centered teaching and therapeutic tool. Current work is focused on the development of a social-signals puzzle game for children with autism that will help them recognize social-emotional cues from information surrounding the eyes. In addition, we are investigating if this play-centered therapy results in the children becoming less averse to direct eye contact with others. The study uses eye-tracking technology to measure gaze behavior while participants are exposed to images and videos of social settings and expressions. Results indicate that significant changes in expression recognition and social gaze are possible after repeated uses of the Frame It game platform.

Galvactivator
The galvactivator is a glove-like wearable device that senses the wearer's skin conductivity and maps its values to a bright LED display. Increases in skin conductivity across the palm tend to be good indicators of physiological arousal, causing the galvactivator display to glow brightly. The galvactivator has many potentially useful purposes, ranging from self-feedback for stress management, to facilitation of conversation between two people, to new ways of visualizing mass excitement levels in performance situations or visualizing aspects of arousal and attention in learning situations. One of the findings in mass-communication settings was that people tended to "glow" when a new speaker came onstage, and during live demos, laughter, and live audience interaction. They tended to "go dim" during PowerPoint presentations. In smaller educational settings, students have commented on how they tend to glow when they are more engaged with learning.
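
The core transform is a simple mapping from palm conductance to display brightness. The sketch below illustrates one plausible version of that mapping; the calibration range and 8-bit PWM output are assumptions, not the device's actual firmware.

```python
def conductance_to_brightness(microsiemens, lo=1.0, hi=20.0):
    """Map a palm skin-conductance reading (microsiemens) onto an 8-bit
    PWM brightness value, clamped to an assumed calibration range."""
    clamped = max(lo, min(hi, microsiemens))
    return round(255 * (clamped - lo) / (hi - lo))

for reading in (0.5, 5.0, 12.0, 25.0):
    print(f"{reading} uS -> brightness {conductance_to_brightness(reading)}")
```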

Gene Expression Data Analysis
This research aims to classify gene expression data sets into different categories, such as normal vs. cancer. The main challenge is that thousands of genes are measured in the micro-array data, while only a small subset of genes are believed to be relevant for disease classification. We have developed a novel approach called "predictive automatic relevance determination"; this method brings Bayesian tools to bear on the problem of selecting which genes are relevant, and extends our earlier work on the development of the "expectation propagation" algorithm. In our simulations, the new method outperforms several state-of-the-art methods, including support-vector machines with feature selection and relevance-vector machines.
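
For readers unfamiliar with relevance determination, the sketch below uses scikit-learn's ARDRegression as a stand-in (it is not the group's predictive-ARD or expectation-propagation code) to show the characteristic behavior: most gene weights are driven toward zero, and only the relevant ones survive.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
n_samples, n_genes = 40, 200
X = rng.normal(size=(n_samples, n_genes))   # synthetic expression levels
# Only genes 0 and 1 actually drive the outcome.
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=n_samples)

model = ARDRegression().fit(X, y)
relevant = np.flatnonzero(np.abs(model.coef_) > 0.1)
print("genes kept by ARD:", relevant)       # typically [0 1]
```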

Gesture Guitar
Emotions are often conveyed through gesture. Instruments that respond to gestures offer musicians new, exciting modes of musical expression. This project gives musicians wireless, gesture-based control over guitar effects parameters.

Girls Involved in Real-Life Sharing
In this research, a proactive emotional health system, geared toward supporting emotional self-awareness and empathy, was built as a part of a long-term research plan for understanding the role digital technology can play in helping people to reflect on their beliefs, attitudes, and values. The system, G.I.R.L.S. (Girls Involved in Real-Life Sharing), allows users to reflect actively upon the emotions related to their situations through the construction of pictorial narratives. The system employs common-sense reasoning to infer affective content from the users' stories and support emotional reflection. Users of this new system were able to gain new knowledge and understanding about themselves and others through the exploration of authentic and personal experiences. Currently, the project is being turned into an online system for use by school counselors.

Guilt Detection
The goal of this project is to produce a guilt detector. We have created an experiment that is designed to produce feelings of guilt of varying levels in different groups while we record EKG and skin conductivity. By examining the differences in physiology across the conditions, we have explored how one might build a classifier to determine which condition, and thus which level of guilt, an individual is experiencing.

HandWave
HandWave is a small, wireless, networked skin conductance sensor that can be worn or used in many different form factors. Skin conductance is the best known measure of arousal (whether emotional, cognitive, or physical) and this device makes it easy to gather this information from mobile users. Many existing affective computing systems make use of sensors that are inflexible and often physically attached to supporting computers. In contrast, HandWave allows an additional degree of flexibility by providing ad hoc wireless networking capabilities to a wide variety of Bluetooth devices as well as adaptive biosignal amplification. As a consequence, HandWave is useful in games, tutoring systems, experimental data collection, and augmented journaling, among other applications. HandWave builds on the earlier Galvactivator project.
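
As a consumer-side illustration, the sketch below assumes HandWave-style samples arrive as (timestamp, microsiemens) pairs from some Bluetooth serial link; it smooths the stream and flags sudden rises in skin conductance. The smoothing constant and threshold are illustrative, not part of the actual device.

```python
def detect_arousal_events(samples, alpha=0.1, threshold=0.05):
    """Exponentially smooth a (timestamp, microsiemens) stream and flag
    timestamps where the smoothed level jumps by more than `threshold`."""
    events, smoothed = [], None
    for t, value in samples:
        prev = smoothed
        smoothed = value if prev is None else alpha * value + (1 - alpha) * prev
        if prev is not None and smoothed - prev > threshold:
            events.append(t)
    return events

stream = [(0.0, 2.0), (0.5, 2.0), (1.0, 2.1), (1.5, 4.0), (2.0, 4.1)]
print(detect_arousal_events(stream))  # -> [1.5, 2.0]
```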

Heart-Rate and Heart-Rate Variability Sensing
We are developing wearable sensors that measure cardiovascular parameters such as heart rate and heart rate variability (HRV) in real time. HRV provides a sensitive index of autonomic nervous system activity. These sensors will be capable of communication with mobile devices such as the iPhone and iPod Touch.
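
Two standard HRV indices, SDNN and RMSSD, can be computed directly from inter-beat (RR) intervals; the sketch below shows the arithmetic. It is a textbook illustration, not the sensors' on-device algorithm.

```python
import math

def sdnn(rr_ms):
    """Standard deviation of RR intervals (overall variability)."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive RR differences (a vagal-tone proxy)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 797, 830, 845, 810, 798]  # illustrative inter-beat intervals, ms
print(f"SDNN = {sdnn(rr):.1f} ms, RMSSD = {rmssd(rr):.1f} ms")
```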

Human Motion Signatures
Given motion capture samples of Charlie Chaplin's walk, is it possible to synthesize other motions—say, ascending or descending stairs—in his distinctive style? More generally, in analogy with handwritten signatures, do people have characteristic motion signatures that individualize their movements? If so, can these signatures be extracted from example motions? Furthermore, can extracted signatures be used to recognize, say, a particular individual's walk subsequent to observing examples of other movements produced by this individual? We are developing an algorithm that extracts motion signatures and uses them in the animation of graphical characters. For example, given a corpus of walking, stair ascending, and stair descending motion data collected over a group of subjects, plus a sample walking-motion for a new subject, our algorithm can synthesize never-before-seen ascending and descending motions in the distinctive style of this new individual.
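
As a heavily simplified illustration of the signature idea (not the project's actual algorithm), the sketch below treats a subject's style as a per-joint affine map learned from their walk relative to a group average, then transfers that map to an unseen motion category.

```python
import numpy as np

rng = np.random.default_rng(3)
avg_walk = rng.normal(size=(100, 6))     # group-average walk (frames x joints)
avg_ascend = rng.normal(size=(100, 6))   # group-average stair ascent
new_subject_walk = avg_walk * 1.2 + 0.3  # their "style": exaggerated + offset

# Fit the signature as a per-joint affine map from average walk to this walk.
scale = new_subject_walk.std(axis=0) / avg_walk.std(axis=0)
shift = new_subject_walk.mean(axis=0) - scale * avg_walk.mean(axis=0)

# Transfer the signature to a motion never observed for this subject.
synth_ascend = avg_ascend * scale + shift
print(synth_ascend.shape)  # (100, 6): stair ascent in the new subject's style
```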

Improving Sleep-Wake Schedule Using Sleep Behavior Visualization and a Bedtime Alarm
Humans need sleep, along with food, water, and oxygen, to survive. With about one-third of our lives spent sleeping, there has been increased attention and interest in understanding sleep and the overall state of our "sleep health." The rapid adoption of smartphones, along with a growing number of sleep tracking applications for these devices, presents an opportunity to use phones to encourage better sleep hygiene. Putting off going to bed and being unable to stick to a consistent bedtime can lead to inadequate sleep time, which in turn affects quality of life and overall wellbeing. To help address this problem, we developed two applications, Lights Out and Sleep Wallpaper, which provide a sensor-based bedtime alarm and a connected peripheral display on the wallpaper of the user's mobile phone to promote awareness through sleep data visualization.

In Search of Wonder: Measuring Our Response to the Miraculous
The wonder that occurs while watching a good magic trick or admiring a gorgeous natural vista is a strong emotion that has not been well studied. Educators, media producers, entertainers, scientists, and magicians could all benefit from a more robust understanding of wonder. A new model was developed, and an experiment was conducted to investigate how several variables affect the enjoyment of magic tricks. In the experiment, 70 subjects watched 10 videos of magic while their responses and reactions to the tricks were recorded. Some individuals were shown the explanations of the magic tricks to gauge the impact of explanation on enjoyment. The style of presentation was varied between two groups to compare the effect of magic presented as a story against magic presented as a puzzle. Presentation style has an effect on magic enthusiasts' enjoyment, and a story-oriented presentation is associated with individuals being more generous toward a charity.

Infant Monitoring and Communication
We have been developing comfortable, safe, attractive physiological sensors that infants can wear around the clock to wirelessly communicate their internal physiological state changes. The sensors capture sympathetic nervous system arousal, temperature, physical activity, and other physiological indications that can be processed to signal changes in sleep, arousal, discomfort or distress, all of which are important for helping parents better understand the internal state of their child and what things stress or soothe their baby. The technology can also be used to collect physiological and circadian patterns of data in infants at risk for developmental disabilities.

INNER-active Journal
The purpose of the INNER-active Journal system is to provide a way for users to reconstruct their emotions around events in their lives, and to see how recall of these events affects their physiology. Expressive writing, a task in which the participant is asked to write about extremely emotional events, is presented as a means towards story construction. Previous use of expressive writing has shown profound benefits for both psychological and physical health. In this system, measures of skin conductivity, instantaneous heart rate, and heart stress entropy are used as indicators of activities occurring in the body. Users have the ability to view these signals after taking part in an expressive writing task.

Interface Tailor
The Interface Tailor is an agent that attempts to adapt a system in response to affective feedback. Frustration is being used as a fitness function to select between a wide variety of different system behaviors. Currently, the Microsoft Office Assistant (or Paperclip) is one example interface that is being made more adaptive. Ultimately the project seeks to provide a generalized framework for making all software more tailor-able.

Learning Companion
"I can't do this" and "I'm not good at this" are common statements made by kids while trying to learn. Usually triggered by affective states of confusion, frustration, and hopelessness, these statements represent some of the greatest problems left unaddressed by educational reform. Education has emphasized conveying a great deal of information and facts, and has not modeled the learning process. When teachers present material to the class, it is usually in a polished form that omits the natural steps of making mistakes (feeling confused), recovering from them (overcoming frustration), deconstructing what went wrong (not becoming dispirited), and finally starting over again (with hope and maybe even enthusiasm). Learning naturally involves failure and a host of associated affective responses. This project aims to build a computerized learning companion that facilitates the child's own efforts at learning. The goal of the companion is to help keep the child's exploration going, by occasionally prompting with questions or feedback, and by watching and responding to the affective state of the child—watching especially for signs of frustration and boredom that may precede quitting, for signs of curiosity or interest that tend to indicate active exploration, and for signs of enjoyment and mastery, which might indicate a successful learning experience. The companion is not a tutor that knows all the answers but rather a player on the side of the student, there to help him or her learn, and in so doing, learn how to learn better.

Lux Meter: Real-Time Feedback in Ambient Light Environment
Light is an important factor in the regulation of all kinds of circadian rhythms in our body, such as heart rate, blood pressure, digestion, and mood. This project aims to measure the light environment around users, and provide real-time feedback about appropriate light intensity and color depending on the time of day.

MACH: My Automated Conversation coacH
MACH, My Automated Conversation coacH, is a system for people to practice social interactions in face-to-face scenarios. MACH consists of a 3D character that can "see," "hear," and make its own "decisions" in real time. The system was validated in the context of job interviews with 90 MIT undergraduate students. Students who interacted with MACH demonstrated significant performance improvement compared to the students in the control group. We are currently expanding this technology to open up new possibilities in behavioral health (e.g., treating people with Asperger syndrome, social phobia, PTSD) as well as designing new interaction paradigms in human-computer interaction and robotics.

Machine Learning and Pattern Recognition with Multiple Modalities
This project develops new theory and algorithms to enable computers to make rapid and accurate inferences from multiple modes of data, such as determining a person's affective state from multiple sensors—video, mouse behavior, chair pressure patterns, typed selections, or physiology. Recent efforts focus on understanding the level of a person's attention, useful for things such as determining when to interrupt. Our approach is Bayesian: formulating probabilistic models on the basis of domain knowledge and training data, and then performing inference according to the rules of probability theory. This type of sensor fusion work is especially challenging due to problems of sensor channel drop-out, different kinds of noise in different channels, dependence between channels, scarce and sometimes inaccurate labels, and patterns to detect that are inherently time-varying. We have constructed a variety of new algorithms for solving these problems and demonstrated their performance gains over other state-of-the-art methods.
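
One simple instance of the fusion problem this paragraph describes is naive-Bayes combination of per-channel likelihoods, with dropped-out channels simply omitted from the product. The channels, states, and numbers below are hypothetical illustrations, not the project's models.

```python
import math

def fuse(channel_likelihoods, prior):
    """Naive-Bayes fusion over whichever channels are currently live.
    channel_likelihoods: {channel: {state: P(current reading | state)}};
    channels that have dropped out are simply absent from the dict."""
    log_post = {s: math.log(p) for s, p in prior.items()}
    for per_state in channel_likelihoods.values():
        for s in log_post:
            log_post[s] += math.log(per_state[s])
    peak = max(log_post.values())
    unnorm = {s: math.exp(v - peak) for s, v in log_post.items()}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

# The chair-pressure channel has dropped out; fuse video and mouse only.
print(fuse(
    {"video": {"attentive": 0.7, "distracted": 0.3},
     "mouse": {"attentive": 0.2, "distracted": 0.8}},
    prior={"attentive": 0.5, "distracted": 0.5},
))
```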

Making Engaging Concerts
Working with the New World Symphony, we measured participants' skin conductance as they attended a classical concert for the first time. With the sensor technology, we noted times when the audience reacted or engaged with the music and other times when they became bored and drifted away. Our overall findings suggest that transitions, familiarity, and visual supplements can make concerts accessible and exciting for new concertgoers. We hope this work can help entertainment industries better connect with their customers and refine the presentation of their work so that it can best be received by a more diverse audience.

Measuring Customer Experiences with Arousal
How can we better understand people’s emotional experiences with a product or service? Traditional interview methods require people to remember their emotional state, which is difficult. We use psychophysiological measurements such as heart rate and skin conductance to map people’s emotional changes across time. We then interview people about times when their emotions changed, in order to gain insight into the experiences that corresponded with the emotional changes. This method has been used to generate hundreds of insights with a variety of products including games, interfaces, therapeutic activities, and self-driving cars.

Modular Light for Better Sleep
This project aims to build a modular lighting system where users can customize the design and lighting patterns.

Moral Sensors
The computer's emerging capacity to communicate an individual's affect raises critical ethical concerns. Additionally, designers of perceptual computer systems face moral decisions about how the information gathered by computers with sensors can be used. As humans, we have ethical considerations that come into play when we observe and report each other's behavior. Computers, as they are currently designed, do not employ such ethical considerations. This project assesses the ethical acceptability of affect sensing in three different adversarial contexts, where within each context there are also different kinds of motivations (self-oriented and charity-oriented) for the individuals to perform as well as they can.

Mouse-Behavior Analysis and Adaptive Relational Agents
The goal of this project is to develop tools to sense and adapt to a user's affective state based on his or her mouse behavior. We are developing algorithms to detect frustration level for use in usability studies. We are also exploring how more permanent personality characteristics and changes in mood are reflected in the user’s mouse behavior. Ultimately, we seek to build adaptive relational agents that tailor their interactions with the user based on these sensed affective states.

Mr. Java: Customer Support
Mr. Java is the Media Lab's wired coffee machine, which keeps track of usage patterns and user preferences. The focus of this project is to give Mr. Java a tangible customer-feedback system that collects data on user complaints or compliments. "Thumbs-up" and "thumbs-down" pressure sensors were built and their signals integrated with the state of the machine to gather data from customers regarding their ongoing experiences with the machine. Potentially, the data gathered can be used to learn how to improve the system. The system also portrays an affective, social interface to the user: helpful, polite, and attempting to be responsive to any problems reported.

Objective Self: Understanding Internal Responses
How can technology help us understand ourselves better? In order to measure the physiological arousal of children with sensory challenges such as ASD and ADHD, tools were developed to help children understand and control what makes them overexcited. Using iCalm hardware, children in therapy sessions measured their arousal while eating, throwing tantrums, playing in ball pits, and making challenging choices. Beyond progressive findings in the field of occupational therapy, this research is a basis for bio-information technology: tools to help children, their parents, and their teachers better understand what is going on in their bodies in a comfortable, affordable, and adaptable way. With future work, technology will be developed to help children understand and control their own internal states. In addition, this project will go beyond children’s therapy—helping adults in various settings including business and home life.

Online Emotion Recognition
This project is aimed at building a system to recognize emotional expression given four physiological signals. Data was gathered from a graduate student with acting experience as she intentionally tried to experience eight different emotional states daily over a period of several weeks. Several features are extracted from each of her physiological signals. The first classifiers gave a classification result of 88% success when discriminating among 3 emotions (pure chance would be 33.3%), and of 51% when discriminating among 8 emotions (pure chance 12.5%). New, improved classifiers reach an 81% success rate when discriminating among all 8 emotions. Furthermore, an online classifier has now been built using the old method; it achieves a success rate only 8% lower than its offline counterpart (i.e., 43%). We expect this percentage to increase sharply when the new methods are adapted to run online.
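
The overall pipeline shape (summary features per physiological signal, then a trained classifier) can be sketched as below; the features, classifier choice, and synthetic data are illustrative stand-ins, not the project's actual method.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def features(signal):
    """Per-signal summary features: mean, std, and mean slope."""
    s = np.asarray(signal, dtype=float)
    return [s.mean(), s.std(), np.diff(s).mean()]

rng = np.random.default_rng(1)
# Synthetic stand-ins for one physiological channel under two emotions.
X = [features(rng.normal(loc=c, size=50)) for c in (0.0, 1.0) for _ in range(20)]
y = [c for c in (0, 1) for _ in range(20)]

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([features(rng.normal(loc=1.0, size=50))]))  # -> [1]
```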

Passive Wireless Heart-Rate Sensor
We have developed a low-cost device that can wirelessly detect a beating heart over a short distance (1 m) without requiring any sensor placed on the person's body. This device can be used for wireless medical and health applications as well as for security and safety applications, such as monitoring automobile and truck drivers or securing ATMs. We have also created a small battery-powered version of this sensor that can be worn on a person's clothing but does not require touching the person's skin.

Personal Heart-Stress Monitor
The saying "if you can't measure it, you can't manage it" may be apt for stress. Many people are unaware of their stress level, and of what is good or bad for it. The issue is complicated by the fact that while too much stress is unhealthy, a certain amount of stress can be healthy, as it motivates and energizes. The "right" level varies with temperament, task, and other factors, many of which are unknown. There seems to be no data analyzing how stress levels vary for the average healthy individual over day-to-day activities. We would like to build a device that helps to gather and present data for improving an individual's understanding of both healthy and unhealthy stress in his or her life. The device itself should be comfortable and should not increase the user's stress. (It is noteworthy that stress monitoring is also important in human-computer interaction for testing new designs.) Currently, we are building a new, wireless, stress-monitoring system by integrating FitSense's heart-rate sensors and Motorola's iDEN cell phone with our heart-rate-variability estimation algorithm.

Posture Recognition Chair
We have developed a system to recognize posture patterns and associated affective states in real time, in an unobtrusive way, from a set of pressure sensors on a chair. This system discriminates states of children in learning situations, such as when the child is interested, or is starting to take frequent breaks and looking bored. The system uses pattern recognition techniques, while watching natural behaviors, to "learn" which behaviors tend to accompany which states. The system thus detects surface-level behaviors (postures) and their mappings during a learning situation in an unobtrusive manner, so that it does not interfere with the natural learning process. Through the chair, we can reliably detect nine static postures and four temporal patterns associated with affective states.
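
At its simplest, the recognition step can be pictured as matching a pressure map against per-posture templates, as in the sketch below; the toy grids and nearest-template rule are illustrative assumptions, not the chair's actual classifier.

```python
import numpy as np

TEMPLATES = {  # toy 2x3 mean pressure maps per posture
    "sitting upright": np.array([[0.9, 0.9, 0.9], [0.8, 0.8, 0.8]]),
    "leaning forward": np.array([[0.2, 0.2, 0.2], [1.0, 1.0, 1.0]]),
    "slumped back":    np.array([[1.0, 1.0, 1.0], [0.3, 0.3, 0.3]]),
}

def classify_posture(pressure_map):
    """Return the posture whose template is nearest to the reading."""
    return min(TEMPLATES, key=lambda p: np.linalg.norm(TEMPLATES[p] - pressure_map))

reading = np.array([[0.25, 0.2, 0.3], [0.95, 1.0, 0.9]])
print(classify_posture(reading))  # -> leaning forward
```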

Prediction Game and Experience Sharing Market for Forecasting Marketplace Success
We have developed a novel market game, Prediction Game and Experience Sharing (PreGES, pronounced PreGuess), that harnesses people's collective prediction and experience sharing to forecast the success or failure of new items (e.g., products, services, UI designs). Companies can register their new items on this market (as a testbed) to ask for collective opinions. In each PreGES trial session, participants make their own best predictions of other people's overall opinions about the new items, to earn incentives (e.g., real opportunities to experience the items) and to have fun in a gambling-like game. As a participant's guess (or portfolio) approaches the collective guess of all participants, he or she has a greater chance of winning an incentive. Participants improve the accuracy of their next prediction by sharing experiences. Over successive trial sessions, the participants' collective prediction converges to one common opinion, forecasting the success or failure of the new items.
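
The incentive rule can be illustrated with toy arithmetic: weight each participant's chance of winning by closeness to the collective mean. The scoring function below is a hypothetical stand-in, since the actual PreGES payoff rule is not specified here.

```python
def win_chances(predictions):
    """Map each participant's numeric prediction to a normalized weight
    that decays with distance from the collective mean prediction."""
    mean = sum(predictions.values()) / len(predictions)
    weights = {p: 1.0 / (1.0 + abs(v - mean)) for p, v in predictions.items()}
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

print(win_chances({"ana": 7.0, "ben": 5.0, "cem": 9.5}))
```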

Recognizing Affect in Speech
This research project is concerned with building computational models for the automatic recognition of affective expression in speech. We are completing an investigation of how acoustic parameters extracted from the speech waveform (related to voice quality, intonation, loudness, and rhythm) can help disambiguate the affect of the speaker without knowledge of the textual component of the linguistic message. We have carried out a multi-corpus investigation, which includes data from actors and spontaneous speech in English, and evaluated the model's performance. In particular, we have shown that the model exhibits speaker-dependent performance that reflects human evaluation of these particular data sets, and that, held against human recognition benchmarks, the model begins to perform competitively.
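
The kinds of acoustic parameters mentioned (intonation, loudness) can be extracted with off-the-shelf tools; the sketch below uses librosa on a bundled example waveform as a stand-in for speech. The feature set is illustrative, not the project's.

```python
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex("trumpet"))    # stand-in mono waveform
f0 = librosa.yin(y, fmin=80, fmax=400, sr=sr)  # intonation contour, Hz
rms = librosa.feature.rms(y=y)[0]              # loudness contour

features = {
    "pitch_mean_hz": float(np.mean(f0)),
    "pitch_range_hz": float(np.max(f0) - np.min(f0)),
    "loudness_var": float(rms.var()),
}
print(features)
```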

Reinventing the Retail Experience
With skin conductance sensors, we map out what frustrates and excites customers as they shop—from layout to wanting to touch the product. Our work has helped a variety of large retailers innovate on what it means to shop. Findings have focused on reducing the stress of choices and learning while surprising customers in new ways. With the sensor technology we can pinpoint moments when customers are overwhelmed and then build out new ways to make retail engaging again.

Relational Agents
Relational Agents are computational artifacts designed to build and maintain long-term, social-emotional relationships with their users. Central to the notion of relationship is that it is a persistent construct, spanning multiple interactions. Thus, Relational Agents are explicitly designed to remember past history and manage future expectations in their interactions with users. Since face-to-face conversation is the primary context of relationship-building for humans, our work focuses on Relational Agents as a specialized kind of embodied conversational agent (animated humanoid software agents that use speech, gaze, gesture, intonation, and other nonverbal modalities to emulate the experience of human face-to-face conversation). One major achievement was the development of a Relational Agent for health behavior change, specifically in the area of exercise adoption. A study involving 100 subjects interacting with this agent over one month demonstrated that the relational agent was respected more, liked more, and trusted more, and that these ratings were maintained over time (unlike for the non-relational agent, where they were not only significantly lower overall, but also declined over time). People also expressed significantly greater ratings of perceived caring by the agent, and significantly more desire to keep working with the relational agent after the termination of the study.

RoCo: A Robotic Desktop Computer
A robotic computer that moves its monitor "head" and "neck," but that has no explicit face, is being designed to interact with users in a natural way for applications such as learning, rapport-building, interactive teaching, and posture improvement. In all these applications, the robot will need to move in subtle ways that express its state and promote appropriate movements in the user, but that don't distract or annoy. Toward this goal, we are giving the system the ability to recognize states of the user and also to have subtle expressions.

Self-Cam
The Self-Cam is a wearable camera apparatus consisting of a chest-mounted camera aimed at the wearer's face. Self-Cam was designed to be used in conjunction with a belt-mounted computer and real-time mental-state inference software that can provide visual, auditory, or tactile output as personal feedback for the wearer. As the camera faces inward, many privacy issues are avoided: only those who choose to wear the Self-Cam appear in the recorded video. Because the system rests on the chest, head movement can be seen and analyzed alongside facial expressions, and the light, simple structure allows it to be worn without physical discomfort. By wearing the Self-Cam, you can explore who you appear to be from the outside. The Self-Cam acts as an objective point of view that might help you understand yourself in a different light.

Sensor-Enabled Measurement of Stereotypy and Arousal in Individuals with Autism
A small number of studies support the notion of a functional relationship between movement stereotypy and arousal in individuals with ASD, such that changes in autonomic activity either precede or are a consequence of engaging in stereotypical motor movements. Unfortunately, it is difficult to generalize these findings, as previous studies fail to report reliability statistics demonstrating accurate identification of movement stereotypy start and end times, and use autonomic monitors that are obtrusive and thus only suitable for short-term measurement in laboratory settings. The current investigation further explores the relationship between movement stereotypy and autonomic activity in persons with autism by combining state-of-the-art ambulatory heart-rate monitors, which objectively assess arousal across settings, with wireless, wearable motion sensors and pattern-recognition software that can automatically and reliably detect stereotypical motor movements in individuals with autism in real time.
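
The detection pipeline described (windowed features from wearable accelerometers, then a classifier) might be sketched as below; the synthetic 2 Hz "rocking" signal, window size, and decision-tree model are all illustrative assumptions, not the study's actual method.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def window_features(acc, win=50):
    """Summarize each non-overlapping window of an acceleration stream with
    (std, dominant FFT magnitude); rhythmic rocking or hand-flapping shows
    up as a strong periodic component."""
    feats = []
    for i in range(0, len(acc) - win + 1, win):
        w = np.asarray(acc[i:i + win], dtype=float)
        spectrum = np.abs(np.fft.rfft(w - w.mean()))
        feats.append([w.std(), spectrum.max()])
    return np.array(feats)

rng = np.random.default_rng(2)
t = np.arange(500) / 50.0                       # 10 s at 50 Hz
rocking = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.normal(size=500)  # ~2 Hz
still = 0.1 * rng.normal(size=500)

X = np.vstack([window_features(rocking), window_features(still)])
y = [1] * 10 + [0] * 10
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict(window_features(rocking[:100])))  # -> [1 1]
```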

Shybot
Shybot is a personal mobile robot designed both to embody and to elicit reflection on shyness behaviors. Shybot is being designed to detect human presence and familiarity, using face detection and proximity sensing, in order to categorize people as friends or strangers for interaction. Shybot can also reflect elements of the anxious state of its human companion through LEDs and a spinning propeller. We designed this simple social interaction to open up a new direction of intervention for children living with autism. We hope that from minimal social interaction, a child with autism or a social anxiety disorder could reflect on and more deeply understand personal shyness behaviors, as a first step toward developing greater capacity for complex social interactions.

SmileSeeker: Customer and Employee Affect Tagging System
SmileSeeker is a novel, machine-vision system that captures and provides quantified information about nonverbal communication where social interactions naturally happen. For example, in banking services, tellers observe facial expressions, head gestures, and eye gaze of customers, but this tool lets them both observe their own expressions and analyze how these interact with those of the customer to influence their mutual experience. The tool allows either real-time or offline feedback to help people reflect on what these interactions mean and determine how to elicit better experiences, such as true customer delight. The first deployment of this project focuses on eliciting and capturing smiles, and doing so in a way that is respectful of both customer and employee feelings. This project will also explore ways to share this information and link it to outcomes such as banking fee reductions or donations to charity.

The Affective Remixer: Personalized Music Arranging
Affective Remixer is a real-time music-arranging system that reacts to immediate affective cues from a listener. Data was collected on the potential of certain musical dimensions to elicit change in a listener’s affective state using sound files created explicitly for the experiment through composition/production, segmentation, and re-assembly of music along these dimensions. Based on listener data, a probabilistic state transition model was developed to infer the listener’s current affective state. A second model was made that would select music segments and re-arrange ('re-mix') them to induce a target affective state.
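
The segment-selection logic can be illustrated with a toy transition model: given the listener's current inferred state, choose the segment most likely to induce the target state. All states, segments, and probabilities below are hypothetical, not the system's learned model.

```python
STATES = ["calm", "energized"]     # hypothetical listener states
# P(listener ends up in state | segment played); illustrative numbers.
SEGMENT_EFFECTS = {
    "sparse_ambient": {"calm": 0.8, "energized": 0.2},
    "dense_rhythmic": {"calm": 0.3, "energized": 0.7},
}

def pick_segment(target_state):
    """Choose the segment most likely to induce the target state."""
    return max(SEGMENT_EFFECTS, key=lambda seg: SEGMENT_EFFECTS[seg][target_state])

print(pick_segment("energized"))  # -> dense_rhythmic
```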

The Conductor's Jacket
The Conductor's Jacket is a unique wearable device that measures physiological and gestural signals. Together with the Gesture Construction, a musical software system, it interprets these signals and applies them expressively in a musical context. Sixteen sensors have been incorporated into the Conductor's Jacket in such a way as not to encumber or interfere with the gestures of a working orchestra conductor. The Conductor's Jacket system gathers up to sixteen data channels reliably at rates of 3 kHz per channel, and also provides real-time graphical feedback. Unlike many gesture-sensing systems, it not only gathers positional and accelerational data but also senses muscle tension from several locations on each arm. We will demonstrate the Gesture Construction, a musical software system that analyzes and performs music in real time based on the performer's gestures and breathing signals. A bank of software filters extracts several of the features that were found in the conductor study, including beat intensities and the alternation between arms. These features are then used to generate real-time expressive effects by shaping the beats, tempos, articulations, dynamics, and note lengths in a musical score.
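
As a toy illustration of the filter bank's job (not the Gesture Construction's actual filters), the sketch below extracts per-beat intensities from a rectified muscle-tension channel sampled at 3 kHz, using an assumed threshold and refractory gap.

```python
import numpy as np

def beat_intensities(emg, sr=3000, min_gap_s=0.3, threshold=0.5):
    """Return (time, peak) pairs for local maxima of the rectified signal
    that exceed `threshold` and are at least `min_gap_s` apart."""
    env = np.abs(np.asarray(emg, dtype=float))
    beats, last = [], -1e9
    for i in range(1, len(env) - 1):
        if env[i] > threshold and env[i] >= env[i - 1] and env[i] > env[i + 1]:
            t = i / sr
            if t - last >= min_gap_s:
                beats.append((t, env[i]))
                last = t
    return beats

t = np.arange(0, 2.0, 1 / 3000)
emg = np.sin(2 * np.pi * t) ** 8         # two beat-like bursts per second
print(len(beat_intensities(emg)))        # -> 4 beats in two seconds
```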

The Frustration of Learning Monopoly
We are looking at the emotional experience created when children learn games. Why do we start games with the most boring part, reading directions? How can we create a product that does not create an abundance of work for parents? Key insights generated from field work, interviews, and measurement of electrodermal activity are: kids become bored listening to directions ("it's like going to school"); parents feel rushed reading directions as they sense their children's boredom; children and parents struggle for power in interpreting and enforcing rules; children learn games by mimicking their parents; and children enjoy the challenge of learning new games.

Touch-Phone
The Touch-Phone was developed to explore the use of objects to mediate the emotional exchange in interpersonal communication. Through an abstract visualization of screen-based color changes, a standard telephone is modified to communicate how it is being held and squeezed. The telephone receiver includes a touch-sensitive surface which conveys the user's physical response over a computer network. The recipient sees a small colored icon on his computer screen which changes in real time according to the way his conversational partner is interacting with the telephone object.

Wearable Relational Devices for Stress Monitoring
We have created a system for data collection, annotation, and feedback that is part of a longer-term research interest in gathering data to understand more about stress and the physiological signals involved in its expression. First, we built a wearable apparatus for gathering data that allows the user to include as many accurate labels (annotations) as possible while going about natural daily activities. Gathering annotations is disruptive and likely to increase stress (thus interfering with the signals being measured). We hypothesized that empathetic ways of interrupting would be less stressful than non-empathetic ones, and found significant effects on many of the users' self-reported items, such as preference for the more empathetic system, as well as on behavioral items, such as the estimated number of times they were interrupted (significantly lower when the system was more empathetic).

What Do Facial Expressions Mean?
We are automating the recognition of positive and negative experiences (valence) and affect from facial expressions. We present a toolkit, Acume, for interpreting and visualizing facial expressions while people interact with products and concepts.
