Social Emotional Sensing Toolkit at the MIT Media Lab

This project builds a wearable system with mind-reading capabilities and is funded by an NSF SGER award. This site is a JavaScript-based wiki; a plain-text version is available at http://affect.media.mit.edu/esp/text/

Social Emotional Sensing Toolkit
This project builds on work developed by Dr. Rana el Kaliouby and [[Professor Peter Robinson|http://www.cl.cam.ac.uk/~pr]] at the [[Computer Laboratory|http://www.cl.cam.ac.uk/]] at the [[University of Cambridge|http://www.cam.ac.uk/]] between 2001 and 2004. El Kaliouby's [[dissertation|http://www.cl.cam.ac.uk/TechReports/UCAM-CL-TR-636.html]] describes significant progress in nonverbal perception and mental state inference; specifically, it presents an automated system for inferring complex mental states from head and facial displays in a video stream in real time. Several publications by el Kaliouby and Robinson describe the notion of an [[Emotional Hearing Aid]], an assistive tool for Autism Spectrum Disorders.
*[[The Emotional Hearing Aid: An Assistive Tool for Children with Asperger Syndrome|uais05.pdf]] (Rana el Kaliouby, Peter Robinson): Universal Access in the Information Society 4(2), DOI 10.1007/s10209-005-0119-0
* Tutorial on Autism Wearables at the International Symposium on Wearable Computers (ISWC 2007).
* Rana el Kaliouby gives a talk on "MindReading Machines: Technologies with PeopleSense" at the Royal Society, 27 September 2007. [[webcast|http://www.royalsoc.ac.uk/page.asp?id=4110]] [[slides|07-royalsoc.pdf]]
* The series premiere of Wired Science (http://www.pbs.org/kcet/wiredscience/) will air on PBS Wednesday, Oct 3rd (EST) and will feature coverage of our pilot studies at the Groden Center. Please check your local station and schedule using the "TV Schedules" link at the top of the Wired Science page.
* Tutorial on Autism and Affective and Social Computing at the International Conference on Affective Computing and Intelligent Interaction (ACII 2007).
* Evaluation Tool for Recognition of Social and Emotional Expressions from Facial and Head Movements. [[Read more|EvalTool]]
* Rosalind W. Picard and Rana el Kaliouby, along with Matthew Goodwin at the [[Groden Center|http://www.grodencenter.org]], awarded one of the NSF's very competitive collaborative research grants to continue their work on social-emotional technologies for autism spectrum disorders.
Social communication and emotion regulation difficulties lie at the core of Autism Spectrum Disorders (ASD), making interpersonal interactions overwhelming, frustrating and stressful. These difficulties often give the impression that persons on the autism spectrum are "choosing" to be disengaged from social interaction out of a lack of interest or desire, even when that is not the case. On the contrary, many persons on the autism spectrum who are now able to communicate write about their persistent attempts to seek interaction with others using unconventional nonverbal cues that were either misinterpreted or simply unnoticed by family members and others. Communication difficulties, combined with atypical visual and auditory perception in ASD, make traditional learning challenging and suggest that independent, spontaneous and sensory-based learning comes more naturally to persons with ASD.

The Massachusetts Institute of Technology (MIT) Media Laboratory, in collaboration with the Groden Center, is developing novel wearable, in situ social-emotional technology that helps individuals with ASD acquire an affinity for the social domain and improve their overall social communication abilities. We are also developing technologies that build on the nonverbal communication individuals on the autism spectrum already use to express themselves socially and emotionally, to help families, educators and others who work with autism spectrum disorders better understand these alternative means of nonverbal communication. Our work leverages advances in affect sensing and perception to (1) develop technologies that are sensitive to people's affective-cognitive states; (2) advance autism research; and (3) create new technologies that enhance the social-emotional intelligence of people diagnosed with autism, as well as those who are not. Several other projects are also underway, including work on autism and emotion regulation, and sensor and toy technologies for monitoring children. Read more in [[Research]] | [[Background]] | [[NSF Proposal]].
In psychology, theory of mind, or mind-reading, describes our ability to attribute mental states to others from their behavior and to use that knowledge to guide our own actions and predict those of others. Mind-reading is fundamental to social functioning, decision-making, perception and memory.
To help people improve their reading of faces during natural conversations, we developed a video tool to evaluate this skill. First, we collected over 100 videos of conversations between pairs of both autistic and neurotypical people, each of whom wore a Self-Cam. Next, the videos were manually segmented into chunks of 7-20 seconds according to expressive content, labeled, and sorted by difficulty; we plan to automate all of these tasks using technologies under development. We then built a rating interface including videos of self, peers, familiar adults, strangers, and unknown actors, allowing performance comparisons across conditions of familiarity and expression. We obtained reliable identification (by coders) of the categories smiling, happy, interested, thinking, and unsure in the segmented videos. The tool was finally used to assess recognition of these five categories by eight neurotypical and five autistic people. Results show some autistic participants approaching the abilities of the neurotypical participants, while several score just above chance. For more details, see [[Alea Teeters' Master's thesis|http://affect.media.mit.edu/pdfs/07.Teeters-sm.pdf]].
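To make the "just above chance" comparison concrete, here is a minimal scoring sketch in Python. It assumes a simple list of (true label, chosen label) pairs per rater (an illustrative format, not the tool's actual data files) and compares overall accuracy to the 20% chance level for a five-way choice among smiling, happy, interested, thinking, and unsure.
{{{
# Minimal scoring sketch (assumed data format, not the actual evaluation tool).
from collections import defaultdict

CATEGORIES = ["smiling", "happy", "interested", "thinking", "unsure"]
CHANCE = 1.0 / len(CATEGORIES)  # 20% for a five-way forced choice

def score_rater(responses):
    """responses: list of (true_label, chosen_label) pairs for one rater."""
    per_category = defaultdict(lambda: [0, 0])  # label -> [correct, total]
    for truth, choice in responses:
        per_category[truth][1] += 1
        if choice == truth:
            per_category[truth][0] += 1
    total = sum(t for _, t in per_category.values())
    overall = sum(c for c, _ in per_category.values()) / max(1, total)
    return overall, {k: c / t for k, (c, t) in per_category.items() if t}

# Example: one rater's answers on three clips.
overall, by_cat = score_rater([("happy", "happy"),
                               ("unsure", "thinking"),
                               ("interested", "interested")])
print(f"overall accuracy {overall:.2f} vs chance {CHANCE:.2f}")
print(by_cat)
}}}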
Several projects are underway:
* [[Social-emotional Interventions for Autism|WearCam]]
* [[Self Cam|http://affect.media.mit.edu/projects.php?id=2001]]
* [[Evaluation Tool for Recognition of Social and Emotional Expressions from Facial and Head Movements|EvalTool]]
* [[Autism and Emotion Regulation|EmotionRegulation]]
* [[Sensors and toy technologies for child monitoring|TechToy]] (with Laura Schulz at MIT BCS)
We have developed the first wearable camera system capable of perceiving and visualizing social-emotional information in real-time human interaction. Using a small wearable camera and video-pattern analysis algorithms, the system analyzes video of the wearer or interaction partner, tags it at multiple granularities (facial actions, communicative facial/head gestures, and emotions), and gives real-time and offline feedback through visual, audio and tactile channels; a simple sketch of this tagging step appears below. The research builds on [[Rana el Kaliouby|http://web.media.mit.edu/~kaliouby/]]'s doctoral research at the Computer Laboratory at the University of Cambridge, UK.

The wearable system aims to: (1) facilitate learning and systemizing of social-emotional cues; (2) promote self-reflection and perspective-taking; (3) allow wearers to study subtle nonverbal cues and share experiences with peers, family members, and caregivers; and (4) contribute new computational models and theories of social-emotional intelligence in machines. The project addresses open [[Research Challenges]] pertaining to (1) developing novel sensors and algorithms that measure and communicate a range of naturally evoked affective-cognitive states, and (2) exploring whether machines can augment social interactions in a way that improves human-to-human communication.

Our research explores the use of novel wearable technologies to promote people's social-emotional intelligence skills and communication, enabling them to systematically monitor their interactions, gain better insight into how they interact with others, and find ways to improve how they communicate. This includes addressing problems such as:
* [[Self Cam|http://affect.media.mit.edu/projects.php?id=2001]]
* Real-time mental state inference
* Elicitation and tagging of naturally evoked facial affect

A pilot study of the first prototype is currently underway at the Groden Center, Providence RI, to compare the efficacy of the wearable system to current gold-standard interventions for autism spectrum disorders (ASD). The results of this pilot will inform the next iterations of the technology, and a clinical study will then be conducted at the Groden Center. This human-centered, participatory approach to the co-design and use of technology draws on the experiences of individuals with ASD and their solutions to systematizing social interactions, empowering them to enhance their relationships while participating in the development of next-generation social-emotional intelligent technologies.
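The sketch below illustrates the multi-granularity tagging idea in a very reduced form, assuming hypothetical per-frame facial-action labels as input: frame-level actions are grouped into short windows, and each window is tagged with its dominant action. The labels, two-second window and majority-vote rule are assumptions for illustration, not the system's actual (far richer) models.
{{{
# Sketch of multi-granularity tagging: frames -> windows -> dominant action.
# All labels and the 2-second window are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Frame:
    t: float      # timestamp in seconds
    action: str   # e.g. "lip_corner_pull", "head_nod", "brow_raise"

def tag_windows(frames, window=2.0):
    """Group frames into fixed windows and tag each with its dominant action."""
    tags, bucket = [], []
    if not frames:
        return tags
    start = frames[0].t
    for f in frames:
        if f.t - start >= window:
            tags.append((start, Counter(a.action for a in bucket).most_common(1)[0][0]))
            start, bucket = f.t, []
        bucket.append(f)
    if bucket:
        tags.append((start, Counter(a.action for a in bucket).most_common(1)[0][0]))
    return tags

frames = [Frame(0.1, "head_nod"), Frame(0.5, "head_nod"), Frame(1.2, "brow_raise"),
          Frame(2.3, "lip_corner_pull"), Frame(3.0, "lip_corner_pull")]
print(tag_windows(frames))  # [(0.1, 'head_nod'), (2.3, 'lip_corner_pull')]
}}}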
Many infant and child development studies involve observing how young children play with toys. Data is collected by videotaping the children playing with the toys; researchers later review the tapes and manually enter data based on visual observations. Researchers are able to observe a wide variety of play styles over a range of ages, which is crucial to learning about the development of young minds. In collaboration with the Early Childhood Cognition Lab at MIT's Brain and Cognitive Sciences department, this project will develop a toy equipped with motion-detecting sensors such as accelerometers that wirelessly transmit movement data to an external computer, automatically tagging the child's play with a description of how the toy is being manipulated. Our long-term aim is to use the toy to explore play behaviors in infants and young children at risk of developmental disorders such as autism, with an eye toward early detection.
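As a rough illustration of the automatic tagging step, the sketch below classifies a short window of 3-axis accelerometer samples as still, rolled or shaken using the variance of the acceleration magnitude. The thresholds and category names are hypothetical and would need calibration on the real instrumented toy.
{{{
# Sketch: tag a window of 3-axis accelerometer samples as still / rolled / shaken.
# Thresholds are hypothetical and would need calibration on the real toy.
import math
import statistics

def tag_window(samples, still_var=0.02, shake_var=0.4):
    """samples: list of (x, y, z) accelerations in g for one window."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    var = statistics.pvariance(mags)
    if var < still_var:
        return "still"
    if var > shake_var:
        return "shaken"
    return "rolled"

print(tag_window([(0.0, 0.0, 1.0)] * 20))                    # still
print(tag_window([(0.1, 0.2, 1.0), (1.8, -1.5, 0.4)] * 10))  # shaken
}}}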
Tutorial on Autism Wearables at the International Symposium on Wearable Computers (ISWC 2007).

Tutorial on Autism and Affective-Social Computing at the International Conference on Affective Computing and Intelligent Interaction (ACII 2007).

A new MIT Media Lab course, [[MAS.962: Autism Theory and Technology|http://courses.media.mit.edu/2007spring/mas962/]], was offered during Spring 2007.

The course lays a foundation in autism theory and autism technology that significantly leverages and expands the MIT Media Lab's ability to pioneer new technology. Students not only develop new technologies, but also understand, help, and learn from people with autism, a fast-growing group that the CDC in 2005 estimated to include 1 in 150 school-age children ages 6-21. Students will gain an understanding of the basic challenges faced by people with autism and their families and caregivers, together with the fundamental theories that inform therapies and technologies for improving the autistic experience. The course also explores the converging challenges and goals of autism research and the development of technologies with people sense. We will advance ways technology can be used for early detection and intervention in autism, and will develop new technologies for measuring behavior in people with autism, enabling better theory development through more systematic collection of behavioral data.
People who have difficulty communicating verbally (such as many people with autism) sometimes send nonverbal messages that do not match what is happening inside them. For example, a child might look calm and receptive to learning, while having a heart rate of over 120 bpm and being on the verge of a meltdown or shutdown. This mismatch can lead to serious problems, including misunderstandings such as "he became aggressive for no reason." We are creating new technologies to address this fundamental communication problem and enable the first long-term, ultra-dense longitudinal data analysis of emotion-related physiological signals. We hope to (1) equip individuals with personalized tools to understand the regulatory influences of emotion on their own state (e.g., "what state helps me best maintain my attention and focus for learning?"); and (2) enable scientists to accurately measure and understand the role of emotion regulation in autism.
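A minimal sketch of the kind of mismatch detection described above: it watches a stream of heart-rate estimates and reports when the rate stays elevated for a sustained stretch, as in the example of a child who appears calm while exceeding 120 bpm. The threshold, duration and input format are illustrative assumptions, not the actual sensing pipeline.
{{{
# Sketch: flag sustained elevated heart rate from a stream of (time, bpm) samples.
# The 120 bpm threshold and 30 s duration are illustrative assumptions.
def sustained_elevation(samples, threshold_bpm=120, min_seconds=30):
    """samples: iterable of (timestamp_seconds, bpm). Returns alert start times."""
    alerts, start = [], None
    for t, bpm in samples:
        if bpm > threshold_bpm:
            if start is None:
                start = t
            elif t - start >= min_seconds:
                alerts.append(start)
                start = None  # re-arm so we only report once per episode
        else:
            start = None
    return alerts

stream = [(t, 125 if 10 <= t <= 50 else 80) for t in range(0, 60, 5)]
print(sustained_elevation(stream))  # [10]
}}}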
The following MIT undergraduate research opportunities (UROPs) are available for credit, pay or volunteer for Fall 2007, IAP/Spring and Summer 2008 (please indicate your preference when applying).
* [[AutismWearables]]
* [[AffectiveTagging]]
* [[MonologueDetector]]
* [[MindreadingRobots]]
* [[FaceTracker]]
* [[MindreadingAPI]]
* [[IMProductive]]
* [[MobileMindreading]]
* [[Walk-Cam]]

Our ideal UROP candidate has strong programming and hardware skills and is energetic and self-motivated.

For more information contact Rana el Kaliouby (kaliouby AT media DOT mit DOT edu). Please include a resume and a paragraph describing your motivation and relevant experience.
FALL 2007 (extendable to Spring and Summer 2008)

We are developing and designing new wearable technologies that augment and enhance the wearer's emotional-social intelligence skills. Our aim: novel technologies that enhance the social interaction abilities of people with autism spectrum conditions (including Asperger Syndrome). We have several exciting and challenging projects underway, such as Self-Cam and the monologue detector. The projects are a great opportunity to experiment with various affective and wearable sensors and applications. [[More information|http://affect.media.mit.edu/projectpages/esp/]]

Our ideal UROP candidate has strong programming and hardware skills and is energetic and self-motivated.

For more information contact Rana el Kaliouby (kaliouby AT media DOT mit DOT edu). Please include a resume and a paragraph describing your motivation and relevant experience.
FALL 2007 (extendable to Spring and Summer 2008)

Wearable technologies that sense your affective-cognitive state, from wearable cameras to physiological sensors, now exist. Yet there are no visualization or tagging applications available to deal with the vast amounts of affective data collected. This project will develop a platform that allows the visualization, segmentation and (machine or human) tagging of multiple affective channels, including video, audio and physiology. It could also allow synchronization of affective data from several persons (see the sketch after this posting). UROPs will explore [[Processing|http://processing.org/]].

Our ideal UROP candidate has strong programming skills and is energetic and self-motivated.

For more information contact Rana el Kaliouby (kaliouby AT media DOT mit DOT edu). Please include a resume and a paragraph describing your motivation and relevant experience.
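As a language-neutral illustration of the synchronization piece (the prototype itself would likely be built in Processing, as noted above), the Python sketch below resamples two differently-sampled affective channels onto a common time grid so they can be displayed and tagged side by side. The channel names and sample-and-hold scheme are assumptions for illustration.
{{{
# Sketch: put two affective channels (e.g. video tags and skin conductance) on a
# common timeline so they can be plotted and tagged together.
def align(channel, t0, t1, step=0.5):
    """channel: sorted list of (timestamp, value). Sample-and-hold onto a grid."""
    grid, out, i, last = t0, [], 0, None
    while grid <= t1:
        while i < len(channel) and channel[i][0] <= grid:
            last = channel[i][1]
            i += 1
        out.append((grid, last))
        grid += step
    return out

video_tags = [(0.0, "neutral"), (1.2, "smiling"), (2.8, "thinking")]
eda_uS     = [(0.0, 0.31), (0.5, 0.33), (1.0, 0.52), (1.5, 0.61), (2.0, 0.58)]
for (t, tag), (_, eda) in zip(align(video_tags, 0, 2), align(eda_uS, 0, 2)):
    print(f"t={t:.1f}s  face={tag:8s}  EDA={eda:.2f} uS")
}}}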
FALL 2007 (extendable to Spring and Summer 2008)

This project aims to develop a cell-phone application that monitors conversational signals, such as turn-taking and pauses in speech, to detect when a person is monologuing and notify him/her in real time (a minimal sketch of the pause-based idea appears below). One possible application is for people diagnosed with high-functioning autism or Asperger syndrome.

Our ideal UROP candidate has strong programming skills and is energetic and self-motivated. Experience in speech and/or signal processing is an asset.

For more information contact Rana el Kaliouby (kaliouby AT media DOT mit DOT edu). Please include a resume and a paragraph describing your motivation and relevant experience.
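A minimal sketch of the pause-based detection idea, assuming a mono audio signal as a NumPy array: frames whose energy exceeds a threshold count as speech, and an alert fires when speech continues beyond a set limit without a long enough pause. The frame size, thresholds and durations are illustrative assumptions rather than a tuned detector, and a phone implementation would work on streaming audio.
{{{
# Sketch: flag monologuing from frame energies. Thresholds are assumptions.
import numpy as np

def monologue_alerts(signal, sr=16000, frame_s=0.05,
                     energy_thresh=1e-3, min_pause_s=1.0, max_talk_s=60.0):
    """Return times (s) at which continuous talking exceeds max_talk_s."""
    frame = int(sr * frame_s)
    n = len(signal) // frame
    energies = [np.mean(signal[i*frame:(i+1)*frame] ** 2) for i in range(n)]
    alerts, talk_start, silent = [], None, 0.0
    for i, e in enumerate(energies):
        t = i * frame_s
        if e > energy_thresh:                 # speech frame
            silent = 0.0
            if talk_start is None:
                talk_start = t
            elif t - talk_start > max_talk_s:
                alerts.append(talk_start)
                talk_start = None             # report once per stretch
        else:                                 # silent frame
            silent += frame_s
            if silent >= min_pause_s:
                talk_start = None             # a real pause resets the clock
    return alerts

# Synthetic example: 90 s of "speech" (noise) with no pauses -> one alert near t=0.
sig = np.random.randn(16000 * 90) * 0.1
print(monologue_alerts(sig))
}}}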
FALL 2007 (extendable to Spring and Summer 2008)

Walk-Cam is a wearable chest-mounted camera that can be inward facing (toward the wearer's face) or outward facing. The UROP will build on our current Self-Cam prototype to achieve a cool-looking, collapsible, multi-orientation camera that can take quality images of the face. This project is a collaboration with the Creapole School of Design in Paris, France.

Our ideal UROP candidate has strong prototyping and hardware skills and is energetic and self-motivated. A background in industrial design is an asset.

For more information contact Rana el Kaliouby (kaliouby AT media DOT mit DOT edu). Please include a resume and a paragraph describing your motivation and relevant experience.
UROP Opportunity
ASAP (extendable to Spring and Summer 2007)
Media Laboratory

Faculty Supervisor: Rosalind W. Picard

Much of our work in the Affective Computing group involves "reading" the face: tracking facial movements and inferring a range of affective and cognitive states (such as interest and confusion). Real-time face detectors do exist, but they require a frontal view of the face and good lighting conditions. This project aims to (1) survey the latest face detectors and (2) develop an open-source, robust face detector. One proposed approach is to extend Intel's OpenCV face detection to deal with head rotation, but there are other possibilities (a baseline sketch appears below).

We are seeking energetic, self-motivated and qualified UROPs. Programming skills (ideally C/C++) are a must. Experience with machine vision, face detection or image processing is a plus.

For more information contact Rana el Kaliouby (kaliouby AT media DOT mit DOT edu). Please include a resume and a paragraph describing your motivation and relevant experience.
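For reference, the stock OpenCV frontal-face detector that this project would extend can be exercised with a few lines (shown here with the Python bindings for brevity; the project itself would likely be in C/C++). This is only the frontal, good-lighting baseline; handling head rotation is the open problem.
{{{
# Baseline frontal face detection with OpenCV's bundled Haar cascade.
# This is only the starting point; robustness to head rotation is the open problem.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)              # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                     minNeighbors=5, minSize=(60, 60))
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
}}}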
Peer-reviewed Publications
* Rana el Kaliouby and Peter Robinson (2005). The Emotional Hearing Aid: An Assistive Tool for Children with Asperger Syndrome. Universal Access in the Information Society, 4(2).
* Rana el Kaliouby and Peter Robinson (2005). Real-Time Vision for HCI, chapter Real-time Inference of Complex Mental States from Facial Expressions and Head Gestures, pages 181–200. Springer-Verlag.
* Rana el Kaliouby and Peter Robinson (2004). Designing a More Inclusive World, chapter The Emotional Hearing Aid: An Assistive Tool for Children with Asperger Syndrome, pages 163–172. London: Springer-Verlag.
* Rana el Kaliouby and Peter Robinson. Prosthetic versus Therapeutic Assistive Technologies: The Case of Autism. Assistive Technology. Under review.

Conference Proceedings
* R. el Kaliouby, R. W. Picard, A. Teeters, M. Goodwin (2007). Social-Emotional Technologies for ASD. International Meeting for Autism Research, Seattle, Washington, May 2007. [[Abstract Online|http://www.cevs.ucdavis.edu/Cofred/Public/Aca/WebSec.cfm?confid=281&webid=1514]]
* Alea Teeters, Rana el Kaliouby, and Rosalind W. Picard (2006). "~Self-Cam: Feedback from what would be your social partner." In Proceedings of ACM SIGGRAPH'06. [[PDF|selfcam_final.pdf]]
* Rana el Kaliouby, Alea Teeters and Rosalind W. Picard (2006). "An Exploratory ~Social-Emotional Prosthetic for Autism Spectrum Disorders." International Workshop on Wearable and Implantable Body Sensor Networks, April 3-5, 2006, MIT Media Lab, pages 3-4. [[PDF|http://affect.media.mit.edu/pdfs/06.kaliouby-teeters-picard-bsn.pdf ]] | [[Invited talk|bsntalk.pdf]] | [[Video|dualcam_processed_roz_barry_2.avi]]
* Rana el Kaliouby and Peter Robinson (2005). Generalization of a Computational Model of Mind-Reading. International Conference on Affective Computing and Intelligent Interaction.
* Rana el Kaliouby and Peter Robinson (2004). Mind Reading Machines: Automated Inference of Cognitive Mental States from Video. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics.
* Rana el Kaliouby and Peter Robinson (2004). Real-Time Inference of Complex Mental States from Facial Expressions and Head Gestures. In IEEE Workshop on Real-Time Vision for Human-Computer Interaction at the CVPR Conference. Won Best Paper Award from the Computer Lab Ring.
* Rana el Kaliouby and Peter Robinson. The Emotional Hearing Aid: An Assistive Tool for Autism. In Proceedings of the 10th International Conference on Human-Computer Interaction (HCII): Universal Access in HCI, volume 4, pages 68-72. Lawrence Erlbaum Associates.

Others
* R. el Kaliouby, R. W. Picard and S. Baron-Cohen (2006). Affective Computing and Autism. Progress in Convergence, Eds. W. S. Bainbridge and M. C. Roco, Annals of the New York Academy of Sciences 1093: 228-248, doi:10.1196/annals.1382.016. [[PDF|http://affect.media.mit.edu/pdfs/07.elkaliouby-picard-SBC-autismaffect.pdf]]
* Rana el Kaliouby (2005). Mind-reading Machines: The Automated Inference of Complex Mental States from Video. ~PhD Dissertation, Computer Laboratory, University of Cambridge. [[PDF|elkaliouby-Phd.pdf]] | [[Videos]]
Our autism work has been featured in the following:
* Wired Science (3 Oct 2007) - [[Premiere episode|http://www.pbs.org/kcet/wiredscience/]]
* MIT Zig Zag (4 May 2006) - [[Episode 4|http://web.mit.edu/zigzag/vid/episode4.html]]
* Wired News (14 April 2006) - [[Face Reader Bridges Autism Gap|http://www.wired.com/news/technology/medtech/0,70655-0.html?tw=rss.technology]]
* CNET News (4 April 2006) - [[MIT Group Develops Mind-reading Device|http://news.com.com/MIT+group+develops+mind-reading+device/2100-1008_3-6057638.html]]
* eMaxHealth.com (1 April 2006) - [[Helping Autistic People Communicate|http://www.emaxhealth.com/7/5297.html]]
* Hindustan Times (31 March 2006) - [[Now your computer could tell if you are boring and irritating|http://www.hindustantimes.com/news/181_1663716,00110002.htm]]
* The Boston Globe (31 March 2006) - [[Emotion Detectors|http://www.boston.com/news/local/articles/2006/03/31/emotion_detectors/]]
* Corriere Della Sera (30 March 2006) - [[Gli occhiali che tradiscono le emozioni|http://www.corriere.it/Primo_Piano/Scienze_e_Tecnologie/2006/03_Marzo/30/occhiali.shtml]]
* Slashdot (30 March 2006) - [[Device Developed To Help Socially Challenged|http://science.slashdot.org/article.pl?sid=06/03/30/1716234]]
* Sploid (30 March 2006) - [[New Device warns you when you're boring|http://www.sploid.com/news/2006/03/new_device_warn.php]]
* boingboing (29 March 2006) - [[Device tells you if you're boring|http://www.boingboing.net/2006/03/29/device_tells_you_if_.html]]
* NewScientist (26 March 2006) - [[Device Warns if You're Boring|http://www.newscientist.com/article/mg19025456.500-device-warns-you-if-youre-boring-or-irritating.html]]

Our work on computational models of mindreading has been featured in:
* BBC (27 June 2006) - [[Computers 'set to read our minds'|http://news.bbc.co.uk/1/hi/sci/tech/5116762.stm]]
* Times Online (27 June 2006) - [[You can already read minds. And soon computers will be able to as well|http://timesonline.typepad.com/technology/2006/06/you_can_already.html#more]]
* Discovery Channel (27 June 2006) - [[Emotionally sensitive computer coming soon|http://reports.discoverychannel.ca/servlet/an/discovery/1/20060626/discovery_computer_feelings_060626/20060626?hub=DiscoveryReport]]
* Reuters (26 June 2006) - [[Coming soon -- mind-reading computers|http://today.reuters.co.uk/news/newsArticle.aspx?type=internetNews&storyID=2006-06-25T232902Z_01_L23596655_RTRIDST_0_OUKIN-UK-SCIENCE-COMPUTERS.XML]]
* Earth Times (26 June 2006) - [[Scientists develop computer than can distinguish human emotions|http://www.earthtimes.org/articles/show/7334.html]]
* EFYTimes (26 June 2006) - [[Robots Invade Human Minda|http://www.efytimes.com/fullnews13.asp?edid=12557&magid=11]]
The development of robust social-emotional intelligence in machines involves essentially the same challenges that practitioners face when teaching people with autism about social and emotional understanding. These challenges include devising a "code-book" that maps a range of nonverbal cues to corresponding mental states, and defining "rules of conduct" for various social contexts. Both are complicated by the uncertainty and subtlety inherent in nonverbal communication, which make it hard to generalize to new situations and contexts. The problem is further complicated by the need to run in real time across varying interaction settings, head poses and lighting conditions.
This site was developed by [[Rana el Kaliouby]] using a ~JavaScript application called ~TiddlyWiki, and is primarily based on Andres Monroy Hernandez's webpage.
* [[Summary|SGER-summary.pdf]] - Summary of our proposal to NSF
* [[Description|SGER-description.pdf]] - Description of our proposal to NSF
esp AT media DOT mit DOT edu
[img[Media Lab|images/medialab.png]]
[[Massachusetts Institute of Technology|http://maps.google.com/maps?q=77+Massachusetts+Ave+Cambridge,+MA]]
The Media Laboratory
[[E15-120b|http://whereis.mit.edu/map-jpg?mapterms=E15-120b]]
[[20 Ames Street|http://maps.google.com/maps?oi=map&q=20+Ames+Street,+Cambridge,+MA]]
Cambridge, MA 02139
(617) 253-0611
[[MIT|http://www.mit.edu/]] [[Media Lab|http://www.media.mit.edu/]] | [[Affective Computing Group|http://affect.media.mit.edu/]]
Principal Investigators
* [[Rosalind Picard|http://web.media.mit.edu/~picard]]
* [[Rana el Kaliouby|http://web.media.mit.edu/~kaliouby/]]
* Matthew Goodwin, Associate Research Director, Groden Center

Research Assistants
* [[Mohammed Hoque|http://web.media.mit.edu/~mehoque/]]

Alumni
* [[Alea Teeters|http://web.media.mit.edu/~alea/]]

UROPs
* Nicole Berdy
* David Koh
* Mish Madsen
* Andrew Marecki
* Mia Shandell
* Angela Wang

Interns
* Jen-Lee Lin
* Tyler Wilson

Collaborators
* The Groden Center, Inc., Providence RI
* The Creapole School of Design, Paris, France
* Jocelyn Scheirer
Videos
* [[Presentation at h2.0 Spring 07 Symposium|http://www.media.mit.edu/events/movies/video.php?id=h20-2007-05-09-2]]
* [[Autism Wearables|http://www.media.mit.edu/affect/projectpages/esp/h20/affective_computing_may9_1.wmv]]
* [[Description by Rana el Kaliouby and Alea Teeters|http://www.media.mit.edu/affect/projectpages/esp/selfcam.mov]] (Courtesy of MIT [[ZigZag|http://web.mit.edu/zigzag/]])
* [[Estée Klar-Wolfond|www.media.mit.edu/events/eventpage.php?event=talk-410]] [[Poster|autismposter.pdf]] (The Autism Acceptance Project)

Exhibitions and Conferences
* [[Poster and live demo at IMFAR 2007|IMFAR-2007-poster.pdf]]
* Poster and live demo at SIGGRAPH 2006
* Mindreading Machines at the [[Royal Society Summer Science Exhibition|http://www.royalsoc.ac.uk/exhibit.asp?id=4683]] (2006)

Invited Lectures
* "MindReading Machines: Technologies with PeopleSense", Royal Society, 27 September 2007 (Rana el Kaliouby) [[webcast|http://www.royalsoc.ac.uk/page.asp?id=4110]] [[slides|07-royalsoc.pdf]]
* "Technologies for autism assessment, intervention and understanding", Child Psychiatry, Abbasia Hospital, Egypt (Rana el Kaliouby, June 2007)
* "Affective Tagging", Google Tech Talk (Rana el Kaliouby, February 2007)
* "Technology and Autism", Cog Lunch Series, Stanford University (Rana el Kaliouby, February 2007)
* "Measuring User Experience and Cognitive-Affective Mental State", NASA Ames Research Center (Rana el Kaliouby, February 2007)
* Autism Society of America conference (Alea Teeters, July 2006)
* Interagency Committee on Disability Research, Technology for Improving Cognitive Function Workshop (Matthew Goodwin, June 2006) [[slides|Goodwin ICDR Talk.pdf]]
* Body Sensor Networks (Rana el Kaliouby, April 2006) [[slides|bsntalk.pdf]]
This material is based upon work supported by the National Science Foundation under Grant No. 0555411 and the Things That Think Consortium. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
* [[Seagate|http://www.seagate.com]]
* [[myDejaview|http://www.mydejaview.com]]
* [[Google|http://www.google.com]]
* [[Adobe|http://www.adobe.com]]
Photos taken throughout this project are available in the public Flickr photostream [[esp_wearable|http://flickr.com/photos/esp_wearable]].
[[What's New]] [[About]] [[People]] [[Research]] [[Teaching]] [[Publications]] [[Presentations]] [[Press]] [[UROPs]] [[Funding]] [[Sponsors]] [[Contact]] [[This Site]]