emotional health.
http://www.media.mit.edu/research/groups/affective-computing
These projects are fascinating and hold real promise for the future of
mental health. For example:
"
Health Interventions Using Mobile Phones
Rich Fletcher, Rosalind Picard, Sharon Tam, Javier Hernandez Rivera and
Micah Eckhardt
We are developing a mobile phone-based platform to assist people with
chronic diseases, panic-anxiety disorders or addictions. Making use of
wearable, wireless biosensors, the mobile phone uses pattern analysis
and machine learning algorithms to detect specific physiological states
and perform automatic interventions in the form of text/images plus
sound files and social networking elements. We are currently working
with the Veterans Administration drug rehabilitation program involving
veterans with PTSD.
"
or
"
FaceSense: Affective-Cognitive State Inference from Facial Video
Daniel McDuff, Rana el Kaliouby, Abdelrahman Nasser Mahmoud, Youssef
Kashef, M. Ehsan Hoque, Matthew Goodwin and Rosalind W. Picard
People express and communicate their mental states—such as emotions,
thoughts, and desires—through facial expressions, vocal nuances,
gestures, and other non-verbal channels. We have developed a
computational model that enables real-time analysis, tagging, and
inference of cognitive-affective mental states from facial video. This
framework combines bottom-up, vision-based processing of the face (e.g.,
a head nod or smile) with top-down predictions of mental-state models
(e.g., interest and confusion) to interpret the meaning underlying head
and facial signals over time. Our system tags facial expressions, head
gestures, and affective-cognitive states at multiple spatial and
temporal granularities in real time and offline, in both natural
human-human and human-computer interaction contexts. A version of this
system is being made available commercially by a spin-off Affectiva
(http://www.affectiva.com), indexing emotion from faces. Applications
range from measuring people's experiences to training tools for people on
the autism spectrum and people with nonverbal learning disabilities.
"
or
"
Tell me more...
M. Ehsan Hoque
Design and implementation of computer systems that can understand and
respond to the full range of human communication (e.g., facial
expressions, intonation in speech) can change the way we interact with
machines today. In this project, we are currently envisioning the
development of an avatar that encourages interaction and can sense
affect from users by analyzing facial expressions and intonation in
speech in appropriate context. The potential application of our demo is
to help people with autism spectrum disorder with their communication
needs. With this avatar and our sensing technologies, we could
selectively probe responses to individual interaction variables by
turning off certain sensing abilities while leaving the rest intact. It
is also possible to expand our technology to other areas where
recognizing users' affect from multimodal cues is important.
"
This last idea could be used in many different ways. It's not just autistic
people who have trouble communicating with 'normal' people; each person has
their own way of communicating. For two people talking over the internet,
an avatar may sometimes do a better job of ensuring good communication than
a telepresence video call.