New research clarifies that emotional intelligence involves much more than reading people’s micro-expressions. When it comes to reading a person’s state of mind, the visual context of background and action is as important as facial expressions and body language.
Researchers from the University of California, Berkeley give the example of actor James Franco in the Oscar-nominated movie “127 Hours.” In one scene, Franco looks vaguely happy as he records a video diary. But when the camera zooms out, the audience sees that his arm is crushed under a boulder and that his smile belies his agony.
The new viewpoint challenges decades of research positing that emotional intelligence and recognition rest largely on the ability to read facial micro-expressions. These expressions were believed to indicate happiness, sadness, anger, fear, surprise, disgust, contempt and other positive and negative moods and sentiments.
The new study, to appear online this week in the journal Proceedings of the National Academy of Sciences, suggests emotional detection requires more than a facial “read.”
“Our study reveals that emotion recognition is, at its heart, an issue of context as much as it is about faces,” said lead author Zhimin Chen, a doctoral student in psychology at UC Berkeley.
In the study, researchers blurred the faces and bodies of actors in dozens of muted clips from Hollywood movies and home videos. Despite the characters’ virtual invisibility, hundreds of study participants were able to accurately read their emotions by examining the background and how they were interacting with their surroundings.
The “affective tracking” model that Chen created for the study allows researchers to track how people rate the moment-to-moment emotions of characters as they view videos.
Chen’s method is capable of collecting large quantities of data in a short time, and could eventually be used to gauge how people with disorders like autism and schizophrenia read emotions in real time, and help with their diagnoses.
“Some people might have deficits in recognizing facial expressions, but can recognize emotion from the context,” Chen said. “For others, it’s the opposite.”
Moreover, the findings, based on statistical analyses of the ratings collected, could inform the development of facial recognition technology.
“Right now, companies are developing machine learning algorithms for recognizing emotions, but they only train their models on cropped faces and those models can only read emotions from faces,” Chen said. “Our research shows that faces don’t reveal true emotions very accurately and that identifying a person’s frame of mind should take into account context as well.”
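The article does not describe any model architecture, but the idea Chen raises — fusing facial and contextual cues rather than training on cropped faces alone — could be sketched as a two-stream network. Everything below (the tiny stand-in encoders, the layer sizes, the seven-emotion output) is an illustrative assumption, not the authors’ implementation:

```python
# A speculative sketch (not from the study) of a context-aware emotion model:
# one branch encodes the cropped face, another encodes the full frame, and
# their features are fused before the emotion prediction.
import torch
import torch.nn as nn

class TwoStreamEmotionNet(nn.Module):
    def __init__(self, n_emotions: int = 7):  # 7 basic emotions, assumed
        super().__init__()
        def encoder():  # tiny CNN as a stand-in for a real backbone
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.face_stream = encoder()     # sees only the cropped face
        self.context_stream = encoder()  # sees the whole frame (context)
        self.head = nn.Linear(16 + 16, n_emotions)

    def forward(self, face: torch.Tensor, frame: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.face_stream(face), self.context_stream(frame)], dim=1)
        return self.head(fused)

# Adaptive pooling lets the two streams accept different input resolutions.
logits = TwoStreamEmotionNet()(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 7])
```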
For the study, Chen and study senior author Dr. David Whitney, a UC Berkeley vision scientist and psychology professor, tested the emotion recognition abilities of nearly 400 young adults. The visual stimuli they used were video clips from various Hollywood movies as well as documentaries and home videos that showed emotional responses in more natural settings.
Study participants went online to view and rate the video clips. A rating grid was superimposed over the video so that researchers could track each study participant’s cursor as they processed the visual information and rated the characters’ moment-to-moment emotions.
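The article does not publish the tracking code, but the basic mechanics — sampling the cursor over a 2D rating grid and converting screen coordinates into affect ratings — might look roughly like the minimal sketch below. The valence-arousal axes, the grid dimensions and the sample times are assumptions for illustration:

```python
# A minimal sketch of moment-to-moment affect rating: cursor positions on a
# 2D grid are sampled over time and mapped to (valence, arousal) coordinates.
from dataclasses import dataclass

@dataclass
class AffectSample:
    t: float        # seconds into the clip
    valence: float  # -1 (negative) .. +1 (positive), assumed axis
    arousal: float  # -1 (calm) .. +1 (excited), assumed axis

def cursor_to_affect(x_px: float, y_px: float, width: int, height: int) -> tuple[float, float]:
    """Map a cursor position on the rating grid to (valence, arousal) in [-1, 1]."""
    valence = 2.0 * (x_px / width) - 1.0
    arousal = 1.0 - 2.0 * (y_px / height)  # screen y grows downward
    return valence, arousal

# Example: one participant's cursor sampled at three moments of a clip.
samples = [
    AffectSample(t, *cursor_to_affect(x, y, 800, 600))
    for t, x, y in [(0.5, 400, 300), (1.0, 620, 180), (1.5, 150, 420)]
]
for s in samples:
    print(f"t={s.t:.1f}s valence={s.valence:+.2f} arousal={s.arousal:+.2f}")
```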
In the first of three experiments, 33 study participants viewed movie-clip interactions between two characters, one of whom was blurred, and rated the perceived emotions of the blurred character. The results showed that participants inferred how the invisible character was feeling based not only on the interpersonal interaction, but also on what was happening in the background.
Next, approximately 200 study participants viewed video clips showing interactions under three conditions: one in which everything was visible, another in which the characters were blurred and another in which the context was blurred. The results showed that context was as important as the faces themselves for decoding emotions.
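One plausible way to quantify “context was as important as the face” is to correlate each condition’s continuous ratings with ratings of the fully visible clips. The sketch below illustrates that comparison on synthetic data; it is not the paper’s actual analysis, and the rating series are made up:

```python
# Hedged sketch: compare how well each blurred condition tracks the ratings
# given to the fully visible clips, using Pearson correlation on fake data.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
fully_visible = np.sin(t)                                      # stand-in "reference" ratings
context_only  = np.sin(t) + 0.3 * rng.standard_normal(t.size)  # characters blurred
face_only     = np.sin(t) + 0.3 * rng.standard_normal(t.size)  # context blurred

def tracking_accuracy(condition: np.ndarray, reference: np.ndarray) -> float:
    """Pearson correlation between a condition's ratings and the reference."""
    return float(np.corrcoef(condition, reference)[0, 1])

print("context-only vs full:", tracking_accuracy(context_only, fully_visible))
print("face-only   vs full:", tracking_accuracy(face_only, fully_visible))
```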
In the final experiment, 75 study participants viewed clips from documentaries and home videos so that researchers could compare emotion recognition in more naturalistic settings. Again, context was as critical for inferring the emotions of the characters as were their facial expressions and gestures.
“Overall, the results suggest that context is not only sufficient to perceive emotion, but also necessary to perceive a person’s emotion,” said Whitney. “Face it, the face is not enough to perceive emotion.”
Source: University of California, Berkeley