Researchers have identified a single, universal facial expression that is interpreted across many cultures as the embodiment of negative emotion.
The look proved identical for native speakers of English, Spanish, Mandarin Chinese and American Sign Language (ASL).
It consists of a furrowed brow, pressed lips and raised chin, and because we make it when we convey negative sentiments, such as “I do not agree,” researchers are calling it the “not face.”
The study, published in the journal Cognition, also reveals that our facial muscles contract to form the “not face” at the same frequency at which we speak or sign words in a sentence. That is, we all instinctively make the “not face” as if it were part of our spoken or signed language.
What’s more, the researchers discovered that ASL speakers sometimes make the “not face” instead of signing the word “not”–a use of facial expression in ASL that was previously undocumented.
“To our knowledge, this is the first evidence that the facial expressions we use to communicate negative moral judgment have been compounded into a unique, universal part of language,” said Aleix Martinez, cognitive scientist and professor of electrical and computer engineering at The Ohio State University.
“Where did language come from? This is a question that the scientific community has grappled with for a very long time,” he continued. “This study strongly suggests a link between language and facial expressions of emotion.”
Previously, Martinez and his team had used computer algorithms to identify 21 distinct emotional expressions–including complex ones that are combinations of more basic emotions. “Happy” and “disgusted,” for instance, can be compounded into “happily disgusted,” a face that we might make when watching a gross-out comedy movie or when an adorable baby poops in its diaper.
For this new study, the researchers hypothesized that if a universal “not face” existed, it was likely to be a combination of three basic facial expressions that are universally accepted to indicate moral disagreement: anger, disgust and contempt.
Why focus on negative expressions? Charles Darwin believed that the ability to communicate danger or aggression was key to human survival long before we developed the ability to talk, Martinez explained. So the researchers suspected that if any truly universal facial expressions of emotion exist, then the expression for disapproval or disagreement would be the easiest to identify.
To test the hypothesis, they sat 158 Ohio State students in front of a digital camera. The students were filmed and photographed as they had a casual conversation with the person behind the camera in their native language.
The students belonged to four groups, which were chosen to represent a wide variety of grammatical structures. English is a Germanic language, while Spanish is a Romance language derived from Latin; Mandarin Chinese descends from Middle Chinese, and its modern standard form was codified early in the 20th century. Like other forms of sign language, ASL combines hand gestures, head and body movements and facial expressions to communicate individual words or phrases.
The researchers were looking for a facial “grammatical marker,” a facial expression that determines the grammatical function of a sentence. For example, in the sentence “I am not going to the party,” there is a grammatical marker of negation: “not.” Without it, the meaning of the sentence completely changes: “I am going to the party.”
If the grammatical marker of negation is universal, the researchers reasoned, then all the study participants would make similar facial expressions when using that grammatical marker, regardless of which language they were speaking or signing. They should all make the same “not face” in conjunction with–or in lieu of–the spoken or signed marker of negation.
The tests went like this: The students either memorized and recited negative sentences that the researchers had written for them ahead of time, or the students were prompted with questions that were likely to elicit disagreement, such as “A study shows that tuition should increase 30 percent. What do you think?”
In all four groups–speakers of English, Spanish, Mandarin and ASL–the researchers identified clear grammatical markers of negation. The students’ answers translated to statements like “That’s not a good idea,” and “They should not do that.”
The researchers manually tagged images of the students speaking, frame by frame, to show which facial muscles were moving and in which directions. Then computer algorithms searched the thousands of resulting frames to find commonalities among them.
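As a rough illustration of what that second step can look like in code (a minimal sketch under my own assumptions, not the team’s actual software), suppose each hand-tagged frame from a negative sentence is stored as a set of facial action units; finding the commonalities then amounts to counting which units show up in most of those frames:

```python
from collections import Counter

# Hypothetical per-frame annotations: each frame of a negative sentence is
# tagged with the facial action units (AUs) observed in it. This mirrors the
# manual frame-by-frame tagging described above, not the study's data format.
negative_frames = [
    {"brow_furrow", "lip_press"},
    {"brow_furrow", "chin_raise", "lip_press"},
    {"brow_furrow", "chin_raise"},
    # ... thousands more frames in the real study
]

def common_action_units(frames, min_fraction=0.6):
    """Return the AUs that appear in at least `min_fraction` of the frames."""
    counts = Counter(au for frame in frames for au in frame)
    threshold = min_fraction * len(frames)
    return {au for au, n in counts.items() if n >= threshold}

print(common_action_units(negative_frames))
# e.g. {'brow_furrow', 'chin_raise', 'lip_press'} -> candidate "not face"
```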
A “not face” emerged: the furrowed brows of “anger” combined with the raised chin of “disgust” and the pressed-together lips of “contempt.” Regardless of language–and regardless of whether they were speaking or signing–the participants’ faces displayed these same three muscle movements when they communicated negative sentences.
Computer analysis also compared the tempo at which the students’ facial muscles moved.
Here’s why: Human speech typically runs at three to eight syllables per second, that is, 3-8 Hz (hertz, a measure of frequency). Researchers believe that the human brain is wired to recognize grammatical constructs that fall within that frequency band as language.
Martinez and his team reasoned that if all the students’ facial muscles moved to make the “not face” within that same frequency band, then the face itself likely functions as a universal grammatical marker of language.
In the tests, native English speakers made the “not face” at a frequency of 4.33 Hz, Spanish speakers at 5.23 Hz, and Mandarin speakers at 7.49 Hz. Speakers of ASL made the face at a frequency of 5.48 Hz. All frequencies were within the 3-8 Hz range of spoken communication, which strongly suggests that the facial expression is an actual grammatical marker, Martinez said.
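A back-of-the-envelope version of that check (the numbers below are invented for illustration, not taken from the study’s data) is simply the number of facial muscle movement events in an utterance divided by the utterance’s duration, tested against the 3-8 Hz speech band:

```python
SPEECH_BAND_HZ = (3.0, 8.0)  # typical syllable rate of human speech

def production_rate(event_count: int, duration_s: float) -> float:
    """Events per second (Hz) for facial muscle movements in one utterance."""
    return event_count / duration_s

def within_speech_band(rate_hz: float) -> bool:
    low, high = SPEECH_BAND_HZ
    return low <= rate_hz <= high

# Illustrative (invented) utterance: 13 movement events over 3.0 seconds.
rate = production_rate(13, 3.0)
print(round(rate, 2), within_speech_band(rate))  # 4.33 True
```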
Also, something truly unique emerged from the ASL-signing students: they used the facial expression in a different way, as if it were itself the grammatical marker in the signed sentence.
People sometimes signed the word “not.” Other times, they just shook their head “no” when they got to the part of the sentence where they would have signed “not.” Both are accepted ways to communicate negation in ASL.
But sometimes, speakers didn’t make the sign for “not,” nor did they shake their head. They just made the “not face,” as if the face itself counted explicitly as a marker of negation in the sentence.
This is the first time researchers have documented a third way that users of sign language say “not”: just by making the face.
“This facial expression not only exists, but in some instances, it is the only marker of negation in a signed sentence,” Martinez said. “Sometimes the only way you can tell that the meaning of the sentence is negative is that the person made the ‘not face’ when they signed it.”
Manual analysis of the facial expressions was painstaking, Martinez admitted, but now that he and his team have shown that the experiment works, they hope to make the next phase of the project fully automatic, with new algorithms that will extract and analyze facial movements without human help. They’re building those algorithms now.
Once they finish, they will take a “big data” approach to further explore the origins of language. First, they’ll analyze 1,000 hours of YouTube video of people talking, which corresponds to around 100 million still frames. Ultimately, they want to amass 10,000 hours of data, or 1 billion frames.
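Those frame counts follow from straightforward arithmetic, assuming video at roughly 30 frames per second (the frame rate is an assumption for illustration, not a figure from the article):

```python
FPS = 30  # assumed frame rate; not stated in the article

frames_1k_hours = 1_000 * 3_600 * FPS
frames_10k_hours = 10_000 * 3_600 * FPS

print(f"{frames_1k_hours:,}")   # 108,000,000 -> "around 100 million" frames
print(f"{frames_10k_hours:,}")  # 1,080,000,000 -> about 1 billion frames
```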
They also hope to identify the facial expressions that go along with other grammatical markers, including positive ones.
“That will likely take decades,” Martinez said. “Most expressions don’t stand out as much as the ‘not face.’”