Psychologists find that head movement is more important than gender in nonverbal communication

May 21, 2009 - It is well known that people use head motion during conversation to convey a range of meanings and emotions, and that women use more active head motion when conversing with each other than men do when talking with each other.

When women and men converse together, the men use a little more head motion and the women use a little less. What has been unclear is whether they adapt because of gender-based expectations or simply in response to the movements they perceive from each other.

What would happen if you could change the apparent gender of a conversant while keeping all of the motion dynamics of head movement and facial expression?

Using new videoconferencing technology, a team of psychologists and computer scientists led by Steven Boker, a professor of psychology at the University of Virginia, was able to switch the apparent gender of study participants during conversation and found that head motion was more important than gender in determining how people coordinate with each other as they talk.

The scientists found that gender-based social expectations are unlikely to be the source of reported gender differences in the way people coordinate their head movements during two-way conversation.

The researchers used synthesized faces, known as avatars, in videoconferences with naïve participants, who believed they were conversing onscreen with an actual person rather than a synthetic version of a person.

In some conversations, the researchers changed the gender of the avatar and the vocal pitch of its voice, while still maintaining the real conversant's actual head movements and facial expressions, convincing naïve participants that they were speaking with, for example, a man when they were in fact speaking with a woman, or vice versa.

“We found that people simply adapt to each other’s head movements and facial expressions, regardless of the apparent sex of the person they are talking to,” Boker said. “This is important because it indicates that how you appear is less important than how you move when it comes to what other people feel when they speak with you.”

He will present the findings Sunday at the annual convention of the Association for Psychological Science in San Francisco. A paper detailing the results is scheduled for publication in the Journal of Experimental Psychology: Human Perception and Performance.

The study, funded by the National Science Foundation, used a low-bandwidth, high-frame-rate videoconferencing technology to record and recreate facial expressions, in order to see how people alter their behavior in response to the slightest changes in another person's expression. The U.Va.-based team also includes researchers at the University of Pittsburgh, the University of East Anglia, Carnegie Mellon University and Disney Research.

A video demonstration is available online at: http://faculty.virginia.edu/humandynamicslab/.

The technology uses statistical representations of a person's face to track and reconstruct that face. This allows the principal components of facial expression, only dozens in number, to be transmitted as a close rendition of the actual face. It's a sort of connect-the-dots fabrication that can be transmitted frame by frame in near-real time.
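As a rough illustration of the idea, the sketch below shows one way a principal-components face coding could work, assuming tracked facial-landmark coordinates as input; the landmark count, component count, and function names are hypothetical, not taken from the team's system.

# Hypothetical sketch: compress facial-landmark frames with PCA so that only
# a few dozen coefficients per frame need to be transmitted.
import numpy as np
from sklearn.decomposition import PCA

N_LANDMARKS = 68          # assumed number of tracked facial points
N_COMPONENTS = 30         # "only dozens" of principal components

# Training data: many example frames, each a flattened (x, y) landmark vector.
# A real system would get these from a face tracker; random stand-ins here.
training_frames = np.random.rand(5000, N_LANDMARKS * 2)

pca = PCA(n_components=N_COMPONENTS).fit(training_frames)

def encode(frame_landmarks):
    # Project one frame onto the learned components (what gets transmitted).
    return pca.transform(frame_landmarks.reshape(1, -1))[0]

def decode(coefficients):
    # Reconstruct an approximate frame from the transmitted coefficients.
    return pca.inverse_transform(coefficients.reshape(1, -1))[0]

# Per frame, only N_COMPONENTS numbers cross the network instead of 136 raw
# coordinates, which is what keeps the bandwidth low.
reconstructed = decode(encode(training_frames[0]))

The expensive step, learning the components, happens once; each video frame then reduces to a short vector that can be sent and re-rendered at high frame rates.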

Boker and his team are trying to understand how people interact during conversation, and how factors such as gender or race may alter the dynamics of a conversation. To do so, they needed a way to capture facial expressions people use when conversing.

“From a psychological standpoint, our interest is in how people interact and how they coordinate their facial expressions as they talk with one another, such as when one person nods while speaking or listening and the other person likewise nods,” Boker said.

It is this “mirroring process” of coordination that helps people to feel a connection with each other.

“When I coordinate my facial expressions or head movements with yours, I activate a system that helps me empathize with your feelings,” Boker said.
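The release does not describe how such coordination is measured, but one standard way to quantify it is to correlate the two conversants' head-movement traces at a range of time lags; the sketch below, which uses made-up head-pitch data, only illustrates that general approach.

# Hypothetical sketch: estimate how strongly, and at what delay, two people's
# head movements track each other, using lagged correlation of movement traces.
import numpy as np

def lagged_correlation(a, b, max_lag):
    # Correlation of a[t] with b[t + lag]; positive lag means b follows a.
    lags = np.arange(-max_lag, max_lag + 1)
    corrs = []
    for lag in lags:
        if lag >= 0:
            x, y = a[: len(a) - lag], b[lag:]
        else:
            x, y = a[-lag:], b[: len(b) + lag]
        corrs.append(np.corrcoef(x, y)[0, 1])
    return lags, np.array(corrs)

# Stand-in data: person B roughly echoes person A's head pitch about 200 ms
# later (6 frames at 30 frames per second), plus noise.
rng = np.random.default_rng(0)
a = rng.standard_normal(900)                        # 30 seconds of samples
b = np.roll(a, 6) + 0.5 * rng.standard_normal(900)

lags, corrs = lagged_correlation(a, b, max_lag=30)
print("peak correlation", round(corrs.max(), 2), "at lag", lags[corrs.argmax()], "frames")

A strong peak at a short positive lag would suggest that one partner is following the other's movements rather than the two moving independently.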

The technology the team developed further allows them to map the facial expressions of one person onto the face of another in a real-time videoconference. In this way they can change the apparent gender or race of a participant and closely track how a naïve participant reacts when speaking to a woman, say, as opposed to a man.

“In this way we can distinguish between how people coordinate their facial expressions and what their social expectation is,” Boker said.
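Continuing the earlier sketch, expression transfer onto a different face can be illustrated roughly as follows; the two-model setup is an assumption for illustration (a production system would align the two models' components explicitly), not the team's actual pipeline.

# Hypothetical sketch: drive person B's avatar with person A's expression by
# reusing A's per-frame coefficients in B's face model.
import numpy as np
from sklearn.decomposition import PCA

N_LANDMARKS = 68
N_COMPONENTS = 30

# Assume each person has a PCA model fitted to their own tracked frames,
# recorded so that the two models' components are roughly comparable.
frames_a = np.random.rand(5000, N_LANDMARKS * 2)   # stand-in training data
frames_b = np.random.rand(5000, N_LANDMARKS * 2)

model_a = PCA(n_components=N_COMPONENTS).fit(frames_a)
model_b = PCA(n_components=N_COMPONENTS).fit(frames_b)

def transfer_expression(frame_a):
    # Encode A's live frame, then decode it with B's mean face and components,
    # so the rendered avatar looks like B while moving like A.
    coeffs = model_a.transform(frame_a.reshape(1, -1))
    return model_b.inverse_transform(coeffs)[0]

avatar_frame = transfer_expression(frames_a[0])

Because only the motion coefficients are carried over, the avatar keeps B's appearance, and thus B's apparent gender or race, while reproducing A's head movements and expressions frame by frame, which is the kind of manipulation the experiment relied on.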

###

Invited talk web page: http://www.psychologicalscience.org/convention/program_detail.cfm?abstract_id=15234

