Computer scientists at the University of East Anglia (UEA) have developed a new way of cloning facial expressions during live conversations to help us better understand what influences our behaviour when we communicate with others.
Published this month in the peer-reviewed journal Language and Speech, the new technique tracks facial expressions and head movements in real time during a video conference and maps these movements onto models of faces, producing a ‘cloned’ face.
These facial expressions and head movements can be manipulated live to alter the apparent expressiveness, identity, race, or even gender of a talker. Moreover, these visual cues can be manipulated such that neither participant in the conversation is aware of the manipulation.
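To give a rough sense of the idea (this is not the authors’ implementation, which fits detailed face models to live video), the sketch below treats a tracked facial shape as a deviation from the talker’s neutral face, scales that deviation to attenuate or exaggerate apparent expressiveness, and re-applies it to a different ‘target’ identity. The 68-landmark representation, the random placeholder data, and all function names are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical 2D landmark shapes, flattened to vectors (x1, y1, ..., xK, yK).
# In the real system a fitted face model tracks these from video; here we
# fabricate them so the sketch is self-contained and runnable.
K = 68                                   # number of facial landmarks (assumed)
rng = np.random.default_rng(0)

neutral_source = rng.normal(size=2 * K)  # source talker, neutral expression
neutral_target = rng.normal(size=2 * K)  # target ('clone') identity, neutral
tracked_frame = neutral_source + 0.1 * rng.normal(size=2 * K)  # one live frame

def clone_expression(tracked, neutral_src, neutral_tgt, gain=1.0):
    """Transfer the expression in `tracked` onto the target identity.

    The expression is modelled as the deviation of the tracked shape from
    the source talker's neutral shape; `gain` scales that deviation to
    attenuate (<1) or exaggerate (>1) apparent expressiveness.
    """
    expression_offset = tracked - neutral_src
    return neutral_tgt + gain * expression_offset

# Render the target identity with the source's expression, slightly damped.
cloned_shape = clone_expression(tracked_frame, neutral_source,
                                neutral_target, gain=0.8)
print(cloned_shape.shape)  # (136,) -> 68 landmark (x, y) pairs
```

Because identity lives in the neutral shape and expression in the per-frame offset, a gain of 1.0 on a different target identity changes who appears to be talking without changing how they move, which is what lets the researchers separate appearance from movement in their experiments.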
Developed by Dr Barry-John Theobald of UEA’s School of Computing Sciences, in collaboration with Dr Iain Matthews (Disney Research), Prof Steven Boker (University of Virginia) and Prof Jeffrey Cohn (University of Pittsburgh), the new facial expression cloning technique is already being trialled by psychologists in the US to challenge assumptions about how humans behave during conversations.
For example, it is well known that people move their heads differently when speaking to a woman than when speaking to a man. The new software has helped show that this difference is due not to the conversational partner’s appearance but to the way they move: if a person appears to be a woman but moves like a man, others respond with movements similar to those they make when speaking to a man.
The technique is also likely to have applications in the entertainment industry, where lifelike animated characters are required.
“Spoken words are supplemented with non-verbal visual cues to enhance the meaning of what we are saying, signify our emotional state, or provide feedback during a face-to-face conversation,” said Dr Theobald, lead author of the new paper. “Being able to manipulate these properties in a controlled manner allows us to measure precisely their effects on behaviour during conversation.
“This exciting new technology allows us to manipulate faces in this way for the first time. Many of these effects would otherwise be impossible to achieve, even using highly skilled actors.”
The work is funded by the Engineering and Physical Sciences Research Council (EPSRC) and the National Science Foundation (NSF).
‘Mapping and Manipulating Facial Expression’ by Barry-John Theobald (UEA), Iain Matthews (Weta Digital Ltd, New Zealand), Michael Mangini (University of Notre Dame, US), Jeffrey Spies (University of Virginia), Timothy Brick (University of Virginia), Jeffrey Cohn (University of Pittsburgh) and Steven Boker (University of Virginia) is published in the June issue of Language and Speech.