Studies of the chameleon effect confirm what salespeople, tricksters, and Lotharios have long known: Imitating another person’s postures and expressions is an important social lubricant.
But how do we learn to imitate with any accuracy when we can’t see our own facial expressions and we can’t feel the facial expressions of others?
Richard Cook of City University London, Alan Johnston of University College London, and Cecilia Heyes of the University of Oxford investigate possible mechanisms underlying our ability to imitate in two studies published in Psychological Science, a journal of the Association for Psychological Science.
In the first experiment, the researchers videotaped participants as they recited jokes and then asked them to imitate four randomly selected facial expressions from their videos. When they achieved what they perceived to be the target expression, the participants recorded the attempt with the click of a computer mouse.
A computer program evaluated the accuracy of participants’ imitation attempts against a map of the target expression. In contrast to previous studies that relied on subjective assessments, this new technology allowed for automated and objective measurement of imitative accuracy.
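For readers curious how imitation accuracy can be scored automatically rather than judged by eye, the sketch below illustrates one plausible approach, assuming each expression is reduced to a set of 2-D facial-landmark coordinates. The representation, function names, and distance measure here are illustrative assumptions; the paper does not specify that its program worked this way.

```python
import numpy as np

def normalize_landmarks(points):
    """Center an (N, 2) array of facial landmarks on its centroid and scale it
    to unit size, removing differences in head position and overall face size
    before expressions are compared. (Illustrative assumption, not the authors' code.)"""
    centered = points - points.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum())
    return centered / scale

def imitation_error(target, attempt):
    """Mean Euclidean distance between corresponding landmarks of the target
    expression and the imitation attempt; lower scores mean a closer imitation."""
    t = normalize_landmarks(np.asarray(target, dtype=float))
    a = normalize_landmarks(np.asarray(attempt, dtype=float))
    return np.linalg.norm(t - a, axis=1).mean()

# Toy example: a three-landmark "map" of a target expression and a slightly-off attempt.
target = [[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]]
attempt = [[0.1, 0.0], [1.0, 0.1], [0.5, 0.9]]
print(f"imitation error: {imitation_error(target, attempt):.4f}")
```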
In one experiment, the researchers found that participants who could see their imitation attempts through visual feedback improved over successive attempts. But participants who had to rely solely on proprioception, the sense of the relative position of their own facial features, became progressively less accurate.
These results are consistent with the associative sequence-learning model, which holds that our ability to imitate accurately depends on learned associations between what we see (in the mirror or through feedback from others) and what we feel.
Cook and colleagues conclude that contingent visual feedback may be a useful component of rehabilitation and skill-training programs designed to improve individuals' ability to imitate facial gestures.