Brain Responds Differently to Human and AI Voices, Despite Difficulty Distinguishing Them

A new study has found that while people struggle to differentiate between human and AI-generated voices, their brains respond differently to each type of voice. The research, conducted by Christine Skjegstad and Professor Sascha Frühholz from the University of Oslo, Norway, sheds light on the potential cognitive and social implications of AI voice technology.

Human Perception of AI and Human Voices

In the study, presented at the Federation of European Neuroscience Societies (FENS) Forum 2024, 43 participants listened to human and AI-generated voices expressing five different emotional states: neutrality, anger, fear, happiness, and pleasure. They were asked to identify the voices as synthetic or natural while their brain activity was monitored using functional magnetic resonance imaging (fMRI). Participants correctly identified human voices only 56% of the time and AI voices 50.5% of the time, indicating that both types of voice were similarly difficult to identify.

The results suggest that people tend to perceive neutral voices as more AI-like, with female AI neutral voices being identified correctly more often than male AI neutral voices. In contrast, happy human voices were more likely to be identified as human. Both AI and human neutral voices were perceived as the least natural, trustworthy, and authentic, while happy human voices were perceived as the most natural, trustworthy, and authentic.

Brain Responses to AI and Human Voices

Despite participants' difficulty in distinguishing AI from human voices, the brain imaging revealed that human voices elicited stronger responses in areas associated with memory (right hippocampus) and empathy (right inferior frontal gyrus). AI voices, by contrast, elicited stronger responses in areas related to error detection (right anterior mid-cingulate cortex) and attention regulation (right dorsolateral prefrontal cortex).

Ms Skjegstad said: “While we are not very good at identifying human from AI voices, there does seem to be a difference in the brain’s response. AI voices may elicit heightened alertness while human voices may elicit a sense of relatedness.”

The researchers plan to further investigate whether personality traits, such as extraversion or empathy, make people more or less sensitive to noticing the differences between human and AI voices.

Professor Richard Roche, chair of the FENS Forum communication committee and Deputy Head of the Department of Psychology at Maynooth University, Ireland, highlighted the importance of this research, saying it will help clarify the potential cognitive and social implications of AI voice technology and support the development of policies and ethical guidelines.

While the risks of AI voice technology being used for scams are evident, there are also potential benefits, such as providing voice replacements for individuals who have lost their natural voice or using AI voices in therapy for certain mental health conditions.
