How the brain sorts babble into auditory streams

Known as “the cocktail party problem,” the ability of the brain’s auditory processing centers to sort a babble of different sounds, like cocktail party chatter, into identifiable individual voices has long been a mystery. Now, researchers analyzing how both humans and monkeys perceive sequences of tones have created a model that can predict the central features of this process, offering a new approach to studying its mechanisms.

The research team of Christophe Micheyl, Biao Tian, Robert Carlyon, and Josef Rauschecker published their findings in the October 6, 2005, issue of Neuron.

For both humans and monkeys, the researchers used an experimental method in which they played repetitive triplet sequences of tones alternating between two frequencies. Researchers know that when the frequencies are close together and alternate slowly, the listener perceives a single stream that sounds like a galloping horse. However, when the tones are at widely separated frequencies or played in rapid succession, the listener perceives two separate streams of beeps.
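The stimulus described above can be sketched in code. This is a minimal illustration, not the authors' actual stimulus-generation procedure: it assumes the common ABA-triplet form of the paradigm (tone A, tone B, tone A, then a silent slot), and the specific frequencies, durations, and the `aba_sequence` helper are hypothetical choices for demonstration.

```python
import numpy as np

def tone(freq_hz, dur_s, sr=44100):
    """Pure tone with short raised-cosine ramps to avoid onset/offset clicks."""
    t = np.arange(int(sr * dur_s)) / sr
    x = np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.005 * sr)  # 5 ms ramp
    env = np.ones_like(x)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return x * env

def aba_sequence(f_a=1000.0, semitone_sep=6.0, tone_dur=0.1,
                 n_triplets=10, sr=44100):
    """Repeating A-B-A-(silence) triplets.

    Small semitone_sep and slow rates tend to be heard as one
    'galloping' stream; large separations or fast rates tend to
    split into two separate streams of beeps.
    """
    f_b = f_a * 2 ** (semitone_sep / 12)   # B tone, semitone_sep above A
    gap = np.zeros(int(sr * tone_dur))     # silent slot completes the triplet
    triplet = np.concatenate([tone(f_a, tone_dur, sr),
                              tone(f_b, tone_dur, sr),
                              tone(f_a, tone_dur, sr),
                              gap])
    return np.tile(triplet, n_triplets)
```

Varying `semitone_sep` and `tone_dur` while holding everything else fixed is what lets experimenters map out the single-stream, two-stream, and ambiguous intermediate regions mentioned in the next paragraph.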

Importantly, at intermediate frequency separations or speeds, after a few seconds the listeners’ perceptions can shift from the single galloping sound to the two streams of beeps. The researchers could use this phenomenon to explore the neurobiology of auditory stream perception, because it let them examine how perception changed while the stimulus itself stayed the same.

In the human studies, Micheyl, working in the MIT laboratory of Andrew Oxenham, asked subjects to listen to such tone sequences and signal when their perceptions changed. The researchers found that the subjects showed the characteristic perception changes at the intermediate frequency differences and speeds.

Then, Tian, working in Rauschecker’s laboratory at Georgetown University Medical Center, recorded signals from neurons in the auditory cortex of monkeys as the same sequences of tones were played to the animals. These neuronal signals could be used to indicate the monkeys’ perceptions of the tone sequences.

From the monkey data, the researchers developed a model that predicts when human listeners’ perception shifts between one and two auditory streams across different frequency separations and tone presentation rates.

“Using this approach, we demonstrate a striking correspondence between the temporal dynamics of neural responses to alternating-tone sequences in the primary cortex…of awake rhesus monkeys and the perceptual build-up of auditory stream segregation measured in humans listening to similar sound sequences,” concluded the researchers.

In a commentary on the paper in the same issue of Neuron, Michael DeWeese and Anthony Zador wrote that the new approach “promises to elucidate the neural mechanisms underlying both our conscious experience of the auditory world and our impressive ability to extract useful auditory streams from a sea of distracters.”

From Cell Press

The material in this press release comes from the originating research organization. Content may be edited for style and length.
