Scientists at the University of Rochester have shown for the first time that our brains automatically consider many possible words and their meanings before we’ve even heard the final sound of the word.
Previous theories have proposed that listeners can only keep pace with the rapid rate of spoken language—up to 5 syllables per second—by anticipating a small subset of all the words they know, much like Google search anticipates words and phrases as you type. This subset consists of all words that begin with the same sounds, such as “candle,” “candy,” and “cantaloupe,” and narrowing to it makes identifying the specific word more efficient than waiting until all of its sounds have been presented.
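To make the analogy concrete, here is a minimal sketch of that kind of prefix-based narrowing, assuming a toy lexicon; the word list and the helper function are illustrative, not materials from the study.

```python
# Illustrative sketch of the "cohort" idea described above: as each sound
# arrives, the set of candidate words narrows to those sharing the prefix.
# The lexicon and letter-by-letter segmentation are simplified stand-ins.

LEXICON = ["candle", "candy", "cantaloupe", "kitchen", "kick", "table"]

def candidates(sounds_heard: str, lexicon=LEXICON) -> list[str]:
    """Return every word still consistent with the input heard so far."""
    return [word for word in lexicon if word.startswith(sounds_heard)]

# As more of the word is heard, the candidate set shrinks:
for prefix in ["c", "can", "cand", "candl"]:
    print(prefix, "->", candidates(prefix))
# c     -> ['candle', 'candy', 'cantaloupe']
# can   -> ['candle', 'candy', 'cantaloupe']
# cand  -> ['candle', 'candy']
# candl -> ['candle']
```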
But until now, researchers had no way to know whether the brain also considers the meanings of these possible words. The new findings mark the first time scientists, using an fMRI scanner, have been able to actually see this split-second brain activity. The study was a collaboration between former Rochester graduate student Kathleen Pirog Revill, now a postdoctoral researcher at Georgia Tech, and three faculty members in the Department of Brain and Cognitive Sciences at the University of Rochester.
“We had to figure out a way to catch the brain doing something so fast that it happens literally between spoken syllables,” says Michael Tanenhaus, the Beverly Petterson Bishop and Charles W. Bishop Professor. “The best tool we have for brain imaging of this sort is functional MRI, but an fMRI takes a few seconds to capture an image, so people thought it just couldn’t be done.”
But it could be done. It just took inventing a new language to do it.
With William R. Kenan Professor Richard Aslin and Professor Daphne Bavelier, Pirog Revill focused on a tiny part of the brain called “V5,” which is known to be activated when a person sees motion. The idea was to teach undergraduates a set of invented words, some of which meant “movement,” and then to watch and see if the V5 area became activated when the subject heard words that sounded similar to the ones that meant “movement.”
For instance, as a person hears the word “kitchen,” the Rochester team would expect areas of the brain that normally become active when a person thinks of words like “kick” to momentarily show increased blood flow in an fMRI scan. But the team couldn’t use English words, because a word as simple as “kick” has too many nuances of meaning. To one person it might mean kicking someone in anger; to another, being kicked, or kicking a winning goal. So the team had to create a set of words that shared their beginning syllables but had different ending syllables and distinct meanings, one of which involved motion of the sort that would activate the V5 area.
The team created a computer program that showed irregular shapes and gave the shapes specific names, like “goki.” They also created invented verbs. Some, like “biduko,” meant the shape would move across the screen, whereas others, like “biduka,” meant the shape would simply change color.
After a number of students had learned the new words well enough, the team tested them as they lay in an fMRI scanner. The students would see one of the shapes on a monitor and hear “biduko” or “biduka.” Though only one of the words actually meant “motion,” the V5 area of the brain activated for both, although less so for the color word than for the motion word. That partial activation in response to the color word shows that, for a split second, the brain considered the motion meaning of both possible words before it heard the final, discriminating syllable: “ka” rather than “ko.”
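As a rough sketch of that logic (the syllable breakdown and the tiny dictionary below are illustrative stand-ins, not the actual stimulus set), both invented verbs remain consistent with the input until the final syllable, so the motion meaning is briefly in play even when the color word is spoken:

```python
# Invented verbs share their first two syllables and differ only at the end,
# so the motion meaning stays possible until the last syllable arrives.

ARTIFICIAL_LEXICON = {
    ("bi", "du", "ko"): "motion",        # shape moves across the screen
    ("bi", "du", "ka"): "color change",  # shape changes color in place
}

def meanings_still_possible(syllables_heard):
    """Meanings of every invented word consistent with the syllables so far."""
    n = len(syllables_heard)
    return {meaning for word, meaning in ARTIFICIAL_LEXICON.items()
            if word[:n] == tuple(syllables_heard)}

print(meanings_still_possible(["bi", "du"]))        # {'motion', 'color change'}
print(meanings_still_possible(["bi", "du", "ka"]))  # {'color change'}
```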
“Frankly, we’re amazed we could detect something so subtle,” says Aslin. “But it just makes sense that your brain would do it this way. Why wait until the end of the word to try to figure out what its meaning is? Choosing from a little subset is much faster than trying to match a finished word against every word in your vocabulary.”
The Rochester team is already planning more sophisticated versions of the test that focus on other areas of the brain besides V5, such as areas that activate for specific sounds or touch sensations. Bavelier says they’re also planning to watch the brain sort out meaning when it is forced to take syntax into account. For instance, “blind Venetian” and “Venetian blind” contain the same words but mean completely different things. How does the brain narrow down the meaning in such a case? How does it take the conversation’s context into consideration when zeroing in on meaning?
“This opens a doorway into how we derive meaning from language,” says Tanenhaus. “This is a new paradigm that can be used in countless ways to study how the brain responds to very brief events. We’re very excited to see where it will lead us.”