
Human brain works heavy statistics when learning language

A research team has found that the human brain makes much more extensive use of highly complex statistics when learning a language than scientists ever realized. From the University of Rochester:
A team at the University of Rochester has found that the human brain makes much more extensive use of highly complex statistics when learning a language than scientists ever realized. The research, appearing in a recent issue of Cognitive Psychology, shows that the human brain is wired to quickly grasp certain relationships between spoken sounds even though those relationships may be so complicated they’re beyond our ability to consciously comprehend.
“We’re starting to learn just how intuitively our minds are able to analyze amazingly complex information without our even being aware of it,” says Elissa Newport, professor of brain and cognitive sciences at the University and lead author of the study. “There is a powerful correlation between what our brains are able to do and what language demands of us.”
Newport and Richard Aslin, professor of brain and cognitive sciences, began by looking at how people recognize the divisions between spoken words when speech is really a stream of unbroken syllables: we perceive breaks between words even though there are no actual pauses. This is also why speakers of an unfamiliar language often seem to be talking very quickly; without knowing where the words divide, we hear no pauses at all.
So how is a baby supposed to make out where one word begins and another ends? Newport and Aslin devised a test in which babies and adults listened to snippets of a synthetic language: a few syllables arranged into nonsense words and played in random order for 20 minutes. During that time, the listeners were taking in information about the syllables, such as how often each occurred and how often it occurred in relation to other syllables. For instance, in the real words “pretty baby,” the syllable “pre” is followed by “ty” far more often in English than “ty” is followed by “ba,” so the brain notes that “ty” is more strongly associated with “pre” than with “ba,” and we hear a break between those two syllables.
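To make that kind of bookkeeping concrete, here is a minimal Python sketch of the general idea. The nonsense words, the stream length, and the boundary threshold are all invented for illustration; this is not the study’s actual stimulus set or analysis. The sketch estimates the transitional probability between adjacent syllables in a pause-free stream and posits a word boundary wherever that probability dips:

    from collections import Counter
    import random

    # Invented three-syllable nonsense words, concatenated in random order
    # to mimic a pause-free speech stream (not the study's real materials).
    words = [("pa", "bi", "ku"), ("ti", "bu", "do"), ("go", "la", "tu")]
    stream = [syl for w in random.choices(words, k=2000) for syl in w]

    # Count syllables and adjacent pairs, then compute the transitional
    # probability TP(B | A) = count(A followed by B) / count(A).
    pair_counts = Counter(zip(stream, stream[1:]))
    syl_counts = Counter(stream[:-1])
    tp = {(a, b): c / syl_counts[a] for (a, b), c in pair_counts.items()}

    # Within a word the syllables always co-occur, so TP is high; across a
    # word boundary any word can follow any other, so TP is low. Posit a
    # boundary wherever TP falls below an illustrative threshold.
    segmented = [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tp[(a, b)] < 0.5:
            segmented.append(" | ")
        segmented.append(b)

    print("".join(segmented[:60]))  # e.g. pabiku | golatu | tibudo | ...

Listeners, of course, do nothing this explicit; the point is only that a very simple statistic over adjacent syllables is enough to recover the word boundaries.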
After listening to the synthesized string of syllables for the full 20 minutes, adults were played some of the invented words along with non-words made up of syllables taken from the end of one word and the beginning of another, like “ty-ba.” More than 85 percent of the time, adults were able to distinguish words from non-words. Five-year-olds also reacted definitively to words and non-words, showing that the human mind is wired to statistically track how often certain sounds arise in relation to other sounds.
“If you were given paper and a calculator, you’d be hard-pressed to figure out the statistics involved,” says Newport. “Yet after listening for a while, certain syllables just pop out at you and you start imagining pauses between the ‘words.’ It’s a reflection of the fact that somewhere in your brain you’re actually absorbing and processing a staggering amount of information.”
Newport and Aslin take the research a step further in the Cognitive Psychology piece. Language does not consist only of relationships between adjacent syllables or other adjacent elements. For instance, in the sentence “He is going,” the element “is” is linked to the element “ing,” even though the two are not adjacent. Newport and Aslin devised a new, more complex synthetic language in which three-syllable words had constant first and last syllables but an interchangeable middle syllable. Though the task was broadly similar to the original test, “people were terrible at this,” notes Newport. One test subject never identified a single pattern, despite taking the test numerous times.
Though the new test was significantly more complicated than the first, Newport and Aslin were surprised that people performed so poorly. The team looked carefully at the non-adjacent aspects of languages like Hebrew, which is replete with non-adjacent elements, and discovered that while whole syllables were rarely related in this way, vowels and consonants often were. They restructured the test so that the invented words had consistent consonants and variable vowels, like “ring,” “rang,” and “rung.” Immediately, test scores skyrocketed. People were able to detect the regularity of certain consonant relationships and use it to properly divide the stream of sounds into words, even though the statistics involved were at least as complicated as in the earlier test that everyone had failed.
Even switching the roles of consonants and vowels, so that the vowels remained steady as the consonants varied, resulted in the test subjects picking out the words with great accuracy. Turkish, for example, uses this kind of “vowel harmony” quite regularly.
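As a rough illustration of what such a non-adjacent regularity looks like statistically, here is a small Python sketch. The word list and the crude first-letter/last-letter segmentation are invented for this example and are not the synthetic language Newport and Aslin built; the point is only that the first and last consonants of a word can predict each other perfectly even while the vowel between them varies freely:

    from collections import Counter

    # Invented items built from two consonant frames (r...g and p...t) with free vowels.
    words = ["ring", "rang", "rung", "pit", "pat", "pot", "ring", "pat", "rung"]

    frames = Counter()   # (first consonant, last consonant) pairs
    onsets = Counter()   # first consonants alone
    for w in words:
        onset, coda = w[0], w[-1]   # crude segmentation: first and last letter only
        frames[(onset, coda)] += 1
        onsets[onset] += 1

    # P(last consonant | first consonant), skipping whatever vowel sits in between.
    for (onset, coda), count in sorted(frames.items()):
        print(f"P({coda!r} after {onset!r}) = {count / onsets[onset]:.2f}")

In this toy set the frame is perfectly predictive (every r…-word ends in g, every p…-word ends in t), a dependency that no adjacent-pair statistic would reveal, which mirrors the consonant regularities the test subjects picked up so readily.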
“These results suggest that human learning ability is not just limited to a few elementary computations, but encompasses a variety of mechanisms,” says Newport. “A question to explore now is: How complex and extensive are these learning mechanisms, and what kinds of computational abilities do people bring to the process of learning languages?”




The material in this press release comes from the originating research organization. Content may be edited for style and length.