Research has shown that the left side of the brain processes language and the right side processes music. But what about a language like Mandarin Chinese, which is musical in nature, with wide tonal ranges?
UC Irvine researcher Fan-Gang Zeng and Chinese colleagues studied brain scans of subjects as they listened to spoken Mandarin. They found that the brain first processes the music, or pitch, of the words in the right hemisphere before the left hemisphere processes the semantics, or meaning, of the information.
The results show that language processing is more complex than previously thought, and they give clues to why people who use auditory prosthetic devices have difficulty understanding Mandarin. The study appears in this week’s online early edition of the Proceedings of the National Academy of Sciences.
In English, Zeng says, changes in pitch signal the difference between a spoken statement and a question, or convey mood, but they do not change the meaning of the words. Mandarin is different: changes in pitch alter the meaning of words themselves. The syllable “ma,” for example, can mean “mother,” “hemp,” “horse” or “scold,” depending on its tone.
“Most cochlear implant devices lack the ability to register large tonal ranges, which is why these device users have difficulty enjoying music … or understanding a tonal language,” says Zeng, a professor of otolaryngology, biomedical engineering, cognitive sciences, and anatomy and neurobiology.
In his hearing and speech lab at UCI, Zeng has made advances in cochlear implant development, discovering that enhanced detection of frequency modulation (FM) significantly boosts the performance of many hearing devices by improving tonal recognition, which is essential to hearing music and understanding tonal languages like Mandarin.
Lin Chen, Hao Luo, Jing-Tian Ni, Zhi-Ou Li and Da-Ren Zhang of the University of Science and Technology of China, Hefei, are study co-authors. The National Natural Science Foundation of China and the National Basic Research Program of China provided support.
From UC Irvine