
Perceptual learning relies on local motion signals to learn global motion

Rockville, MD — Researchers have long known that the brain can learn from visual motion input, and a recent study offers new insight into where that learning occurs.

The brain first registers changes in visual input (local motion) in the primary visual cortex. These local motion signals are then integrated at later stages of visual processing and interpreted as global motion in higher-level areas.

But when subjects in a recent experiment were asked to detect global motion in a display of moving dots (the overall direction in which the dots moved together), the results showed that their learning relied more on local motion processes (the movement of dots within small areas) than on global motion processes.
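In displays of this kind, a subset of dots typically moves in a shared direction while the rest move at random, so the global direction can only be recovered by pooling many individual dot motions. As a rough illustration only, the following Python sketch generates such a stimulus; the dot count, speed, and coherence level are illustrative assumptions, not the parameters used in the study:

    import numpy as np

    # Rough sketch of a coherent random-dot motion stimulus
    # (illustrative parameters only; not the study's actual values).
    rng = np.random.default_rng(0)

    n_dots = 100        # assumed number of dots
    coherence = 0.5     # assumed fraction of dots sharing the global direction
    global_dir = 0.0    # global motion direction in radians (rightward)
    speed = 0.02        # assumed displacement per frame (normalized units)

    # Start the dots at random positions in a unit square.
    pos = rng.random((n_dots, 2))

    def step(pos):
        # Each frame, a "signal" subset of dots moves in the shared global
        # direction; the rest move in random directions. Each dot's small
        # displacement is a local motion signal; only pooling across dots
        # reveals the global direction.
        signal = rng.random(n_dots) < coherence
        theta = np.where(signal, global_dir,
                         rng.uniform(0.0, 2.0 * np.pi, n_dots))
        pos = pos + speed * np.column_stack((np.cos(theta), np.sin(theta)))
        return np.mod(pos, 1.0)  # wrap dots that drift off the display

    for _ in range(60):  # simulate 60 frames of motion
        pos = step(pos)

At full coherence every dot carries the global signal; lowering the coherence forces the visual system to integrate many noisy local signals, which is the kind of integration a global motion task requires.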

“We had expected that higher-level processing would be more involved in the task-relevant perceptual learning investigated in this study,” said Dr. Shigeaki Nishina, who conducted the research at Boston University and is now at the Honda Research Institute Japan. “Contrary to that expectation, the results suggest that local motion signals are predominantly used for task-relevant perceptual learning of global motion, which surprised us.”

Nishina said the results, which appear in the latest issue of the Journal of Vision (http://www.journalofvision.org/9/9/15/), show that the improvement in detecting global motion is due not to learning of the global motion itself but to learning of the local motion of the individual dots in the test.

The researchers said the study of perceptual learning can give scientists deeper insight not only into sensory systems but also into the adaptable nature of the brain as a whole.

“This line of study could provide guidelines for optimizing human-machine interfaces,” said Nishina. “When we use a new machine, we need to learn how to get information from it. In our study, local motion signals were more important for the brain in learning a task based on global motion. This suggests that the optimal information for efficient learning can differ from the visual information that is directly related to the task being learned.”

In addition, Nishina said this new understanding of where the brain processes task-relevant perceptual learning can lead to further understanding of how the brain makes decisions based on sensory input.

“We expect that our results will aid understanding of the decision-making process and support the construction of a more concrete model of that process,” he said.

The Journal of Vision is an online-only, peer-reviewed, open-access publication devoted to visual function in humans and animals. It is published by the Association for Research in Vision and Ophthalmology. It explores topics such as spatial vision, perception, low vision, color vision and more, spanning the fields of neuroscience, psychology and psychophysics. JOV is known for hands-on datasets and models that users can manipulate online.

The Association for Research in Vision and Ophthalmology (ARVO) is the largest eye and vision research organization in the world. Members include more than 12,500 eye and vision researchers from over 80 countries. The Association encourages and assists research, training, publication and knowledge-sharing in vision and ophthalmology. For more information, visit www.arvo.org.

The material in this press release comes from the originating research organization. Content may be edited for style and length.