Virtual avatar reveals what the brain likes to see

Opening the eyes immediately provides a visual perception of the world, and it seems effortless. But the process that starts with photons stimulating the retina and ends with ‘seeing’ is far from simple. The brain’s fundamental task in seeing is to reconstruct relevant information about the world from the light that enters the eyes. Because this reconstruction is complex, nerve cells in the brain – neurons – also respond to images in complex ways.

“Experimental approaches to characterize the responses of neurons to images have proven challenging in part because the number of possible images is endless. In the past, seminal insights often resulted from stimuli that neurons in the brain ‘liked.’ Finding them relied on the intuition of the scientists and a good portion of luck,” said senior author Dr. Andreas Tolias, professor and Brown Foundation Endowed Chair of Neuroscience at Baylor College of Medicine.

“We want to understand how vision works. We approached this study by developing an artificial neural network that predicts the neural activity produced when an animal looks at images. If we can build a ‘virtual avatar’ of the visual system, we can perform essentially unlimited experiments on it. Then we can go back and test in real brains with a method we named ‘inception loops,’” said first author Dr. Edgar Y. Walker, former graduate student in the Tolias lab and now a postdoctoral scientist at University of Tübingen and Baylor.
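
The computational trick behind an inception loop can be pictured as gradient ascent on the input image of a trained predictive network: adjust the pixels until a chosen model neuron’s predicted response is maximal, then show the synthesized image to a real brain. The sketch below (in PyTorch) illustrates only the in-silico half; `model` stands in for a trained response predictor (one illustrative way to fit such a model follows the next paragraph), and the image shape, step count, and pixel range are assumptions, not the study’s settings.

```python
import torch

def most_exciting_input(model, neuron_idx, image_shape=(1, 1, 36, 64),
                        steps=200, lr=0.1):
    """Gradient-ascend an image to maximize one model neuron's predicted response."""
    img = torch.randn(image_shape, requires_grad=True)   # start from noise
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        response = model(img)[0, neuron_idx]   # predicted activity of one neuron
        (-response).backward()                 # minimizing the negative = ascending
        opt.step()
        with torch.no_grad():
            img.clamp_(-2.0, 2.0)              # keep pixels in a plausible range
    return img.detach()
```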

To make the avatar learn how neurons respond, the researchers first recorded a large amount of brain activity using a mesoscope, a recently developed large-scale functional imaging microscope. They showed mice about 5,000 natural images and recorded the activity of thousands of neurons as the animals viewed them. They then used these images and the corresponding recordings of brain activity to train a deep artificial neural network to mimic how real neurons responded to visual stimuli.
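
As a concrete illustration of that fitting step, the toy sketch below maps grayscale images to predicted activity for many neurons at once and takes a single gradient step on placeholder data. The architecture, the Poisson-style loss, the image size, and the neuron count are all illustrative assumptions; the network used in the study is substantially more sophisticated.

```python
import torch
import torch.nn as nn

N_NEURONS = 8000  # order of magnitude of "thousands of neurons"; illustrative

class ResponsePredictor(nn.Module):
    """Toy stand-in: shared convolutional features plus a per-neuron readout."""
    def __init__(self, n_neurons=N_NEURONS):
        super().__init__()
        self.core = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=9), nn.ELU(),
            nn.Conv2d(32, 32, kernel_size=7), nn.ELU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.readout = nn.Linear(32 * 8 * 8, n_neurons)

    def forward(self, x):
        h = self.core(x).flatten(1)
        return nn.functional.softplus(self.readout(h))  # firing rates are nonnegative

model = ResponsePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.PoissonNLLLoss(log_input=False)  # count-like noise model

# One training step on random tensors standing in for (image, activity) pairs.
images = torch.randn(16, 1, 36, 64)       # batch of grayscale stimuli
responses = torch.rand(16, N_NEURONS)     # recorded activity (placeholder)
loss = loss_fn(model(images), responses)
loss.backward()
optimizer.step()
```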

“To test whether the network had indeed learned to predict neural responses to visual images like a living mouse brain would do, we showed the network images it had not seen during learning and saw that it predicted the biological neuronal responses with high accuracy,” said co-first author Dr. Fabian Sinz, adjunct assistant professor of neuroscience at Baylor and group leader at the University of Tübingen.
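
One way to picture that test is to correlate, neuron by neuron, the model’s predictions with the recorded responses on images held out of training. The helper below is a hedged sketch under the shapes assumed in the training example above; the study’s actual accuracy measures may differ.

```python
import torch

def per_neuron_correlation(model, test_images, test_responses):
    """Pearson correlation between predicted and measured activity, per neuron."""
    with torch.no_grad():
        pred = model(test_images)              # shape: (n_images, n_neurons)
    pred = pred - pred.mean(dim=0)
    true = test_responses - test_responses.mean(dim=0)
    cov = (pred * true).mean(dim=0)
    corr = cov / (pred.std(dim=0, unbiased=False)
                  * true.std(dim=0, unbiased=False) + 1e-8)
    return corr                                # one correlation per neuron

# e.g.: print(per_neuron_correlation(model, test_images, test_responses).mean())
```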

“Experimenting with these networks revealed some aspects of vision we didn’t expect,” said Tolias, founder and director of the Center for Neuroscience and Artificial Intelligence at Baylor.

“For instance, we found that the optimal stimuli for some neurons in the early stages of processing in the neocortex were checkerboards or sharp corners, as opposed to the simple edges we would have expected according to the current dogma in the field.”

“We think that this framework of fitting highly accurate artificial neural networks, performing computational experiments on them, and verifying the resulting predictions in physiological experiments can be used to investigate how neurons represent information throughout the brain. This will eventually give us a better idea of how the complex neurophysiological processes in the brain allow us to see,” Sinz said.

Find all the details of this study in the journal Nature Neuroscience.

Other contributors to this work include Erick Cobos, Taliah Muhammad, Emmanouil Froudarakis, Paul G. Fahey, Alexander S. Ecker, Jacob Reimer and Xaq Pitkow.

Follow this link to find the complete list of author affiliations and financial support for this project.

The material in this press release comes from the originating research organization. Content may be edited for style and length.