For more than a decade, researchers have been working to create artificial digital retinas that can be implanted in the eye to allow the blind to see again. Many challenges stand in the way, but researchers at Stanford University may have found the key to solving one of the most vexing: heat. The artificial retina requires a very small computer chip with many metal electrodes poking out. The electrodes first record the activity of the neurons around them to create a map of cell types. This information is then used to transmit visual data from a camera to the brain. Unfortunately, the neurons produce so much data during recording that the electronics digitizing it get too darn hot.
“The chips required to build a high-quality artificial retina would essentially fry the human tissue they are trying to interface with,” says E.J. Chichilnisky, a professor in the Neurosurgery and Ophthalmology departments, who is on Stanford’s artificial retina team.
Members of the team, including Chichilnisky and his collaborators in Stanford’s Electrical Engineering and Computer Science departments, recently announced they have devised a way to solve that problem by significantly compressing the massive amounts of visual data that all those neurons in the eye create. They discuss their advance in a study published in the IEEE Transactions on Biomedical Circuits and Systems.
To convey visual information, neurons in the retina send electrical impulses, known as spikes, to the brain. The problem is that the digital retina needs to record and decode those spikes to understand the properties of the neurons, and digitizing them generates a lot of heat, even with the few hundred electrodes used in today's prototypes. The first true digital retina will need tens of thousands of such electrodes, compounding the problem.
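To get a sense of the scale involved, consider a rough back-of-the-envelope sketch. The sampling rate and resolution below are illustrative assumptions for dense neural recording, not figures from the study:

```python
# Back-of-the-envelope estimate of the raw data rate from a dense
# electrode array. The per-electrode sampling rate and ADC resolution
# are illustrative assumptions, not numbers from the Stanford study.

SAMPLE_RATE_HZ = 20_000   # assumed per-electrode sampling rate (20 kHz)
BITS_PER_SAMPLE = 10      # assumed ADC resolution

def raw_data_rate_mbps(num_electrodes: int) -> float:
    """Raw (uncompressed) data rate in megabits per second."""
    return num_electrodes * SAMPLE_RATE_HZ * BITS_PER_SAMPLE / 1e6

# A few hundred electrodes (today's prototypes) vs. tens of thousands
for n in (500, 10_000, 50_000):
    print(f"{n:>6} electrodes -> {raw_data_rate_mbps(n):8.1f} Mbit/s")
```

Under these assumptions, a 500-electrode prototype already produces on the order of 100 megabits per second, and a future array with tens of thousands of electrodes would produce gigabits per second, all of which must be digitized and moved off the chip.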
Boris Murmann, a professor of electrical engineering on the retina project, says the team found a way to extract the same level of visual understanding from less data. By working out which signal samples matter and which can be ignored, the team reduced the amount of data that has to be processed. It's a bit like trying to follow a single coherent conversation amid the din of a crowded party: a few voices matter a lot, but most are noise and can be ignored.
“We compress the data by being more selective, ignoring the noise and baseline samples and digitizing only the unique spikes,” Murmann says.
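A minimal sketch of the idea Murmann describes, keeping only samples that stand out from the noise floor, might look like this. The threshold rule (five times a robust noise estimate) is a common spike-detection heuristic and an illustrative stand-in, not the team's actual on-chip criterion:

```python
import numpy as np

# Sketch of threshold-based sample selection: digitize only samples
# that rise above the noise floor. The 5-sigma threshold is an
# illustrative choice, not the team's actual circuit behavior.

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 10_000)        # baseline noise
signal[[1_200, 4_800, 7_700]] += 12.0        # three synthetic "spikes"

# Robust noise estimate (median absolute deviation scaled to sigma)
noise_sigma = np.median(np.abs(signal)) / 0.6745
threshold = 5.0 * noise_sigma

keep = np.abs(signal) > threshold            # samples worth digitizing
print(f"kept {keep.sum()} of {signal.size} samples "
      f"({signal.size / max(keep.sum(), 1):.0f}x reduction)")
```

Because spikes are rare events riding on a quiet baseline, nearly all samples fall below the threshold and never need to be digitized at full precision.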
Previously, digitization and compression were done separately, requiring a lot of extra data storage and transfer. “Our innovation inserts compression techniques into the digitization process,” says team member Subhasish Mitra, a professor of electrical engineering and of computer science. This approach retains the most useful information and is easier to implement in hardware.
Dante Muratore, a postdoctoral researcher on the team, says the process is surprisingly straightforward in concept. Each spike has its own wave-like shape that helps researchers determine what sort of cell produced it, a key piece of knowledge in the retina, where different cells have different functions. Whenever two or more electrodes in the artificial retina record identical signal samples, the event is treated as a “collision” that effectively wipes out the data, so collisions can be safely ignored. Whenever a unique signal sample is recorded by a single electrode, on the other hand, it is considered high-value and is stored for further processing. In testing their approach, the researchers say their efficient data-gathering method misses just 5% of cells while reducing the acquired data by a factor of 40.
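The keep-or-discard rule Muratore describes can be sketched in a few lines. The data layout below (a mapping from electrode to digitized sample value at one time step) is a hypothetical representation for illustration, not the chip's actual encoding:

```python
from collections import Counter

def select_unique_samples(frame: dict[int, int]) -> dict[int, int]:
    """frame maps electrode id -> digitized sample value for one time step.

    A value recorded identically by two or more electrodes is a
    "collision" and is dropped; a value unique to a single electrode
    is kept for further processing.
    """
    counts = Counter(frame.values())
    return {ch: v for ch, v in frame.items() if counts[v] == 1}

frame = {0: 37, 1: 37, 2: 52, 3: 11}   # electrodes 0 and 1 collide on 37
print(select_unique_samples(frame))    # {2: 52, 3: 11} -- collision dropped
```

Discarding collisions outright means the chip never has to store or transmit the garbled overlapping data, which is where the bulk of the savings comes from.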
The researchers believe this is a first step toward efficient, cool-running implantable chips that would work not just in the eye but in other so-called “neuroprosthetic” brain-machine interfaces that turn nerve impulses into computer signals. Such applications might include brain-controlled machines that restore motion to the paralyzed and hearing to the deaf, open new approaches to aiding memory, alleviate mental illness, or even improve self-driving vehicles.
“This is an important step that might someday allow us to build a digital retina with over 10,000 channels,” Muratore says.