Neuronerds in the Mist: On BMI

I left Neuroscience 2008 early to do other things, but the goodness of the conference has not left me.

I promised that I would share what I saw about Brain-Machine Interfaces. The new technology is astounding. In my humble opinion, we are ten to fifteen years away from having an autonomous robotic prosthetic that can both move and feel, an optical prosthetic that can overcome blindness as well as the cochlear implant overcomes deafness, and technology that will allow locked-in patients to interact at an unprecedented level.

The first BMI-related thing I saw was a presentation by G-tec. They demonstrated how a simple scalp EEG cap with eight surface electrodes can be used to interact with specialized computer displays. For example, they had a display with four flashing strobe arrows (up, down, left, right), each flashing at a different frequency (10, 11, 12, 13 Hz). When a person looks at one of the strobes, their EEG starts to oscillate at that same frequency. The computer detects the dominant frequency and interprets it as a command for that direction. In the demonstration, the computer was hooked up to a robot, which changed direction as the demonstrator looked at different arrows. This could eventually translate into an effective wheelchair controller for a severely tetraplegic or ALS patient. They also had a word speller that worked on similar EEG principles.
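For the curious, here is a minimal sketch of what that frequency-detection step might look like, assuming a single EEG channel sampled at 256 Hz and a two-second window. The sampling rate, window length, and all names are my own illustrative assumptions, not G-tec's actual pipeline; a real system would use multiple channels, harmonics, and confidence thresholds.

```python
import numpy as np

FS = 256                      # assumed sampling rate, in Hz
TARGETS = [10, 11, 12, 13]    # flicker frequencies of the four arrows (from the demo)
COMMANDS = ["up", "down", "left", "right"]

def classify_direction(eeg_window: np.ndarray) -> str:
    """Pick the arrow whose flicker frequency dominates the EEG power spectrum."""
    # Power spectrum of the windowed signal
    spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(len(eeg_window)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)
    # Power at the spectral bin nearest each target flicker frequency
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in TARGETS]
    return COMMANDS[int(np.argmax(powers))]

# Example: a noisy two-second window dominated by a 12 Hz oscillation,
# standing in for the EEG of someone staring at the "left" arrow
t = np.arange(2 * FS) / FS
window = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(t.size)
print(classify_direction(window))  # almost certainly "left"
```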

I saw technology dealing with robotic prosthetics for amputees. In some arm amputations nowadays, the surgeon redirects the residual arm nerves into the chest (a procedure known as targeted muscle reinnervation), where their signals can be easily picked up by an electrode grid. The grid can patch into a robotic prosthetic, and the person can relatively easily learn to control the arm using neuronal circuitry already in place. What they're trying now is to feed signals from mechanosensors on the prosthetic back in as sensory information, so that the amputee feels what the robot feels and interprets the sensation as coming from their bygone limb.
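To make the feedback idea concrete, here is a hypothetical sketch of one link in that loop: mapping a fingertip force reading from the prosthetic to a stimulation amplitude on the reinnervated skin. The force range, current range, and linear mapping are all assumptions for illustration, not a published protocol.

```python
SENSOR_MAX_N = 20.0   # assumed full-scale fingertip force, in newtons
STIM_MIN_MA = 0.5     # assumed perception threshold of the stimulator, in mA
STIM_MAX_MA = 3.0     # assumed comfortable ceiling, in mA

def pressure_to_stimulation(force_newtons: float) -> float:
    """Linearly map sensed fingertip force onto the usable stimulation range."""
    frac = min(max(force_newtons / SENSOR_MAX_N, 0.0), 1.0)
    if frac == 0.0:
        return 0.0  # no contact, no stimulation
    return STIM_MIN_MA + frac * (STIM_MAX_MA - STIM_MIN_MA)

print(pressure_to_stimulation(5.0))   # light grip -> ~1.1 mA
```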

I saw a presentation by a guy who could possibly be the world’s most arrogant engineer. Fortunately, his technique looked very cool. He’s approaching optical prosthetic technology from a deep-stim perspective. Visual information is routed down the optic nerve into the lateral geniculate nucleus (LGN) of the thalamus, where it’s redistributed out to the occipital lobe for processing. While others are trying to place electrodes into the retina or the optic nerve, he wants to place an electrode into the thalamus: something like a paintbrush, with hundreds of tiny microelectrodes fraying out from the end of a pole. A camera mounted on glasses would break down an optical image and send it to the electrode, which would stimulate the LGN. The person’s brain would eventually learn to interpret the signals as vision.
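Here is a hedged sketch of what the image-to-electrode step could look like: downsample a camera frame to one intensity value per microelectrode and scale it into a stimulation current. The electrode count, current range, and block-averaging scheme are my own illustrative assumptions, not the presenter's actual design.

```python
import numpy as np

N_ELECTRODES = (16, 16)   # assumed layout: 256 microelectrodes on a grid
MAX_CURRENT_UA = 50.0     # assumed per-electrode current ceiling, in microamps

def frame_to_currents(frame: np.ndarray) -> np.ndarray:
    """Block-average a grayscale frame down to one value per electrode."""
    rows, cols = N_ELECTRODES
    h, w = frame.shape
    h2, w2 = h - h % rows, w - w % cols   # crop so blocks divide evenly
    # Average each block of pixels that maps onto one electrode
    blocks = frame[:h2, :w2].reshape(
        rows, h2 // rows, cols, w2 // cols
    ).mean(axis=(1, 3))
    # Scale 0..255 pixel intensity to 0..MAX_CURRENT_UA stimulation current
    return blocks / 255.0 * MAX_CURRENT_UA

# Example: a random 480x640 "camera frame" becomes a 16x16 grid of currents
frame = np.random.randint(0, 256, size=(480, 640)).astype(float)
currents = frame_to_currents(frame)   # shape (16, 16), values in microamps
```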

The big concern in the field right now seems to be the reaction of brain tissue to implanted electrodes. Electrodes can induce gliosis, essentially causing scar tissue to form around the electrode tip and reducing its effectiveness. The people I talked to disagreed about how long an electrode can stay implanted in the brain and still maintain a high degree of functionality; right now, the estimates run somewhere between 18 months and 3 years. Once implants routinely last more than five years, I think most of those debates will quiet down.

The world we live in is amazing. In labs in Arizona, Michigan, Pennsylvania, Rhode Island and California, among many others, this technology is being perfected. And how will we know that it’s perfect? When deafness, blindness, paralysis, epilepsy, MS, ALS, Parkinson’s, Huntington’s, and every other circuitry-based disorder is overcome. For now, it looks like researchers in those fields still have excellent job security. But who knows, down the road?

