The Emergence of Neurotechnology

Scientists estimate that the human brain is made up of somewhere between 100 billion and a trillion neurons. Somehow this three-pound heap of interconnected cells, which resembles a firm pudding, is responsible for all of our actions, thoughts, feelings, desires, and our most wonderful subjective experiences.

This organ is by far the most complex system the scientific community has ever encountered. Scientists were first made aware of its gross organization in 1664 with Thomas Willis's Cerebri Anatome. Then, in the 1940s, Wilder Penfield performed a series of groundbreaking experiments on the brains of conscious patients undergoing neurosurgery for epilepsy. Penfield electrically stimulated various regions of the brain, noted the patients' responses, and from these observations constructed a series of maps laying out the function of different areas of the brain. One of his most striking findings was the existence of several replicas of the human body mapped onto the surface of the brain. Each of these maps, called a homunculus, is the brain's representation of a particular aspect of the body.

Almost 60 years later, a group of neuroscientists used maps similar to those drawn by Penfield to record motor commands from an area on the surface of a monkey's brain called the motor cortex. Using mathematical algorithms to translate activity recorded from the hand region of the motor cortex into the position of a robotic arm, these scientists showed that brain activity can be used to mimic a monkey's ongoing movements in real time (2-5). What was previously science fiction began its transformation into hard science. The device these monkeys were controlling is called a brain-machine interface (BMI), or neural prosthetic.

There are two basic classes of BMIs: input and output. An output BMI detects neural activity, recording it directly from the brain, and sends the information to a computer. The computer uses mathematical models to decode the neural activity into a useful command; in the case of these monkeys, the command was the intended movement of their arms. Once the command has been decoded, it is used to drive robotics or another computer. Input BMIs work in the opposite direction: information collected from the environment is processed by a computer and used to stimulate the nervous system, inputting or mimicking neural representations of that information. The current goal in developing neural prosthetics is to compensate for a lost ability of an individual's nervous system, whether that is the ability to move one's limbs, to see, to hear, or even to remember. (2,8)
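The decoding step at the heart of an output BMI can be sketched in a few lines. The toy model below is purely illustrative (all data are simulated, and real systems use far more sophisticated filters): it assumes each neuron's firing rate is a noisy linear function of arm position, then fits a least-squares decoder that recovers position from the recorded rates alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment: 40 neurons observed over 500 time bins.
# Each neuron's firing rate is a noisy linear function of the
# arm's (x, y) position -- a toy "tuning curve".
n_neurons, n_bins = 40, 500
true_tuning = rng.normal(size=(2, n_neurons))     # position -> rates
positions = rng.uniform(-1, 1, size=(n_bins, 2))  # actual arm positions
rates = positions @ true_tuning + 0.1 * rng.normal(size=(n_bins, n_neurons))

# "Decode": fit a linear map from firing rates back to position
# with ordinary least squares.
decoder, *_ = np.linalg.lstsq(rates, positions, rcond=None)

# Reconstruct the arm trajectory from neural activity alone.
decoded = rates @ decoder
error = np.abs(decoded - positions).mean()
```

Even this crude linear fit tracks the simulated arm closely; the hard part in practice is that real neurons are noisy, nonstationary, and far from linear.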

So how did we get to this point?
Neuroscience is a relatively young science. The idea that the brain is made up of individual cells, a theory called the neuron doctrine, was not accepted until 1891. And it was not until 1929 that Berger developed electroencephalography (EEG), a means of observing brain activity in a living animal. (1) EEG works by placing a series of electrodes on the scalp or, in some cases, underneath the skull. One of EEG's primary advantages is that it can be set up non-invasively; however, it has very poor spatial resolution, which means that rather than reporting what individual cells are doing, EEG collects information about large populations of neurons. It is analogous to holding a microphone over a football stadium: you can tell when someone scores a touchdown, but not when an individual audience member stands up. EEG is also limited to recording activity from the outermost layers of the brain; it cannot detect what is going on in the brain's inner portions.

EEG has been used as the sensing component for output BMIs. This type of output BMI is called an indirect BMI because the neural information it records does not correspond directly to natural intention. Patients using indirect BMIs have to train extensively to learn to control their EEG rhythms. Because this task requires considerable focus, it can interfere with performing other tasks and can be easily foiled by simple distractions. Another considerable disadvantage of EEG-based BMIs is that the electrode setups are cumbersome and can take a long time to set up properly, an obstacle that must be overcome before such a BMI can be marketed for daily use.
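To make "controlling an EEG rhythm" concrete, the sketch below (on purely simulated data, with illustrative frequencies and units) measures the power of a 10 Hz mu-like rhythm, the kind of spectral feature an indirect BMI can threshold on to decide whether the user is, say, imagining a movement.

```python
import numpy as np

fs = 250                      # sampling rate in Hz, typical for EEG
t = np.arange(0, 2, 1 / fs)   # two seconds of signal

# Toy EEG trace: a 10 Hz "mu-like" rhythm buried in noise.
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

# Periodogram: signal power at each frequency bin.
power = np.abs(np.fft.rfft(eeg)) ** 2 / eeg.size
freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)

# Average power inside vs. outside the 8-12 Hz mu band.
band = (freqs >= 8) & (freqs <= 12)
mu_power = power[band].mean()
background = power[~band].mean()
```

A user who learns to suppress or enhance `mu_power` relative to `background` is, in effect, typing one noisy bit at a time, which is why indirect BMIs demand so much training and concentration.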

In order to advance beyond indirect BMIs, a device must be able to detect activity at the level of a single cell. It wasn't until 1938 that Hodgkin and Huxley discovered the action potential and, with it, the ability to record the activity of a single neuron. (1) The action potential is one of the primary means by which neurons communicate and is essential to decoding neural activity. Unfortunately, the activity of one cell doesn't tell you much about the intentions of the entire system. Returning to the football stadium analogy, single-cell recording would be like observing one individual in the stadium and trying to infer what everyone else, including the players, is doing. To deal with this limitation and find a middle ground between EEG and single-cell recordings, multicellular and multielectrode recording techniques and technologies were developed. One such technology was developed by Cyberkinetics, a neurotechnology startup co-founded by Brown University professor John Donoghue: an array of extracellular recording electrodes, each thinner than a human hair, with an overall footprint the size of a baby aspirin. The array is made of silicon, like a standard computer chip, and is coated in a plastic that makes it highly biocompatible (6). The Cyberkinetics array can record from over a hundred neurons at once, and multiple arrays can be implanted in a single brain. This type of technology, which has also been developed by other groups, is what enabled the monkeys to control a robotic arm and what has enabled the first direct output BMI in humans, a system called BrainGate, which will be discussed later in this article.
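Even with electrodes sitting next to individual neurons, the raw voltage trace still has to be turned into spikes. A common first step, sketched below on simulated data (the waveform, units, and threshold factor are all illustrative), is to estimate the background noise level robustly and flag downward threshold crossings as candidate action potentials.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 30_000                    # 30 kHz, a typical extracellular sampling rate
n = fs // 10                   # 100 ms of signal

# Toy extracellular trace: background noise (~5 uV std) plus three
# brief negative-going "spikes" at known times.
trace = 5 * rng.normal(size=n)
spike_times = [500, 1200, 2500]
for s in spike_times:
    trace[s:s + 10] -= 60      # 60 uV downward deflection

# Estimate the noise standard deviation robustly from the median,
# so the spikes themselves don't inflate the estimate.
noise_std = np.median(np.abs(trace)) / 0.6745
threshold = -4.5 * noise_std

# Flag each downward crossing of the threshold as one candidate spike.
crossings = np.flatnonzero((trace[1:] < threshold) & (trace[:-1] >= threshold))
```

Multiplying this logic across a hundred channels, in real time, inside power and bandwidth limits, is a large part of what makes implanted arrays an engineering challenge and not just a recording one.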

What is coming next?
The work that groups like Cyberkinetics are doing will first materialize as neural controllers for computers, which have been tested but are far from perfected. This will be the first step toward establishing more independence for quadriplegics and other immobile individuals. In addition to refining the signal processing, the information also needs to be sent wirelessly from the brain, eliminating the protruding connector; this will reduce the risk of infection and increase the longevity of the implant. The next step will be finding a way for quadriplegics to interact with their environment directly, rather than through a computer program. This innovation will likely take one of two forms. The first is neural control of robotic limbs: real-time communication between an individual's brain and a robotic limb. Such a limb will likely be mounted outside the body, either strapped to the individual or attached to a wheelchair; the eventual goal is to integrate the robotics into the person's body as a fully functional prosthetic. The other approach is to use neurotechnology to connect the brain to muscle stimulators in patients with an intact periphery but a damaged spinal cord, bypassing the spine entirely. This would enable patients to use their own muscles; the prosthetic would only be the connection, and so could be completely hidden.

What is standing in our way?
The first and most substantial obstacle scientists have to overcome is deciphering and translating the language that neurons use to communicate. This will be necessary before devices like BrainGate can work as smoothly and effortlessly as we would like. The next large hurdle is integrating the prosthetic into the human body. Traditional prosthetics are secured to the body from the outside, which is very uncomfortable and can damage tissue, or are attached to bolts mounted in the bone that protrude through the skin. Unfortunately, such setups lead to chronic infection at the transcutaneous site. Dr. Clyde Briant of Brown University's engineering department, among others, is working on osseointegration, the process of connecting living tissue to titanium. It is a difficult process, since skin treats implants as foreign objects and tries to isolate them, which prevents the wound from healing. (10)

The next large obstacle involves the resolution of imaging techniques. If we want to create an artificial hippocampus or figure out how the brain encodes thought, we are going to have to find out how the brain is wired. We need to know every single connection that each of the brain's 100 billion to a trillion neurons makes, and we have to be able to visualize all of those connections and their activity in an awake human. Functional MRI, the closest tool we currently have, has resolution on the scale of millimeters; we need resolution on the scale of microns. Future challenges will involve precision recording and stimulation devices, advanced robotics, energy storage and recharging, and miniaturizing every component to fit within the body. Overcoming all of these obstacles will require the cooperation of scientists from many different fields.

