A Florida scientist has grown a living “brain” that can fly a simulated plane, giving scientists a novel way to observe how brain cells function as a network.
From University of Florida:
UF scientist: ‘Brain’ in a dish acts as autopilot, living computer
A University of Florida scientist has grown a living “brain” that can fly a simulated plane, giving scientists a novel way to observe how brain cells function as a network.
The “brain” — a collection of 25,000 living neurons, or nerve cells, taken from a rat’s brain and cultured inside a glass dish — gives scientists a unique real-time window into the brain at the cellular level. By watching the brain cells interact, scientists hope to understand what causes neural disorders such as epilepsy and to determine noninvasive ways to intervene. As living computers, they may someday be used to fly small unmanned airplanes or handle tasks that are dangerous for humans, such as search-and-rescue missions or bomb damage assessments.
“We’re interested in studying how brains compute,” said Thomas DeMarse, the UF professor of biomedical engineering who designed the study. “If you think about your brain, and learning and the memory process, I can ask you questions about when you were 5 years old and you can retrieve information. That’s a tremendous capacity for memory. In fact, you perform fairly simple tasks that you would think a computer would easily be able to accomplish, but in fact it can’t.”
While computers are very fast at processing some kinds of information, they can’t approach the flexibility of the human brain, DeMarse said. In particular, brains can easily make certain kinds of computations — such as recognizing an unfamiliar piece of furniture as a table or a lamp — that are very difficult to program into today’s computers.
“If we can extract the rules of how these neural networks are doing computations like pattern recognition, we can apply that to create novel computing systems,” he said.
DeMarse’s experimental “brain” interacts with an F-22 fighter jet flight simulator through a specially designed plate called a multi-electrode array and a common desktop computer.
“It’s essentially a dish with 60 electrodes arranged in a grid at the bottom,” DeMarse said. “Over that we put the living cortical neurons from rats, which rapidly begin to reconnect themselves, forming a living neural network — a brain.”
The brain and the simulator establish a two-way connection, similar to how neurons receive and interpret signals from each other to control our bodies. By observing how the nerve cells interact with the simulator, scientists can decode how a neural network establishes connections and begins to compute, DeMarse said.
When DeMarse first puts the neurons in the dish, they look like little more than grains of sand sprinkled in water. However, individual neurons soon begin to extend microscopic lines toward each other, making connections that represent neural processes. “You see one extend a process, pull it back, extend it out — and it may do that a couple of times, just sampling who’s next to it, until over time the connectivity starts to establish itself,” he said. “(The brain is) getting its network to the point where it’s a live computation device.”
To control the simulated aircraft, the neurons first receive information from the computer about flight conditions: whether the plane is flying straight and level or is tilted to the left or to the right. The neurons then analyze the data and respond by sending signals to the plane’s controls. Those signals alter the flight path and new information is sent to the neurons, creating a feedback system.
“Initially when we hook up this brain to a flight simulator, it doesn’t know how to control the aircraft,” DeMarse said. “So you hook it up and the aircraft simply drifts randomly. And as the data comes in, it slowly modifies the (neural) network so over time, the network gradually learns to fly the aircraft.”
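In outline, the setup is a closed loop: read the simulator’s attitude, stimulate the culture, record its response, and feed that response back to the control surfaces, which changes what the network sees on the next pass. The Python sketch below illustrates only that shape; every name in it (FlightSim, MEAInterface, the channel numbers, the spike counts) is a hypothetical stand-in rather than the lab’s actual software, and the proportional rule is simplified.

```python
import time


class FlightSim:
    """Stand-in for the F-22 flight simulator."""

    def read_attitude_error(self):
        # Degrees of deviation from straight-and-level flight, per axis.
        return {"pitch": 12.0, "roll": -7.5}

    def apply_controls(self, pitch_cmd, roll_cmd):
        # Move the elevator/ailerons in proportion to the commands (omitted).
        pass


class MEAInterface:
    """Stand-in for the 60-electrode dish (multi-electrode array)."""

    def stimulate(self, channel):
        # Deliver a stimulation pulse on one electrode channel (omitted).
        pass

    def record_response(self, window_ms=100):
        # Spike counts recorded after the stimuli, one value per control channel.
        return {"pitch": 23, "roll": 31}


sim, mea = FlightSim(), MEAInterface()
PITCH_CH, ROLL_CH = 5, 42  # hypothetical electrode assignments

for _ in range(10):                         # a few passes around the loop
    error = sim.read_attitude_error()       # 1. flight conditions come in
    mea.stimulate(PITCH_CH)                 # 2. each control channel is stimulated
    mea.stimulate(ROLL_CH)
    response = mea.record_response()        # 3. the network's reply is recorded
    # 4. the reply, scaled by the current error, drives the control surfaces,
    #    which changes the error that comes back on the next pass
    sim.apply_controls(error["pitch"] * response["pitch"] / 100.0,
                       error["roll"] * response["roll"] / 100.0)
    time.sleep(0.1)
```

How the recorded response is actually turned into proportional control movements, and how the stimulation rate is used to train the network, is described in DeMarse’s own comment further down.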
Although the brain currently is able to control the pitch and roll of the simulated aircraft in weather conditions ranging from blue skies to stormy, hurricane-force winds, the underlying goal is a more fundamental understanding of how neurons interact as a network, DeMarse said.
“There’s a lot of data out there that will tell you that the computation that’s going on here isn’t based on just one neuron. The computational property is actually an emergent property of hundreds or thousands of neurons cooperating to produce the amazing processing power of the brain.”
With Jose Principe, a UF distinguished professor of electrical engineering and director of UF’s Computational NeuroEngineering Laboratory, DeMarse has a $500,000 National Science Foundation grant to create a mathematical model that reproduces how the neurons compute.
These living neural networks are being used to pursue a variety of engineering and neurobiology research goals, said Steven Potter, an assistant professor in the Georgia Tech/Emory Department of Biomedical Engineering who uses cultured brain cells to study learning and memory. DeMarse was a postdoctoral researcher in Potter’s laboratory at Georgia Tech before he arrived at UF.
“A lot of people have been interested in what changes in the brains of animals and people when they are learning things,” Potter said. “We’re interested in getting down into the network and cellular mechanisms, which is hard to do in living animals. And the engineering goal would be to get ideas from this system about how brains compute and process information.”
Though the “brain” can successfully control a flight simulation program, more elaborate applications are a long way off, DeMarse said. “We’re just starting out. But using this model will help us understand the crucial bit of information between inputs and the stuff that comes out,” he said. “And you can imagine the more you learn about that, the more you can harness the computation of these neurons into a wide range of applications.”
So you did find something the little buggers didn’t like. They don’t like the one-per-second “stimulations” (electrical?) that you mentioned, but they did like (or respond positively to) the slower stimulations.
Actually, that does appear to fall under the definition of classical conditioning. For instance, when a pigeon pecks at a bar and gets a bit of food, he isn’t aware of any consequences related to his pecking the bar, other than that he gets food. He could be launching nuclear missiles, for all he knows. Similarly, the clusters of cells can’t know the consequences of their actions–their perspective is even more limited than the bird’s.
The main difference I see here is that you’ve found a way to get the cells to modulate their response in reaction to the feedback signal. Depressing or enhancing their response with a stimulation regime seems quite comparable to reward/punishment, since the outcome is the same–behavior modification. And a modulated response is much more useful than a simple peck at a bar.
What fascinating work – I am not sure what “depress the response of the network” means – does depression or enhancement lead to reconfiguration of the network, and if so, how?
I would be so bold as to guess that there are millions of technophiles out in cyberspace who would love a fuller accounting of what you have done.
The neurons/network’s behavior is not goal directed in the classic sense. Instead, we are modifying the connectivity within the network and using those “weights” to control the aircraft. Here is a brief description of the system:
The neural flight control that is being reported is very rudimentary. The in vitro network of rat cortical neurons simply controls the pitch and roll of the aircraft to produce straight and level flight, the neural equivalent of an autopilot. This is accomplished using an effect reported by Eytan, Brenner, and Marom (Selective Adaptation in Networks of Cortical Neurons, Journal of Neuroscience, 2003, 23(28): 9349-9356), in which “high” frequency stimulations (once every second) were reported to depress the response of the network while “low” frequency stimulations resulted in an enhanced response. For our system we tied the network’s response to the control surfaces, dedicating stimulations on one channel to pitch and a second to roll control. Each channel is stimulated separately, and the response (PSTH) is recorded. Control movements are proportional to the current error from straight and level: the error (0 to 180 degrees) is mapped onto the 0 to 100 ms interval of the PSTH, and the difference between the response before training and the current (enhanced or depressed) response is integrated over that interval. The more error, the more the control surface is moved. The networks only gradually control the aircraft, since the Marom effect requires over 15 minutes to develop. The two frequencies are then used to adjust these weights (i.e., the number of spikes in the PSTH) to produce optimal flight. The neurons/network don’t seek optimal flight in the classic sense; instead, we adjust the weights (using high- and low-frequency stimulation) in the network to produce that result.
It is a very simple system, and our only interest in it is in those changes within the network and in the possibility of extending it to more of the network than just two or three different channels.
Hope that helps..
Tom DeMarse
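For the technophiles asking for a fuller accounting, the sketch below restates the mapping DeMarse describes in code. It is only an illustration under assumptions added here: the 1 ms bins, the Poisson spike counts, and the function names are all invented, and the real system’s details surely differ.

```python
import numpy as np

BINS_MS = 100  # PSTH window: 0-100 ms after the stimulus, in 1 ms bins (assumed)


def control_deflection(error_deg, baseline_psth, current_psth):
    """Proportional control as described above: map the attitude error
    (0 to 180 degrees) onto the 0-100 ms PSTH interval and integrate the
    difference between the pre-training response and the current
    (enhanced or depressed) response over that interval."""
    upper_ms = int(round(abs(error_deg) / 180.0 * BINS_MS))  # more error, wider window
    diff = current_psth[:upper_ms] - baseline_psth[:upper_ms]
    return float(diff.sum())  # larger integrated difference, larger control movement


def choose_training_rate(response_strength, target_strength):
    """Per the Eytan/Marom effect DeMarse cites: roughly one stimulus per
    second depresses the network's response, while slower stimulation
    enhances it. Pick whichever rate nudges the "weight" (spikes in the
    PSTH) toward the value that keeps the plane straight and level."""
    return "high (1/s, depress)" if response_strength > target_strength else "low (enhance)"


# Invented numbers, just to exercise the two functions.
rng = np.random.default_rng(0)
baseline = rng.poisson(0.3, BINS_MS)   # response before training
current = rng.poisson(0.5, BINS_MS)    # response after some enhancement
print(control_deflection(error_deg=45.0, baseline_psth=baseline, current_psth=current))
weight = current.sum()                 # the "weight": total spikes in the PSTH
print(choose_training_rate(weight, target_strength=40))  # 40 is an arbitrary target
```

As DeMarse notes, such a loop only converges gradually, because the depression/enhancement effect takes more than 15 minutes to develop.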
I 2nd that: more information needed! The concept of good and bad from a homeostasis perspective is quite simple. One wonders if after enough networking the neurons would develop more sophisticated models of what stasis “is”.
Your article skipped the part about how they made their little blob of neurons “care” about flying the simulator correctly. I have to assume that the experimenters somehow correlated a homeostasis factor with performance. Otherwise, are we to believe that the blob just naturally knew that it was supposed to fly the airplane straight and level? That would seem an extraordinary demonstration of ESP!
So, what did they do, shock the little booger when it failed? Give it a little extra glucose for good behavior? How does a little blob of neurons (lacking nerve cells to detect pain or pleasure) even know when it’s being rewarded?