First artificial neural network created from DNA

Artificial intelligence has been the inspiration for countless books and movies, as well as the aspiration of countless scientists and engineers. Researchers at the California Institute of Technology (Caltech) have now taken a major step toward creating artificial intelligence—not in a robot or a silicon chip, but in a test tube. The researchers are the first to have made an artificial neural network out of DNA, creating a circuit of interacting molecules that can recall memories based on incomplete patterns, just as a brain can.

“The brain is incredible,” says Lulu Qian, a Caltech senior postdoctoral scholar in bioengineering and lead author on the paper describing this work, published in the July 21 issue of the journal Nature. “It allows us to recognize patterns of events, form memories, make decisions, and take actions. So we asked, instead of having a physically connected network of neural cells, can a soup of interacting molecules exhibit brainlike behavior?”

The answer, as the researchers show, is yes.

Consisting of four artificial neurons made from 112 distinct DNA strands, the researchers’ neural network plays a mind-reading game in which it tries to identify a mystery scientist. The researchers “trained” the neural network to “know” four scientists, whose identities are each represented by a specific, unique set of answers to four yes-or-no questions, such as whether the scientist was British.

After thinking of a scientist, a human player provides an incomplete subset of answers that partially identifies the scientist. The player then conveys those clues to the network by dropping DNA strands that correspond to those answers into the test tube. Communicating via fluorescent signals, the network then identifies which scientist the player has in mind. Or, the network can “say” that it has insufficient information to pick just one of the scientists in its memory, or that the clues contradict what it has remembered. The researchers played this game with the network using 27 different ways of answering the questions (out of 81 possible combinations, since each of the four questions can be answered yes, no, or left unanswered), and the network responded correctly each time.
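In software terms, the game is a small exercise in pattern completion. The Python sketch below uses hypothetical scientists and answer patterns (the real encodings live in the DNA strands themselves) to show the three outcomes the network can report: a unique identification, insufficient information, or a contradiction.

```python
# A minimal sketch of the game's logic, with invented scientists and patterns.
# Each memory is four yes/no answers; a clue may leave some questions unanswered.

MEMORIES = {
    "Scientist A": (1, 1, 0, 0),
    "Scientist B": (1, 0, 1, 0),
    "Scientist C": (0, 1, 0, 1),
    "Scientist D": (0, 0, 1, 1),
}

def recall(clues):
    """clues: tuple of 1 (yes), 0 (no), or None (unanswered) for each question."""
    matches = [
        name for name, pattern in MEMORIES.items()
        if all(c is None or c == p for c, p in zip(clues, pattern))
    ]
    if len(matches) == 1:
        return matches[0]                      # unique identification
    if len(matches) > 1:
        return "insufficient information"      # clues fit more than one memory
    return "contradicts all memories"          # clues fit none of the memories

print(recall((1, None, 1, None)))        # -> "Scientist B" in this toy encoding
print(recall((None, None, None, None)))  # -> "insufficient information"
```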

This DNA-based neural network demonstrates the ability to take an incomplete pattern and figure out what it might represent—one of the brain’s unique features. “What we are good at is recognizing things,” says coauthor Jehoshua “Shuki” Bruck, the Gordon and Betty Moore Professor of Computation and Neural Systems and Electrical Engineering. “We can recognize things based on looking only at a subset of features.” The DNA neural network does just that, albeit in a rudimentary way.

Biochemical systems with artificial intelligence—or at least some basic, decision-making capabilities—could have powerful applications in medicine, chemistry, and biological research, the researchers say. In the future, such systems could operate within cells, helping to answer fundamental biological questions or diagnose a disease. Biochemical processes that can intelligently respond to the presence of other molecules could allow engineers to produce increasingly complex chemicals or build new kinds of structures, molecule by molecule.

“Although brainlike behaviors within artificial biochemical systems have been hypothesized for decades,” Qian says, “they appeared to be very difficult to realize.”

The researchers based their biochemical neural network on a simple model of a neuron, called a linear threshold function. The model neuron receives input signals, multiplies each by a positive or negative weight, and fires, producing an output, only if the weighted sum of the inputs surpasses a certain threshold. This model is an oversimplification of real neurons, says paper coauthor Erik Winfree, professor of computer science, computation and neural systems, and bioengineering. Nevertheless, it’s a good one: “It has been an extremely productive model for exploring how the collective behavior of many simple computational elements can lead to brainlike behaviors, such as associative recall and pattern completion.”
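The linear threshold rule itself fits in a few lines. The sketch below uses illustrative inputs, weights, and a threshold; it shows the rule the article describes, not the values or the molecular mechanism of the DNA implementation.

```python
# A minimal sketch of a linear threshold neuron: inputs are weighted, summed,
# and compared against a threshold. All numbers here are illustrative.

def linear_threshold_neuron(inputs, weights, threshold):
    """Fire (return 1) only if the weighted sum of inputs reaches the threshold."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum >= threshold else 0

# Example: two excitatory inputs (+1 weights), one inhibitory input (-1 weight),
# and a firing threshold of 2.
print(linear_threshold_neuron([1, 1, 0], [1, 1, -1], 2))  # -> 1 (fires)
print(linear_threshold_neuron([1, 1, 1], [1, 1, -1], 2))  # -> 0 (inhibited)
```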

To build the DNA neural network, the researchers used a process called a strand-displacement cascade. Previously, the team developed this technique to create the largest and most complex DNA circuit yet, one that computes square roots.

This method uses single-stranded and partially double-stranded DNA molecules. The latter are double helices, one strand of which sticks out like a tail. While floating around in a water solution, a single strand can run into a partially double-stranded one, and if their bases (the letters in the DNA sequence) are complementary, the single strand will grab the double strand’s tail and bind, kicking off the other strand of the double helix. The single strand thus acts as an input while the displaced strand acts as an output, which can then interact with other molecules.
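The sketch below caricatures this displacement step in Python. The sequences, the gate structure, and the all-or-nothing matching rule are invented for illustration; real strand-displacement cascades rely on careful sequence design and toehold energetics, not an exact-match test.

```python
# A toy model of toehold-mediated strand displacement, the reaction described
# above. Sequences and matching logic are simplified for illustration only.

COMPLEMENT = str.maketrans("ATCG", "TAGC")

def reverse_complement(seq):
    """Watson-Crick pairing partner of a strand, read in binding orientation."""
    return seq.translate(COMPLEMENT)[::-1]

def displace(input_strand, gate):
    """
    gate: a partially double-stranded complex with an exposed single-stranded
    'toehold' tail and a bound 'output' strand covering the 'covered' region.
    If the input pairs with toehold + covered region, it grabs the toehold,
    migrates along the helix, and releases the output strand.
    """
    target = gate["toehold"] + gate["covered"]
    if input_strand == reverse_complement(target):
        return gate["output"]   # released strand, free to act as the next input
    return None                 # no reaction; the gate stays intact

gate = {"toehold": "ATTG", "covered": "CCGA", "output": "OUT-1"}
print(displace(reverse_complement("ATTGCCGA"), gate))  # -> OUT-1
print(displace("AAAAAAAA", gate))                      # -> None
```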

Because they can synthesize DNA strands with whatever base sequences they want, the researchers can program these interactions to behave like a network of model neurons. By tuning the concentrations of every DNA strand in the network, the researchers can teach it to remember the unique patterns of yes-or-no answers that belong to each of the four scientists. Unlike some artificial neural networks, which can learn directly from examples, this DNA network cannot yet learn on its own; instead, the researchers used computer simulations to determine the molecular concentration levels needed to implant memories into it.
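To illustrate what “implanting” memories by design (rather than by learning from examples) can look like, the sketch below applies a classic Hopfield-style outer-product rule to the same four hypothetical answer patterns used earlier, mapped to +1/-1. This is only an analogy for choosing weights, and by extension strand concentrations, up front; it is not the researchers’ actual design or simulation procedure.

```python
# Illustrative associative memory: weights are computed directly from the
# stored patterns, then a thresholded update completes a partial cue.

import numpy as np

# Hypothetical memories: four answer patterns, with yes=+1 and no=-1.
patterns = np.array([
    [ 1,  1, -1, -1],
    [ 1, -1,  1, -1],
    [-1,  1, -1,  1],
    [-1, -1,  1,  1],
])

# Outer-product (Hebbian) rule: each stored pattern adds its correlations.
weights = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(weights, 0)

def complete(cue, steps=5):
    """Complete a partial cue (0 marks an unanswered question) by thresholding."""
    state = cue.astype(float)
    for _ in range(steps):
        state = np.sign(weights @ state + 1e-9)  # fire at threshold zero
    return state

print(complete(np.array([1, 0, 1, 0])))  # converges toward the second pattern
```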

While this proof-of-principle experiment shows the promise of creating DNA-based networks that can—in essence—think, this neural network is limited, the researchers say. The human brain consists of 100 billion neurons, but creating a network with just 40 of these DNA-based neurons—ten times larger than the demonstrated network—would be a challenge, according to the researchers. Furthermore, the system is slow; the test-tube network took eight hours to identify each mystery scientist. The molecules are also used up—unable to detach and pair up with a different strand of DNA—after completing their task, so the game can only be played once. Perhaps in the future, a biochemical neural network could learn to improve its performance after many repeated games, or learn new memories from encountering new situations. Creating biochemical neural networks that operate inside the body—or even just inside a cell on a Petri dish—is also a long way away, since making this technology work in vivo poses an entirely different set of challenges.

Beyond technological challenges, engineering these systems could also provide indirect insight into the evolution of intelligence. “Before the brain evolved, single-celled organisms were also capable of processing information, making decisions, and acting in response to their environment,” Qian explains. The source of such complex behaviors must have been a network of molecules floating around in the cell. “Perhaps the highly evolved brain and the limited form of intelligence seen in single cells share a similar computational model that’s just programmed in different substrates.”

“Our paper can be interpreted as a simple demonstration of neural-computing principles at the molecular and intracellular levels,” Bruck adds. “One possible interpretation is that perhaps these principles are universal in biological information processing.”
