Imitating the brain to make computers more efficient

Computers can perform operations much faster than the human brain and store more information. Despite these advantages, the human brain is a more efficient computer than the most sophisticated supercomputers, by a factor of a million, according to Saptarshi Das, assistant professor of engineering science and mechanics at Penn State.

This efficiency gap is a result of differences in energy consumption, volume and complexity of computing, Das said. With a $690,000 grant from the U.S. Army Research Office (ARO), Das plans to target each of those areas to start closing that gap.

“In 10 years, nearly 20% of the world’s energy will be spent on computation,” Das said. “This project is centered on finding ways to make new devices more efficient while continuing to expand their capabilities.”

Das is developing a component to mimic the brain's computational structure. Memory and processing units are distributed throughout the brain in the form of synapses and neurons, respectively, while computers separate memory and processing into confined units. This distributed arrangement avoids the bottleneck that arises in computers when data must be shuttled between the separate memory and processing units. Das' device will combine memory and processing into one small, low-energy unit that conceptually resembles a synapse and its adjacent neurons.
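The cost of that bottleneck can be sketched with a toy energy model. The figures below are illustrative assumptions, not measurements of Das' device: they simply contrast a conventional design, which fetches every operand across a memory bus, with a combined memory-and-processing unit that computes where the data already sits.

```python
# Toy comparison of a design with separate memory and processing units
# against an in-memory design (combined unit). The per-operation and
# per-transfer energies are invented for illustration only.

OP_ENERGY_PJ = 1.0          # assumed energy of one arithmetic operation (pJ)
TRANSFER_ENERGY_PJ = 100.0  # assumed energy to move one operand over the bus (pJ)

def separate_units_energy(num_ops: int, operands_per_op: int = 2) -> float:
    """Every operand crosses the memory bus before it can be used."""
    return num_ops * (OP_ENERGY_PJ + operands_per_op * TRANSFER_ENERGY_PJ)

def combined_unit_energy(num_ops: int) -> float:
    """Memory and processing share one unit, so no bus transfers occur."""
    return num_ops * OP_ENERGY_PJ

ops = 1_000_000
print(f"separate units: {separate_units_energy(ops) / 1e6:.0f} microjoules")
print(f"combined unit:  {combined_unit_energy(ops) / 1e6:.0f} microjoules")
```

Under these made-up numbers, moving the data costs two orders of magnitude more than computing on it, and that transfer cost is exactly what a combined unit eliminates.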

Probabilistic neural networks (PNNs) can also reduce energy consumption while increasing the capacity of computers to handle complex problems, Das said. Also modeled on the cognition of the human brain, PNNs draw conclusions from approximations where most computers would work toward a single, definite answer. This is the equivalent of how a person can identify a plant as likely a shrub, with some degree of certainty, while still recognizing that it might actually be a small tree. Das plans to implement this method, which allows for some ambivalence, in a computer.
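As a concrete sketch of that kind of reasoning, the example below implements a classical probabilistic neural network in the sense of Specht: each class is modeled by a Gaussian kernel density over stored examples, and the output is a probability for each class rather than a hard verdict. The plant measurements, features and smoothing parameter are all invented for illustration and are not drawn from Das' project.

```python
import numpy as np

# Minimal probabilistic neural network (PNN): each class is represented by
# a Parzen-window density over its training examples, and the network
# returns normalized class probabilities instead of a single hard label.

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Return the classes and P(class | x) via Gaussian kernel densities."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        members = X_train[y_train == c]
        # Gaussian kernel centered on each stored example of class c
        sq_dist = np.sum((members - x) ** 2, axis=1)
        scores.append(np.mean(np.exp(-sq_dist / (2 * sigma ** 2))))
    scores = np.array(scores)
    return classes, scores / scores.sum()  # normalize into probabilities

# Two invented features per plant: height (m) and canopy width (m).
X = np.array([[0.8, 1.0], [1.2, 1.5], [1.0, 1.2],   # shrubs
              [3.0, 2.0], [4.0, 2.5], [3.5, 2.2]])  # small trees
y = np.array(["shrub", "shrub", "shrub", "tree", "tree", "tree"])

labels, probs = pnn_predict(X, y, np.array([1.8, 1.6]))
for label, p in zip(labels, probs):
    print(f"P({label}) = {p:.2f}")  # e.g. likely a shrub, but possibly a tree
```

For the ambiguous plant above, the network reports roughly a 95% chance of "shrub" while retaining a small probability of "tree", rather than committing to an absolute answer.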

Das also plans to experiment with different materials to reduce transistor size, leading to more efficient use of chip area. Silicon, conventionally used for transistors, can only be scaled down to a certain point because it loses its electrical properties when made too thin. Das is investigating materials that can be made atomically thin, such as molybdenum disulfide, as silicon alternatives for small electronic components. Smaller transistors reduce both the energy consumed and the heat released during an operation, Das said, meaning a computer would need to dedicate fewer resources and less space to cooling.
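A back-of-the-envelope way to see the energy argument: the dynamic energy dissipated each time a transistor switches scales roughly as the device capacitance times the square of the supply voltage. The capacitance and voltage values below are illustrative assumptions, not figures from Das' molybdenum disulfide work.

```python
# Rough sketch of why smaller transistors dissipate less: the dynamic
# energy per full charge/discharge cycle is approximately E = C * V^2.
# The capacitance and voltage values are invented for illustration only.

def switching_energy(capacitance_f: float, voltage_v: float) -> float:
    """Dynamic energy (joules) dissipated per full charge/discharge cycle."""
    return capacitance_f * voltage_v ** 2

larger = switching_energy(capacitance_f=1e-15, voltage_v=1.0)     # bigger device
smaller = switching_energy(capacitance_f=0.25e-15, voltage_v=0.7) # scaled device

print(f"larger transistor:  {larger * 1e18:.0f} aJ per switch")
print(f"smaller transistor: {smaller * 1e18:.0f} aJ per switch")
print(f"energy saved: {(1 - smaller / larger) * 100:.0f}%")
```

Because both capacitance and operating voltage fall as devices shrink, the per-switch energy, and with it the heat to be removed, drops sharply.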

With their smaller size, improved energy efficiency and the problem-solving capacity offered by PNNs, the devices in development could have many applications, including speech recognition, medical diagnostics and stock trend prediction, as well as any other area requiring significant data processing, according to Das.

“A lot of computing is happening right now as more and more smart devices are made,” Das said. “We hope to address the fundamental problem of minimizing the required energy for computation to meet our massive energy needs in the future.”

Das is collaborating on the project with the Laboratory for Physical Sciences, Penn State’s 2D Crystal Consortium and materials science and engineering faculty in the College of Earth and Mineral Sciences at Penn State. He is also working with the ARO Computing Science Division.

“Reducing computational complexity will have a direct impact on Army applications such as robotics, speech and face recognition and data classification,” said Michael Coyle, ARO program manager. “Higher energy-efficiency computing will lead to longer battery life for mobile systems and ultra-low power processing for embedded systems.”

