New Algorithm Lets Neural Networks Learn Continuously Without Forgetting

Caltech researchers have developed an innovative algorithm that allows neural networks to be updated with new information without losing previously acquired knowledge, mimicking the flexibility of biological brains.


Summary: A novel functionally invariant path (FIP) algorithm created at Caltech enables neural networks to continuously learn new tasks without starting from scratch, potentially revolutionizing applications from online recommendations to self-driving cars.

Estimated reading time: 5 minutes


In a significant breakthrough for artificial intelligence, researchers at the California Institute of Technology (Caltech) have developed an algorithm that enables neural networks to learn continuously without experiencing “catastrophic forgetting.” This new approach, detailed in a study published in Nature Machine Intelligence on October 3, could dramatically improve the adaptability and efficiency of AI systems across various applications.

Overcoming the Forgetting Problem

Neural networks, the backbone of many AI systems, have long struggled with a limitation known as “catastrophic forgetting.” This occurs when a network, trained to perform a specific task, loses its ability to perform that task when trained on a new one. For example, a neural network trained to identify handwritten digits might lose this capability if subsequently trained to recognize faces.
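To see the effect concretely, here is a minimal sketch, assuming PyTorch, that trains one small network on two synthetic tasks in sequence. The tasks and architecture are hypothetical, chosen only for illustration: after the network trains on Task B, its error on Task A climbs sharply even though Task A's data never changed.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two synthetic regression tasks with conflicting input-output rules.
x_a = torch.randn(256, 10)
y_a = x_a.sum(dim=1, keepdim=True)       # Task A: predict the sum of inputs
x_b = torch.randn(256, 10)
y_b = -x_b.sum(dim=1, keepdim=True)      # Task B: predict the opposite

model = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

def train(x, y, steps=500):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

train(x_a, y_a)                                                 # learn Task A
print("Task A loss after A:", loss_fn(model(x_a), y_a).item())
train(x_b, y_b)                                                 # then naively learn Task B
print("Task A loss after B:", loss_fn(model(x_a), y_a).item())  # much higher
```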

This limitation has posed significant challenges in developing more flexible AI systems, particularly in fields requiring continuous adaptation, such as autonomous vehicles or personalized recommendation systems.

Inspiration from Biological Brains

The Caltech team drew inspiration from the remarkable adaptability of biological brains. The researchers were particularly influenced by neuroscience research at Caltech, including studies on how birds can rewire their brains to relearn singing after brain injuries.

“This was a yearslong project that started with the basic science of how brains flexibly learn,” says Matt Thomson, assistant professor of computational biology and a Heritage Medical Research Institute (HMRI) Investigator. “How do we give this capability to artificial neural networks?”

The Functionally Invariant Path (FIP) Algorithm

The team’s solution, named the functionally invariant path (FIP) algorithm, draws on a branch of mathematics called differential geometry. The approach lets a neural network be modified and updated with new information without overwriting previously encoded knowledge; a toy sketch of the general idea appears after the list below.

Key features of the FIP algorithm include:

  1. Continuous learning: Networks can learn new tasks without forgetting old ones.
  2. Flexibility: The algorithm can be applied to various types of neural networks.
  3. Efficiency: It eliminates the need to retrain networks from scratch for new tasks.
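The paper's technical machinery is beyond the scope of this article, but the flavor of a "functionally invariant" update can be shown with a small gradient-projection sketch: restrict each weight update for the new task to directions that, to first order, leave the network's outputs on old data unchanged. This is an illustration of the general idea only, not the published FIP algorithm; the data, architecture, and step size here are all hypothetical, and it assumes PyTorch.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 1))
params = list(model.parameters())

x_old = torch.randn(16, 4)            # data whose outputs we want to preserve
x_new = torch.randn(16, 4)            # hypothetical new-task data
y_new = torch.randn(16, 1)

def flat_grad(scalar):
    """Gradient of a scalar w.r.t. all weights, flattened into one vector."""
    grads = torch.autograd.grad(scalar, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

out_ref = model(x_old).detach().clone()   # outputs to keep (nearly) fixed

for _ in range(100):
    # Jacobian of old outputs w.r.t. weights, one row per old data point.
    out_old = model(x_old)
    J = torch.stack([flat_grad(out_old[i, 0]) for i in range(len(x_old))])

    # Ordinary gradient of the new-task loss.
    loss_new = nn.functional.mse_loss(model(x_new), y_new)
    g = flat_grad(loss_new)

    # Remove the components of g that would change the old outputs:
    # project g onto the null space of J (small ridge term for stability).
    JJt = J @ J.T + 1e-6 * torch.eye(len(J))
    g_proj = g - J.T @ torch.linalg.solve(JJt, J @ g)

    # Apply the projected update by hand.
    with torch.no_grad():
        idx = 0
        for p in params:
            n = p.numel()
            p -= 0.05 * g_proj[idx:idx + n].view_as(p)
            idx += n

print("max drift on old outputs:", (model(x_old) - out_ref).abs().max().item())
print("new-task loss:", nn.functional.mse_loss(model(x_new), y_new).item())
```

The projection makes progress on the new task somewhat slower, but the behavior on the old data stays nearly fixed; that trade-off between plasticity and stability is exactly what continual-learning methods like FIP are designed to navigate.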

Potential Applications and Impact

The implications of this research are far-reaching. The FIP algorithm could potentially improve:

  • Recommendation systems on online platforms
  • Self-driving car algorithms
  • Adaptive AI assistants
  • Personalized learning systems

These applications could become more responsive to new data and user behavior without losing their core functionality.

From Research to Real-World Application

Recognizing the potential impact of their work, the researchers have taken steps to bring this technology to market. In 2022, then-graduate student Guru Raghavan (PhD ’23) and Thomson co-founded a company called Yurts to further develop the FIP algorithm and deploy machine learning systems at scale.

Looking Ahead: Challenges and Opportunities

While the development of the FIP algorithm represents a significant step forward in machine learning, it also opens up new avenues for research. Future studies may explore:

  • The algorithm’s performance in complex, real-world scenarios
  • Potential limitations or edge cases
  • Integration with existing AI systems and infrastructures

As AI continues to play an increasingly important role in our daily lives, advancements like the FIP algorithm bring us closer to creating more adaptable, efficient, and human-like artificial intelligence systems.

However, researchers caution that there are still challenges to overcome. The computational resources required for implementing the FIP algorithm on large-scale networks may be substantial. Additionally, ensuring the ethical use of such adaptable AI systems will be crucial as they become more prevalent in decision-making processes.


Quiz: Test Your Knowledge

  1. What is the main problem that the FIP algorithm addresses in neural networks?
     a) Slow processing speed
     b) High energy consumption
     c) Catastrophic forgetting
     d) Limited data storage
  2. Which field of mathematics was used to develop the FIP algorithm?
     a) Linear algebra
     b) Differential geometry
     c) Calculus
     d) Statistics
  3. What inspired the development of the FIP algorithm?
     a) Computer simulations
     b) Quantum computing
     c) Biological brain flexibility
     d) Social network analysis

Answers: 1. c) Catastrophic forgetting, 2. b) Differential geometry, 3. c) Biological brain flexibility


Glossary of Terms

  1. Neural Network: A computing system inspired by biological neural networks, used in machine learning.
  2. Catastrophic Forgetting: The tendency of artificial neural networks to completely and abruptly forget previously learned information upon learning new information.
  3. Functionally Invariant Path (FIP) Algorithm: The new algorithm developed by Caltech researchers to enable continuous learning in neural networks.
  4. Differential Geometry: A mathematical discipline that uses techniques of differential calculus, integral calculus, linear algebra and multilinear algebra to study problems in geometry.
  5. Machine Learning: A subset of artificial intelligence that focuses on the development of algorithms that can learn from and make decisions based on data.
  6. Artificial Intelligence (AI): The simulation of human intelligence processes by machines, especially computer systems.

Enjoy this story? Get our newsletter! https://scienceblog.substack.com/
