Overcoming 'catastrophic forgetting': Algorithm inspired by brain allows neural networks to retain knowledge

Neural networks have a remarkable ability to learn specific tasks, such as identifying handwritten digits. However, these models often experience "catastrophic forgetting" when taught additional tasks: They can successfully learn the new assignments, but "forget" how to complete the original. For many artificial neural networks, like those that guide self-driving cars, learning additional tasks thus requires being fully reprogrammed.
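The effect described above can be seen even in a toy model. The sketch below is a hypothetical illustration (not the paper's method): a one-parameter linear model is trained by gradient descent on task A, then naively retrained on task B, after which its error on task A balloons.

```python
# Toy illustration of catastrophic forgetting (hypothetical example,
# not the FIP algorithm): fit y = w * x by gradient descent on task A,
# then sequentially on task B, and measure how task A performance degrades.

def train(w, data, lr=0.1, steps=200):
    """Plain gradient descent on squared error over (x, y) pairs."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data)

task_a = [(1.0, 2.0)]  # task A is solved exactly by w = 2
task_b = [(1.0, 5.0)]  # task B is solved exactly by w = 5

w = 0.0
w = train(w, task_a)
loss_a_before = loss(w, task_a)  # near zero: task A learned

w = train(w, task_b)             # naive sequential training on task B
loss_a_after = loss(w, task_a)   # large: task A has been "forgotten"

print(round(loss_a_before, 4), round(loss_a_after, 4))  # → 0.0 9.0
```

Because the single weight is dragged from the task A optimum (w = 2) to the task B optimum (w = 5), the model ends up unable to do both; continual-learning methods like FIP aim to find parameter updates that accommodate new tasks without destroying old ones.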

Biological brains, on the other hand, are remarkably flexible. Humans and animals can easily learn how to play a new game, for instance, without having to re-learn how to walk and talk.

Inspired by the flexibility of human and animal brains, Caltech researchers have now developed a new type of algorithm that enables neural networks to be continuously updated with new data, learning from it without having to start from scratch. The algorithm, called the functionally invariant path (FIP) algorithm, has wide-ranging applications, from improving recommendations on online stores to fine-tuning self-driving cars.

The algorithm was developed in the laboratory of Matt Thomson, assistant professor of computational biology and a Heritage Medical Research Institute (HMRI) Investigator. The research is described in a new study appearing in the journal Nature Machine Intelligence.

Thomson and former graduate student Guru Raghavan, Ph.D., were inspired by neuroscience research at Caltech, particularly in the laboratory of Carlos Lois, Research Professor of Biology. Lois studies how birds can rewire their brains to learn how to sing again after a brain injury. Humans can do this too; people who have experienced brain damage from a stroke, for instance, can often forge new neural connections to relearn everyday functions.
