SeriesFusion
Science, curated & edited by AI

Electrical currents moving through a simple network of resistors can suffer from the same catastrophic forgetting that plagues advanced AI systems.

Machine learning models often lose old information when they learn something new, a problem usually confined to abstract digital code. This study shows that the same phenomenon occurs in physical circuits: when the network is trained on a new task, its high-current pathways reconfigure, and electricity essentially carves a new path through the hardware that erases the previous state. Observing this failure in a simple network of resistors provides a physical map of how information is lost, and it suggests that forgetting may be a fundamental property of how adaptive systems respond to new data.
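The mechanism can be sketched with a toy example (not the paper's actual model): a hypothetical two-resistor voltage divider whose conductances g1 and g2 are trained by gradient descent. By Kirchhoff's laws the output settles at V_out = V_in · g1 / (g1 + g2). Training on one target, then on a conflicting target for the same input, overwrites the conductances that encoded the first task:

```python
# Toy sketch of catastrophic forgetting in a "resistor network":
# a two-conductance voltage divider, V_out = V_in * g1 / (g1 + g2).
# All names and targets here are illustrative, not from the paper.

def forward(g1, g2, v_in):
    # Physical equilibrium of the divider (Kirchhoff's current law).
    return v_in * g1 / (g1 + g2)

def train(g1, g2, v_in, target, lr=0.5, steps=500):
    for _ in range(steps):
        err = forward(g1, g2, v_in) - target
        denom = (g1 + g2) ** 2
        # Analytic gradients of the squared error w.r.t. each conductance.
        g1 -= lr * 2 * err * (v_in * g2 / denom)
        g2 -= lr * 2 * err * (-v_in * g1 / denom)
        # Conductances must stay physical (positive).
        g1, g2 = max(g1, 1e-6), max(g2, 1e-6)
    return g1, g2

g1, g2 = 1.0, 1.0
g1, g2 = train(g1, g2, v_in=1.0, target=0.8)          # Task A
err_A_before = abs(forward(g1, g2, 1.0) - 0.8)
g1, g2 = train(g1, g2, v_in=1.0, target=0.2)          # Task B, conflicting
err_A_after = abs(forward(g1, g2, 1.0) - 0.8)
print(err_A_before, err_A_after)                      # Task A error grows
```

Because both tasks share the same input, the second round of training drags the conductances away from the configuration that solved task A; the error on task A climbs back up, a minimal analogue of the forgetting the paper studies.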

Original Paper

Sequential Learning and Catastrophic Forgetting in Differentiable Resistor Networks

Maniru Ibrahim

arXiv  ·  2605.01383

Differentiable physical networks provide a simple setting in which learning can be studied through the interaction between trainable parameters and physical equilibrium constraints. We investigate sequential learning in differentiable resistor networks governed by Kirchhoff's laws. Although individual input–output mappings can be learned by gradient-based adjustment of edge conductances, sequential training on conflicting tasks produces catastrophic forgetting. We show that forgetting is contro