Latent Topology Networks

A continual learning framework that addresses catastrophic forgetting by discovering and permanently committing sparse network structure during training.

Continual Learning · Catastrophic Forgetting · Neural Architecture · Machine Learning

Every neural network treats every connection as equally available for change, all the time. Train it on one task, then train it on another, and the second task quietly degrades the first. This is catastrophic forgetting, and it is why most deployed AI systems are trained once, frozen, and expensively retrained when the world changes enough to matter.

Latent Topology Networks (LTN) import a principle from synaptic plasticity: individual connections can be structurally stabilized. A connection that has earned its place in a learned representation does not automatically get overwritten when the network learns something new.

The mechanism: connections start silent — present but contributing nothing. They activate when two conditions simultaneously hold: the neurons they connect must be genuinely co-active on the current input (a local Hebbian signal), and a global novelty signal must indicate that the input represents something the network has not reliably encoded before. Neither condition alone is sufficient. Once a connection accumulates enough joint evidence, it permanently commits. Its structure is sealed. Gradient descent continues adjusting its numeric value, but the connection itself cannot be removed or repurposed.
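The gating-and-commit rule above can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the names (`evidence`, `commit_threshold`, `coactivity_threshold`) and the specific thresholding are assumptions made for the example; only the structure mirrors the description, with evidence accumulating solely when the local Hebbian signal and the global novelty signal hold together, and a mask that, once set, is never cleared.

```python
import numpy as np

class LTNLayer:
    """Toy sketch of the commit rule described above (assumed names/thresholds).

    Connections start silent (mask = 0) and contribute nothing. Joint
    evidence accumulates only when local co-activity AND global novelty
    both hold; past a threshold, the connection commits permanently.
    """

    def __init__(self, n_in, n_out, commit_threshold=3.0,
                 coactivity_threshold=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.1, size=(n_in, n_out))  # numeric values, always trainable
        self.mask = np.zeros((n_in, n_out), dtype=bool)    # committed structure
        self.evidence = np.zeros((n_in, n_out))            # accumulated joint evidence
        self.commit_threshold = commit_threshold
        self.coactivity_threshold = coactivity_threshold

    def forward(self, x):
        # Silent connections carry no signal: only committed ones pass activity.
        return x @ (self.w * self.mask)

    def update_structure(self, pre, post, novelty):
        """pre/post: activations of the connected neurons; novelty in [0, 1]."""
        hebbian = np.outer(pre, post)  # local co-activity signal
        # Neither condition alone is sufficient: both must hold to gate evidence.
        gated = (hebbian > self.coactivity_threshold) & (novelty > 0.5)
        self.evidence += gated * hebbian * novelty
        newly_committed = (~self.mask) & (self.evidence >= self.commit_threshold)
        self.mask |= newly_committed   # permanent: the mask is never cleared
        return int(newly_committed.sum())
```

A short usage run shows both halves of the gate: a familiar input (low novelty) commits nothing regardless of co-activity, while a novel, co-active input accumulates evidence until the connections seal.

```python
layer = LTNLayer(2, 2)
pre, post = np.array([1.0, 0.0]), np.array([1.0, 1.0])
layer.update_structure(pre, post, novelty=0.0)   # co-active but familiar: no commits
for _ in range(3):
    layer.update_structure(pre, post, novelty=1.0)  # co-active and novel: commits
```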

Results show 95.91% accuracy on Permuted-MNIST (5 tasks) and a +5.8 point improvement over EWC (Elastic Weight Consolidation) on Split-CIFAR-100 (20 tasks), with the advantage growing as task count increases: structural protection compounds where parameter-only methods degrade.
