A synthesizable RTL implementation of Predictive Coding enables fully distributed learning directly in hardware, without backpropagation.
March 20, 2026
Original Paper
A Synthesizable RTL Implementation of Predictive Coding Networks
arXiv · 2603.18066
The Takeaway
This moves AI training away from centralized memory and global error propagation (backprop) toward local layer-to-layer updates in a digital circuit. It provides a blueprint for edge devices that can learn online and in real time, without the heavy computational and architectural overhead of standard deep learning pipelines.
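The core idea of local layer-to-layer updates can be sketched in software. The following is an illustrative NumPy sketch of a two-layer discrete-time predictive coding loop, not the paper's RTL design: the layer sizes, linear (identity) activation, step sizes, and update schedule are all assumptions made for brevity. What it shows is the key property the hardware exploits — each state and weight update depends only on the prediction error of an adjacent layer, with no global error signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a small latent layer predicting a clamped input layer.
n_top, n_bottom = 4, 8
W = rng.normal(scale=0.1, size=(n_bottom, n_top))  # top-down prediction weights

x_bottom = rng.normal(size=n_bottom)  # clamped sensory input
x_top = np.zeros(n_top)               # latent state, relaxed by inference

dt, eta = 0.1, 0.05  # assumed state and weight step sizes
for step in range(200):
    pred = W @ x_top          # top-down prediction of the layer below
    err = x_bottom - pred     # local prediction error at the bottom layer
    # State update: the latent layer sees only its adjacent error signal.
    x_top += dt * (W.T @ err)
    # Local, Hebbian-like weight update: post-synaptic error times
    # pre-synaptic state, with no propagation beyond adjacent layers.
    W += eta * np.outer(err, x_top)

print(float(np.linalg.norm(x_bottom - W @ x_top)))
```

Because every term in both updates is available at the interface between the two layers, each layer pair can be mapped to an independent hardware block that runs concurrently — the property the paper's digital architecture builds on.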
From the abstract
Backpropagation has enabled modern deep learning but is difficult to realize as an online, fully distributed hardware learning system due to global error propagation, phase separation, and heavy reliance on centralized memory. Predictive coding offers an alternative in which inference and learning arise from local prediction-error dynamics between adjacent layers. This paper presents a digital architecture that implements a discrete-time predictive coding update directly in hardware. Each neural