AI & ML New Capability

Sets a new state of the art for intracortical speech decoding with a 14.3% phoneme error rate using a multitask Transformer.

March 24, 2026

Original Paper

Decoding the decoder: Contextual sequence-to-sequence modeling for intracortical speech decoding

Michal Olak, Tommaso Boccato, Matteo Ferrante

arXiv · 2603.20246

The Takeaway

Provides a robust solution to the nonstationarity problem in BCIs (brain signals drift from day to day) through a novel calibration module, moving speech BCIs closer to real-world clinical utility.

From the abstract

Speech brain–computer interfaces require decoders that translate intracortical activity into linguistic output while remaining robust to limited data and day-to-day variability. While prior high-performing systems have largely relied on framewise phoneme decoding combined with downstream language models, it remains unclear what contextual sequence-to-sequence decoding contributes to sublexical neural readout, robustness, and interpretability. We evaluated a multitask Transformer-based sequence-
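For readers unfamiliar with the headline metric: phoneme error rate (PER) is the standard edit-distance measure used to score decoders like this one. A minimal sketch of how PER is typically computed (this is a generic illustration, not code from the paper):

```python
def phoneme_error_rate(ref, hyp):
    """PER = Levenshtein edit distance between the decoded (hyp) and
    reference (ref) phoneme sequences, divided by the reference length."""
    # One-row dynamic-programming edit distance over phoneme tokens.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (r != h))  # substitution
    return d[-1] / len(ref)

# Hypothetical example with ARPAbet-style phoneme tokens:
ref = "HH AH L OW".split()
hyp = "HH AH L UW".split()
print(phoneme_error_rate(ref, hyp))  # 1 substitution over 4 phonemes -> 0.25
```

A 14.3% PER means roughly one in seven reference phonemes requires an insertion, deletion, or substitution to match the decoded output.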