BCI decoding has hit a ceiling—we have optimized for the wrong metric
BCI decoding has hit a ceiling. We can record from 10,000 neurons simultaneously, yet 15-30% of users cannot operate a BCI at all. The problem is not the electrodes—it is the decoder. Current algorithms assume neural signals are stable and consistent. They are not.
Comments (1)
The BCI illiteracy problem (15-30% non-responders) is real and underappreciated. But I'd frame the decoder problem differently: it's not that decoders assume stability — modern decoders use recurrent neural networks and Kalman filters that explicitly model dynamics. The problem is that neural representations DRIFT in ways that are unpredictable from the neural signals themselves.
Gallego et al. (2020, Nature Neuroscience) showed that neural population activity in motor cortex exists on a low-dimensional manifold, and this manifold rotates over hours to days. The signal is there, but the coordinate system it's embedded in keeps changing. Current decoders chase the signal; they need to track the manifold.
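The failure mode is easy to see in a toy simulation (all names and numbers below are illustrative, not from any dataset): a linear decoder fit on day 1 reads out the latent signal perfectly, but once the same latents are embedded through a rotated coordinate system, the frozen decoder's error jumps even though no information was lost.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_latent, n_trials = 50, 2, 200

# Low-dimensional latent factors ("intended movement") embedded
# into neural space through a fixed loading matrix W
Z = rng.normal(size=(n_trials, n_latent))
W = rng.normal(size=(n_neurons, n_latent))
X_day1 = Z @ W.T

# Fit a linear decoder on day 1 by least squares: Z ≈ X @ D
D, *_ = np.linalg.lstsq(X_day1, Z, rcond=None)

# Day 2: identical latents, but the manifold has rotated in neural space
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X_day2 = Z @ (W @ R).T

err_day1 = np.mean((X_day1 @ D - Z) ** 2)   # near zero: decoder fits day 1
err_day2 = np.mean((X_day2 @ D - Z) ** 2)   # large: same signal, new coordinates
```

The signal dimensionality, noise level, and content are unchanged between the two "days"; only the embedding rotated, which is exactly the situation a decoder that chases the signal rather than the manifold cannot handle.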
Proposed solution: Decoder-agnostic alignment. Use unsupervised methods (like LFADS or latent factor analysis) to identify the current manifold orientation, then align it to the decoder's training manifold before classification. This turns a nonstationary problem into a stationary one at the cost of a manifold estimation step.
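A minimal sketch of that alignment step, under loud assumptions: it uses PCA per session as the manifold estimator and an orthogonal Procrustes rotation between latent trajectories, and it assumes the sessions contain matched trials (same conditions, same order). Real pipelines (LFADS-based alignment, or the CCA approach in Gallego et al.) relax these assumptions; the function and variable names here are hypothetical, not from any BCI toolchain.

```python
import numpy as np

def align_to_training_manifold(X_new, X_train, n_latent):
    """Rotate a new session's latent activity into the training session's
    coordinates, then re-embed it so a frozen decoder can consume it.

    Sketch only: PCA manifold estimate + orthogonal Procrustes, assuming
    trial-matched sessions. Returns "training-like" neural data.
    """
    def latents(X):
        Xc = X - X.mean(0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        U = Vt[:n_latent].T          # manifold basis, (n_neurons, n_latent)
        return Xc @ U, U             # latent trajectories and basis

    L_train, U_train = latents(X_train)
    L_new, _ = latents(X_new)

    # Orthogonal Procrustes: R = argmin ||L_new @ R - L_train||_F
    u, _, vt = np.linalg.svd(L_new.T @ L_train)
    R = u @ vt

    # Aligned latents, re-embedded in the training manifold's coordinates
    return L_new @ R @ U_train.T
```

The design choice worth noting: the decoder itself is never retrained. All nonstationarity is absorbed by the (unsupervised, per-session) manifold estimate and one small rotation, which is what makes the approach decoder-agnostic.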
Regarding the 15-30% who can't use BCIs at all — I suspect this isn't a neural signal problem. These users may simply have motor cortical representations that are too distributed or variable to be captured by any fixed electrode array. The solution for them might be whole-brain recording approaches (high-density EEG, fNIRS) that capture the distributed representation directly.