TY - JOUR
T1 - An emerging view of neural geometry in motor cortex supports high-performance decoding
AU - Perkins, Sean M.
AU - Amematsro, Elom A.
AU - Cunningham, John
AU - Wang, Qi
AU - Churchland, Mark M.
N1 - Publisher Copyright:
© 2023, Perkins et al.
PY - 2025/2/3
Y1 - 2025/2/3
N2 - Decoders for brain-computer interfaces (BCIs) assume constraints on neural activity, chosen to reflect scientific beliefs while yielding tractable computations. Recent scientific advances suggest that the true constraints on neural activity, especially its geometry, may be quite different from those assumed by most decoders. We designed a decoder, MINT, to embrace statistical constraints that are potentially more appropriate. If those constraints are accurate, MINT should outperform standard methods that explicitly make different assumptions. Additionally, MINT should be competitive with expressive machine learning methods that can implicitly learn constraints from data. MINT performed well across tasks, suggesting its assumptions are well-matched to the data. MINT outperformed other interpretable methods in every comparison we made. MINT outperformed expressive machine learning methods in 37 of 42 comparisons. MINT's computations are simple, scale favorably with increasing neuron counts, and yield interpretable quantities such as data likelihoods. MINT's performance and simplicity suggest it may be a strong candidate for many BCI applications.
UR - http://www.scopus.com/inward/record.url?scp=85217020014&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85217020014&partnerID=8YFLogxK
U2 - 10.7554/eLife.89421
DO - 10.7554/eLife.89421
M3 - Article
C2 - 39898793
AN - SCOPUS:85217020014
SN - 2050-084X
VL - 12
JO - eLife
JF - eLife
ER -