Bimonthly Since 1986
ISSN 1004-9037
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
July-September 2023, Volume 38, Issue 4
Abstract
The brain is a highly complex recurrent neural network, and Reinforcement Learning (RL) theory has become one approach applied in studies of brain-machine interfaces (BMIs). We design experiment-based neural models to enable such an information processing system. In the present study we propose a well-organized learning technique, attention-gated (AG) reinforcement learning, which uses a three-layer neural network to infer the neuronal state at each step of action selection. The three models examined in this study receive the same neural firing inputs and share similar nonlinear network structures, but differ in their policies for updating weights and selecting actions. The TARs of the three models demonstrate that AG reinforcement learning achieves higher TAR values than Q-greedy and Q-softmax. When the decoders begin to track fresh, non-stationary neural data each day, performance improves after adaptation on one data segment. AG reinforcement learning retains its decoding ability, maintaining performance on non-stationary neural activity over several days of recording. The RL-based BMI architecture is thus an effective reinforcement learning method for designing adaptive neural decoders in a sophisticated state space, accelerating learning and reliably improving performance in complex neural control tasks.
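The two baseline policies named in the abstract, Q-greedy and Q-softmax, are standard action-selection rules over a decoder's Q-values. The following is a minimal illustrative sketch of those two rules (function names, the epsilon/temperature parameters, and the example Q-values are our own assumptions, not details from the paper):

```python
import numpy as np

def greedy_action(q_values, epsilon=0.1, rng=None):
    """Epsilon-greedy selection: exploit the argmax Q-value,
    exploring a uniformly random action with probability epsilon."""
    rng = rng or np.random.default_rng(0)
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def softmax_action(q_values, temperature=1.0, rng=None):
    """Softmax (Boltzmann) selection: sample an action with
    probability proportional to exp(Q / temperature)."""
    rng = rng or np.random.default_rng(0)
    q = np.asarray(q_values, dtype=float)
    # Subtract the max before exponentiating for numerical stability.
    p = np.exp((q - q.max()) / temperature)
    p /= p.sum()
    return int(rng.choice(len(q), p=p))
```

At low temperature, softmax selection approaches the greedy rule; at high temperature it approaches uniform exploration. The paper's AG decoder differs from both in how the attention gate shapes the weight updates, which this sketch does not attempt to reproduce.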
Keyword
Neural control; Brain-machine interfaces (BMIs); Trajectory tracking; Attention-gated reinforcement learning