AI Predicts Movement from Brain Data

Summary: Researchers developed an AI algorithm capable of predicting mouse movement with 95% accuracy by analyzing whole-cortex functional imaging data, potentially revolutionizing brain-machine interface technology. The team’s end-to-end deep learning method requires no data preprocessing and can make accurate predictions based on just 0.17 seconds of imaging data.

Furthermore, they devised a technique to discern which parts of the data were pivotal for the prediction, offering a glimpse into the AI’s decision-making process. This advancement not only enhances our understanding of neural decoding but also paves the way for developing non-invasive, near real-time brain-machine interfaces.

Key Facts:

  1. High Prediction Accuracy: The AI model can accurately predict a mouse’s behavioral state—moving or resting—based on brain imaging data with a 95% success rate, without the need for noise removal or pre-defined regions of interest.
  2. Rapid, Individualized Predictions: The model’s ability to generate predictions from 0.17 seconds of data and its effectiveness across different mice demonstrate its potential for personalized, near real-time applications in brain-machine interfaces.
  3. Opening the AI Black Box: By identifying critical cortical regions for behavioral classification, the researchers have provided valuable insights into the data that inform the AI’s decisions, enhancing the interpretability of deep learning in neuroscience.

Source: Kobe University

An AI image recognition algorithm can predict whether a mouse is moving or not based on brain functional imaging data. The Kobe University researchers also developed a method to identify which input data is relevant, shining light into the AI black box with the potential to contribute to brain-machine interface technology.

To build brain-machine interfaces, it is necessary to understand how brain signals relate to the actions they produce. This is called “neural decoding,” and most research in this field relies on the electrical activity of brain cells, measured by electrodes implanted into the brain.

This shows a brain and a computer monitor. Credit: Neuroscience News

On the other hand, functional imaging technologies, such as fMRI or calcium imaging, can monitor the whole brain and make active brain regions visible through proxy signals. Of the two, calcium imaging is faster and offers better spatial resolution. But these data sources remain largely untapped for neural decoding efforts.

One particular obstacle is the need to preprocess the data, for example by removing noise or identifying a region of interest in advance, which makes it difficult to devise a generalized procedure for neural decoding of many different kinds of behavior.

Kobe University medical student AJIOKA Takehiro drew on the interdisciplinary expertise of the team led by neuroscientist TAKUMI Toru to tackle this issue.

“Our experience with VR-based real time imaging and motion tracking systems for mice and deep learning techniques allowed us to explore ‘end-to-end’ deep learning methods, which means that they don’t require preprocessing or pre-specified features, and thus assess cortex-wide information for neural decoding,” says Ajioka.

They combined two different deep learning algorithms, one for spatial and one for temporal patterns, applied them to whole-cortex film data of mice resting or running on a treadmill, and trained their AI model to predict from the cortex image data whether the mouse was moving or resting.
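The paper’s actual model (per the abstract below) pairs a convolutional network for spatial features with a recurrent network for temporal ones. A minimal numpy sketch of that shape is shown here; all sizes, weights, and the frame count are illustrative stand-ins, not the authors’ implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(frame, kernels):
    # Valid-mode 2D convolution of one frame with a bank of kernels, then ReLU.
    kh, kw = kernels.shape[1:]
    h, w = frame.shape
    out = np.empty((kernels.shape[0], h - kh + 1, w - kw + 1))
    for k, kern in enumerate(kernels):
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[k, i, j] = np.sum(frame[i:i + kh, j:j + kw] * kern)
    return np.maximum(out, 0.0)

def cnn_rnn_forward(frames, kernels, W_in, W_rec, w_out):
    # Spatial features per frame (CNN part), integrated over time (RNN part),
    # read out as a logistic probability: moving vs. resting.
    h = np.zeros(W_rec.shape[0])
    for frame in frames:
        feats = conv2d(frame, kernels).mean(axis=(1, 2))  # global average pool
        h = np.tanh(W_in @ feats + W_rec @ h)
    return 1.0 / (1.0 + np.exp(-w_out @ h))

# Toy input: a short clip of 5 frames of a 16x16 "cortex" image,
# standing in for the ~0.17 s window the article describes.
frames = rng.standard_normal((5, 16, 16))
kernels = rng.standard_normal((4, 3, 3)) * 0.1
W_in = rng.standard_normal((8, 4)) * 0.1
W_rec = rng.standard_normal((8, 8)) * 0.1
w_out = rng.standard_normal(8) * 0.1

p_moving = cnn_rnn_forward(frames, kernels, W_in, W_rec, w_out)
```

The design point is the division of labor: the convolutional stage summarizes each frame’s spatial activity pattern, while the recurrent stage accumulates those summaries across the short time window before a single classification is made.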

In the journal PLoS Computational Biology, the Kobe University researchers report that their model has an accuracy of 95% in predicting the true behavioral state of the animal without the need to remove noise or pre-define a region of interest.

In addition, their model made these accurate predictions based on just 0.17 seconds of data, meaning that they could achieve near real-time speeds. Also, this worked across five different individuals, showing that the model can generalize over individual differences.

The neuroscientists then went on to identify which parts of the image data were mainly responsible for the prediction by deleting portions of the data and observing the performance of the model in that state. The worse the prediction became, the more important that data was.

“This ability of our model to identify critical cortical regions for behavioral classification is particularly exciting, as it opens the lid of the ‘black box’ aspect of deep learning techniques,” explains Ajioka.

Taken together, the Kobe University team established a generalizable method to identify behavioral states from whole-cortex functional imaging data, together with a technique to reveal which portions of the data the predictions are based on. Ajioka explains why this is relevant.

“This research establishes the foundation for further developing brain-machine interfaces capable of near real-time behavior decoding using non-invasive brain imaging.”

Funding: This research was funded by the Japan Society for the Promotion of Science (grants JP16H06316, JP23H04233, JP23KK0132, JP19K16886, JP23K14673 and JP23H04138), the Japan Agency for Medical Research and Development (grant JP21wm0425011), the Japan Science and Technology Agency (grants JPMJMS2299 and JPMJMS229B), the National Center of Neurology and Psychiatry (grant 30-9), and the Takeda Science Foundation. It was conducted in collaboration with researchers from the ATR Neural Information Analysis Laboratories.

About this AI and movement research news

Author: Daniel Schenz
Source: Kobe University
Contact: Daniel Schenz – Kobe University
Image: The image is credited to Neuroscience News

Original Research: Open access.
“End-to-end deep learning approach to mouse behavior classification from cortex-wide calcium imaging” by TAKUMI Toru et al. PLOS Computational Biology


Abstract

End-to-end deep learning approach to mouse behavior classification from cortex-wide calcium imaging

Deep learning is a powerful tool for neural decoding, broadly applied to systems neuroscience and clinical studies.

Interpretable and transparent models that can explain neural decoding for intended behaviors are crucial to identifying essential features of deep learning decoders in brain activity. In this study, we examine the performance of deep learning to classify mouse behavioral states from mesoscopic cortex-wide calcium imaging data.

Our convolutional neural network (CNN)-based end-to-end decoder combined with recurrent neural network (RNN) classifies the behavioral states with high accuracy and robustness to individual differences on temporal scales of sub-seconds. Using the CNN-RNN decoder, we identify that the forelimb and hindlimb areas in the somatosensory cortex significantly contribute to behavioral classification.

Our findings imply that the end-to-end approach has the potential to be an interpretable deep learning method with unbiased visualization of critical brain regions.
