KEYWORDS: Data modeling, Neuroimaging, Animals, Performance modeling, Animal model studies, Deep learning, Education and training, Data acquisition, Brain, Machine learning
The field of systems neuroscience strives to comprehend the intricate links between brain activity and behavior, with the goal of understanding neurological disorders and enabling targeted interventions, notably feedback-based therapies such as Deep Brain Stimulation (DBS). However, challenges persist in correlating neuroimaging data with behavior, especially in freely behaving animals and across different imaging modalities. In the last decade, deep learning has emerged as a crucial tool for analyzing image data and for correlation or prediction across imaging modalities. This study investigates the application of a sophisticated deep learning architecture, specifically a pre-trained ResNet50-BiLSTM model, to predict behaviors from neuroimaging data. We acquired multicontrast neuroimaging data (fluorescence or FL, intrinsic optical signal or IOS, and laser speckle contrast or LSC) synchronized with behavioral data from five awake, healthy mice. Behavioral "syllables" or classes (i.e., running, nest building/eating, and minimally mobile) were annotated using Behavior Cloud software. Neural activity (i.e., FL channel) image sequences, each consisting of 12 image frames spanning one minute, were associated with behavioral syllables. A pre-trained ResNet50-BiLSTM architecture was used for image sequence classification; we added a fully connected layer after the BiLSTM and fine-tuned it to compute class scores. Since we had only five mice, we evaluated the model's performance and generalizability using a 5-fold cross-testing strategy, with test accuracy used to assess the reliability and robustness of predictions. Generalizability results showcase the promising performance of the ResNet50-BiLSTM model for predicting behavior from neural activity images, with test accuracy reaching 93.75%.
We intend to conduct additional research on this pipeline's generalizability by analyzing confusion matrices, ROC curves, and AUCs within a One-vs-All (OvA) framework. This initial study lays the groundwork for the exploration of multi-modal approaches (e.g. vision transformers and autoencoders) that encompass different neuroimaging modalities for accurate behavior prediction.
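The planned OvA evaluation treats each of the three behavioral classes in turn as the positive class against the rest, yielding one ROC curve and AUC per class alongside the confusion matrix. A minimal sketch with scikit-learn, using randomly generated placeholder scores (real values would come from the fine-tuned model's test folds):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.preprocessing import label_binarize

# Placeholder labels/scores for the three behavioral syllables
# (0 = running, 1 = nest building/eating, 2 = minimally mobile)
rng = np.random.default_rng(0)
classes = [0, 1, 2]
y_true = rng.integers(0, 3, size=60)
y_score = rng.random((60, 3))
y_score /= y_score.sum(axis=1, keepdims=True)  # softmax-like class scores
y_pred = y_score.argmax(axis=1)

cm = confusion_matrix(y_true, y_pred, labels=classes)

# One-vs-All: binarize labels, then score each class against the rest
y_bin = label_binarize(y_true, classes=classes)
aucs = {c: roc_auc_score(y_bin[:, c], y_score[:, c]) for c in classes}
print(cm)
print(aucs)
```

`sklearn.metrics.roc_curve` can be applied per class in the same way to plot the ROC curves themselves.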