Assisting Neuroimaging through DL

Introduction:

In recent years, neuroimaging has attracted considerable research interest, especially due to advances in functional Magnetic Resonance Imaging (fMRI). An fMRI dataset often comprises scans from many individuals, and researchers use it to study the association between a person's cognitive states and the underlying brain activity. Such data is well suited to deep learning (DL) applications: its large, structured datasets lend themselves to representation-learning methods. Generally, DL can be defined as a learning method with multiple levels of abstraction, where at each level the input data is transformed by a simple non-linear function, enabling the model to recognize complex patterns. With these higher-level representations, DL methods can associate a target variable with patterns of variation in the input data.

Moreover, DL techniques can learn these transformations directly from the data, eliminating the need for a comprehensive pre-existing understanding of the relationship between the input data and the analysis objective. Deep learning therefore appears to be a suitable approach for examining neuroimaging data, particularly when diverse brain-activity patterns are concealed within extensive, multidimensional datasets and the relationship between cognitive states and brain activity is uncertain.
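
As a toy illustration of these stacked non-linear transformations, the short Python sketch below maps a flattened fMRI volume through two levels of representation and onto a probability for a single target cognitive state. All sizes are made up, and the random weights simply stand in for parameters that training would learn.

    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1000)                  # hypothetical flattened volume of 1,000 voxels

    W1 = rng.standard_normal((256, 1000)) * 0.01   # first level of abstraction
    W2 = rng.standard_normal((64, 256)) * 0.01     # second, lower-dimensional level
    w_out = rng.standard_normal(64) * 0.01         # maps the representation to the target

    h1 = relu(W1 @ x)          # simple non-linear transformation of the raw voxels
    h2 = relu(W2 @ h1)         # higher-level representation of the same volume
    p = sigmoid(w_out @ h2)    # probability of the target cognitive state
    print(p)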

Statistical Frameworks:

1. General Linear Model

The General Linear Model (GLM) is a fundamental framework for modeling and analyzing relationships between variables; it is an extension of simple linear regression. In the context of fMRI studies, the GLM serves as a powerful tool to investigate how experimental conditions or cognitive states are associated with brain activity. It does so by expressing the observed data as a linear combination of predictor variables, with each predictor representing a different experimental condition or event.

The GLM estimates the coefficients that best fit this linear equation to the data, allowing researchers to assess the strength and significance of associations between cognitive processes and neural responses. It provides a versatile approach for conducting hypothesis testing, assessing the impact of different experimental factors, and characterizing how brain activity is modulated by specific stimuli or tasks, contributing to our understanding of cognitive and neural mechanisms.

In practice, the GLM is widely employed to perform analyses in neuroimaging experiments, such as identifying brain regions that respond to specific stimuli or conditions, assessing group-level effects, and investigating individual differences in brain activation patterns.
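
As a rough single-voxel sketch of this idea, the Python snippet below builds a small design matrix, simulates a voxel time series that responds to one of two conditions, and estimates the GLM coefficients by ordinary least squares. The shapes, noise level, and box-car regressors are made-up simplifications; a real analysis would convolve each condition regressor with a haemodynamic response function (for example with nilearn or SPM).

    import numpy as np

    T = 200                                   # number of fMRI time points (hypothetical)
    rng = np.random.default_rng(0)

    # Design matrix X: an intercept plus one box-car regressor per condition,
    # with the remaining time points treated as rest.
    block = np.arange(T) % 60
    cond_a = (block < 20).astype(float)                     # condition A blocks
    cond_b = ((block >= 20) & (block < 40)).astype(float)   # condition B blocks
    X = np.column_stack([np.ones(T), cond_a, cond_b])

    # Simulated voxel time series: responds to condition A, not to B, plus noise.
    y = 0.5 + 1.2 * cond_a + 0.0 * cond_b + rng.normal(0.0, 0.3, T)

    # GLM: express y as a linear combination of the predictors and estimate the
    # coefficients (betas) with ordinary least squares.
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    print("intercept, beta_A, beta_B:", np.round(beta, 2))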

2. Whole-Brain Least Absolute Shrinkage Logistic Regression

Whole-Brain Least Absolute Shrinkage Logistic Regression (LASSO-Logistic) is a machine learning technique employed in neuroimaging analysis. It aims to identify and characterize the neural regions or voxels across the entire brain that are most relevant for discriminating between different cognitive states, such as the presence or absence of specific mental processes or clinical conditions. LASSO-Logistic extends traditional logistic regression with a regularization term known as the L1 penalty, which encourages sparsity in the model by shrinking the coefficients of irrelevant or redundant features towards zero. The whole-brain LASSO-logistic model can be defined as follows:

ŵ = argmin over w of:  −Σ (t = 1…T) [ Yt · log σ(wᵀXt) + (1 − Yt) · log(1 − σ(wᵀXt)) ] + λ · Σ (i = 1…N) |wi|

where,
N = number of voxels in the brain,
T = number of fMRI sampling time points,
λ = strength of the L1 regularization,
σ = the logistic model (sigmoid function),
w = the vector of N voxel weights estimated by the model,
[Yt, Xt] = the class label and the vector of N voxel values of sample t.
By employing LASSO regularization, this approach automatically selects a subset of voxels that contribute most to the prediction of cognitive states while effectively ignoring or assigning negligible weights to the less informative voxels.
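
A minimal sketch of this selection behaviour, using scikit-learn's LogisticRegression with an L1 penalty on simulated data, is shown below. The data shapes are far smaller than a real whole-brain dataset, only a handful of voxels are made informative by construction, and the parameter C (scikit-learn's inverse of the λ above) is an arbitrary choice.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    T, N = 300, 5000                         # time points x voxels (toy sizes)
    rng = np.random.default_rng(0)
    X = rng.standard_normal((T, N))          # voxel values X_t for each sample

    # Make 20 randomly chosen voxels carry the signal that defines the class label Y_t.
    informative = rng.choice(N, size=20, replace=False)
    y = (X[:, informative].sum(axis=1) + rng.normal(0.0, 1.0, T) > 0).astype(int)

    # L1-penalised logistic regression; smaller C means stronger shrinkage and a
    # sparser set of voxels with non-zero weights.
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
    clf.fit(X, y)

    print("voxels with non-zero weight:", np.count_nonzero(clf.coef_), "of", N)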

3. Deep Learning Approach

The deep learning framework usually involves three computational modules: a feature extractor, an LSTM, and an output unit. Before feeding the data into the deep learning pipeline, it is important to pre-process it. Pre-processing involves a series of steps to ensure consistency and clarity across the scans. The most prominent pipeline is the Human Connectome Project (HCP) preprocessing pipeline, which includes the following steps.
  1. Gradient Unwarping: Corrects distortions caused by the magnetic field gradients in the MRI scanner.
  2. Motion Correction: Corrects for subject head motion during the scanning session.
  3. Fieldmap-Based EPI Distortion Correction: Corrects distortions in images acquired with echo-planar imaging (EPI), a fast acquisition technique commonly used in fMRI because it allows brain volumes to be collected rapidly.
  4. Brain-Boundary-Based Registration: Aligns the EPI images with a high-resolution T1-weighted structural scan to ensure precise spatial correspondence.
  5. Non-linear Registration into MNI152 Space: Aligns the data to the standard MNI152 template for group-level analysis.
  6. Grand-Mean Intensity Normalization: Normalizes the intensity of the images to ensure consistency and comparability across subjects and scans.
After pre-processing, the model takes each fMRI volume and breaks it down into a sequence of axial brain slices. These slices are then passed through a convolutional feature extractor, which is designed to capture higher-level and lower-dimensional representations of these slices. This process transforms the raw brain data into a sequence of more abstract slice representations, allowing the model to focus on the most relevant information.
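
As a sketch of what such a convolutional feature extractor could look like, the PyTorch module below encodes every axial slice of a volume into a low-dimensional embedding. The layer sizes, the embedding dimension, and the input shape (roughly the 2 mm MNI152 grid) are illustrative assumptions rather than the architecture of any particular published model.

    import torch
    import torch.nn as nn

    class SliceEncoder(nn.Module):
        """Convolutional feature extractor applied to each axial slice."""
        def __init__(self, embedding_dim: int = 128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.proj = nn.Linear(32, embedding_dim)

        def forward(self, volumes: torch.Tensor) -> torch.Tensor:
            # volumes: (batch, n_slices, height, width) -> one embedding per slice
            b, s, h, w = volumes.shape
            x = volumes.reshape(b * s, 1, h, w)            # treat each slice as an image
            features = self.conv(x).flatten(1)             # (batch * n_slices, 32)
            return self.proj(features).reshape(b, s, -1)   # (batch, n_slices, embedding_dim)

    # Hypothetical batch: 8 volumes, each split into 91 axial slices of 109 x 91 voxels.
    volumes = torch.randn(8, 91, 109, 91)
    embeddings = SliceEncoder()(volumes)
    print(embeddings.shape)   # torch.Size([8, 91, 128])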

The next step involves an LSTM (Long Short-Term Memory) module, which is a type of recurrent neural network. The LSTM is responsible for capturing and integrating spatial dependencies within and across the axial brain slices. This means it considers how brain activity evolves over time and space, taking into account the context and interactions between different slices. 

Lastly, the output unit of the model is responsible for making a decoding decision. It projects the output from the LSTM into a lower-dimensional space that represents the cognitive states in the data. In this space, the model estimates the probability of the input fMRI volume belonging to each cognitive state, providing insights into the brain's response to different cognitive conditions or states.
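
Continuing the sketch above (and reusing the imports and the SliceEncoder class defined there), a hypothetical end-to-end decoder could chain the feature extractor, an LSTM that integrates information across the slice sequence, and a linear output unit whose softmax gives a probability per cognitive state. The number of states and the hidden size are arbitrary example values.

    class FMRIDecoder(nn.Module):
        """Slice encoder -> LSTM over the slice sequence -> cognitive-state probabilities."""
        def __init__(self, n_states: int, embedding_dim: int = 128, hidden_dim: int = 64):
            super().__init__()
            self.encoder = SliceEncoder(embedding_dim)        # feature extractor from above
            self.lstm = nn.LSTM(embedding_dim, hidden_dim, batch_first=True)
            self.output = nn.Linear(hidden_dim, n_states)     # output unit

        def forward(self, volumes: torch.Tensor) -> torch.Tensor:
            slice_embeddings = self.encoder(volumes)           # (batch, n_slices, embedding_dim)
            _, (h_n, _) = self.lstm(slice_embeddings)          # integrate across slices
            logits = self.output(h_n[-1])                      # last hidden state -> states
            return torch.softmax(logits, dim=-1)               # probability per cognitive state

    decoder = FMRIDecoder(n_states=7)                          # 7 states is an arbitrary example
    probs = decoder(torch.randn(8, 91, 109, 91))
    print(probs.shape, probs.sum(dim=-1))                      # (8, 7), each row sums to 1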

Challenges preventing DL usage:

Two major problems currently hinder the full adoption of DL in neuroimaging.

1. fMRI datasets are high dimensional because they contain a large number of features (voxels) that represent brain activity at different locations. However, they typically have comparatively few samples (individual subjects), especially when considering the vast number of dimensions. For instance, a single subject's fMRI dataset can have several hundred thousand dimensions (voxels), while there may be only a few hundred subjects in a study.

In the context of DL models, this high dimensionality and low sample size can lead to a significant problem: overfitting. Overfitting occurs when a model learns to capture noise or idiosyncratic patterns specific to the training data, rather than generalizable underlying patterns. In the case of neuroimaging data, this means the model might start fitting the unique brain activity patterns of the individuals in the training dataset rather than the broader, more generalizable patterns of interest.

2. DL models, particularly deep neural networks, are often considered non-linear black-box models in the context of neuroimaging. This characterization means that these models can effectively learn complex, non-linear relationships between input data and their outputs, but they do so in a way that can be challenging to interpret, making it difficult to determine which brain regions or activity patterns drive a given decoding decision.
