Multimodal neuroimaging data boosts the prediction of multifaceted cognition
Abstract
Relating individual brain patterns to behavioural phenotypes through predictive modelling has become increasingly popular. Several recent studies have focused on the fundamental challenge of improving behavioural prediction based on individual brain patterns by integrating information from multimodal neuroimaging data. However, the benefit of multimodal integration in brain-based behaviour prediction remains debated due to inconsistent findings. This issue raises the need for a systematic and extensive evaluation. Here, we investigated the necessity and benefit of multimodal integration in 3 large datasets covering different age ranges, using 25 to 33 feature types from different imaging modalities, and 21 behavioural measures from different domains. By setting up multiple predictive models corresponding to increasing levels of multimodal integration, we demonstrated that prediction performance saturates after integrating a few types of features. In general, our analyses revealed that multifaceted cognitive scores tend to require higher levels of multimodal integration, while other predictions may depend on single feature types. In most cases, multimodal integration can remain focused on functional features, especially in young adults. However, predictions in aging can also require structural and diffusion features. Along the same line, while model-free rest and task functional connectivity may provide relevant brain phenotypes for behavioural prediction in most applications, in aging, effective connectivity appears relevant too. Thus, our study demonstrates that alternatives to model-free functional connectivity and, more generally, to functional imaging features should be considered for predictive modelling of behaviour, especially in aging populations where understanding interindividual variability remains a key challenge.
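The incremental-integration setup described above can be sketched with synthetic data. Everything here is an illustrative assumption rather than the study's actual pipeline: the feature dimensions, the closed-form ridge estimator, and the way the synthetic behavioural score is constructed (driven mostly by the first two feature types, so that predictive performance saturates after a few types are integrated).

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test = 400, 200
n = n_train + n_test

# Hypothetical feature types (e.g. functional connectivity, cortical
# thickness, diffusion metrics): one subjects-by-features matrix each.
modalities = [rng.standard_normal((n, 50)) for _ in range(4)]

# Synthetic behavioural score driven mostly by the first two feature
# types, mimicking saturation after a few types are integrated.
weights = [1.0, 0.8, 0.05, 0.05]
y = sum(w * m[:, :5].sum(axis=1) for w, m in zip(weights, modalities))
y += 0.5 * rng.standard_normal(n)

def ridge_fit_predict(X_tr, y_tr, X_te, alpha=10.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    XtX = X_tr.T @ X_tr + alpha * np.eye(X_tr.shape[1])
    w = np.linalg.solve(XtX, X_tr.T @ y_tr)
    return X_te @ w

# Increasing levels of integration: concatenate the first k feature
# types and evaluate out-of-sample prediction accuracy.
scores = []
for k in range(1, len(modalities) + 1):
    X = np.hstack(modalities[:k])
    pred = ridge_fit_predict(X[:n_train], y[:n_train], X[n_train:])
    r = np.corrcoef(pred, y[n_train:])[0, 1]
    scores.append(r)
    print(f"{k} feature type(s): r = {r:.3f}")
```

With this construction, accuracy rises clearly from one to two feature types and then plateaus, echoing the saturation effect reported in the abstract.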