Neuroimaging fusion combines brain imaging data from multiple modalities into a composite image that carries complementary information, such as structural and functional changes in the brain. Recent advances in transform-domain fusion are promising, but accurately modeling the empirical distributions of the transform coefficients and maximizing the energy retained in the fused image remain challenging. Alzheimer's disease, the most common neurodegenerative disease, demands accurate detection and classification for patient care. However, many convolutional neural network (CNN)-based methods overlook local features and pay little attention to the discriminability of the extracted features, and existing architectures often rely on a large number of parameters to enrich those features.

The objective of this thesis is twofold: first, to develop a novel statistically driven approach for fusing multimodal neuroimaging data; second, to propose a lightweight deep CNN that extracts both local and global contextual features for the classification of Alzheimer's disease.

In the first part of the thesis, a multimodal fusion algorithm is developed using the statistical properties of nonsubsampled shearlet transform coefficients together with an energy-maximization fusion rule. The Student's t probability density function is used to model the heavy-tailed, non-Gaussian statistics of the empirical coefficients, and a maximum a posteriori estimator built on this model is employed to obtain noise-free coefficients. A novel fusion rule is then proposed to obtain the fused coefficients by maximizing the energy in the high-frequency subbands.

In the second part of the thesis, a lightweight deep CNN that extracts local and global contextual features for Alzheimer's disease classification is proposed.
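The coefficient denoising and fusion steps described above can be sketched as follows. This is a minimal illustration, not the thesis's exact formulation: the fixed-point MAP iteration, the parameter values, and the 3x3 energy window are assumptions made for the example.

```python
import numpy as np

def map_student_t(y, sigma2, nu, s2, iters=20):
    """MAP shrinkage of noisy coefficients y = x + n, n ~ N(0, sigma2),
    under a Student's t prior on x (nu degrees of freedom, scale s2).
    Setting the derivative of the negative log-posterior to zero gives
    x * (1 + sigma2*(nu+1)/(nu*s2 + x^2)) = y, solved here by a simple
    fixed-point iteration (an illustrative choice)."""
    x = np.array(y, dtype=float)
    for _ in range(iters):
        x = y / (1.0 + sigma2 * (nu + 1.0) / (nu * s2 + x**2))
    return x

def local_energy(c, k=3):
    """Sum of squared coefficients over a k x k neighbourhood."""
    pad = k // 2
    cp = np.pad(c**2, pad, mode="reflect")
    e = np.zeros_like(c, dtype=float)
    for i in range(k):
        for j in range(k):
            e += cp[i:i + c.shape[0], j:j + c.shape[1]]
    return e

def fuse_energy_max(a, b, k=3):
    """Energy-maximizing selection: at each location, keep the
    coefficient from the subband with larger local energy."""
    return np.where(local_energy(a, k) >= local_energy(b, k), a, b)
```

The shrinkage pulls small (noise-dominated) coefficients toward zero while leaving large, heavy-tailed signal coefficients nearly intact, which is the qualitative behavior a Student's t prior is chosen to produce.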
The network processes local and global features separately through specialized modules that enhance the extraction of disease-relevant features. Finally, the impact of the fused images, obtained using the fusion approach of the first part, on Alzheimer's disease classification accuracy is investigated. Extensive experiments are carried out to validate the effectiveness of the various ideas and strategies proposed in this thesis for multimodal neuroimaging fusion and Alzheimer's disease classification.
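A dual-branch block of the kind described above, with one branch for local features and one for global context, might be sketched as follows. The depthwise local branch and the global-pooling reweighting branch are hypothetical design choices for illustration, not the thesis's actual modules.

```python
import torch
import torch.nn as nn

class DualBranchBlock(nn.Module):
    """Illustrative block with a local branch (depthwise + pointwise
    convolutions, keeping the parameter count small) and a global
    branch (global average pooling + 1x1 convolution) that reweights
    channels by image-wide context. Outputs fuse via a residual sum."""

    def __init__(self, ch):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, groups=ch),  # depthwise 3x3
            nn.Conv2d(ch, ch, 1),                        # pointwise mix
            nn.ReLU(inplace=True),
        )
        self.globl = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # collapse to global context vector
            nn.Conv2d(ch, ch, 1),
            nn.Sigmoid(),             # per-channel gating weights
        )

    def forward(self, x):
        # local features, gated by global context, plus a skip connection
        return self.local(x) * self.globl(x) + x

block = DualBranchBlock(16)
out = block(torch.randn(2, 16, 32, 32))
```

Separating the branches keeps local texture cues and global contextual cues explicit before fusion, which is the stated motivation for the specialized modules.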