Jabason, Emimal ORCID: https://orcid.org/0000-0002-2537-9470 (2024) Neuroimaging Fusion in Nonsubsampled Shearlet Domain by Maximizing the High-Frequency Subband Energy and Classification of Alzheimer's Disease using Local and Global Contextual CNN Features of Neuroimaging Data. PhD thesis, Concordia University.
Text (application/pdf, 35 MB): JABASON_PhD_S2024.pdf, Accepted Version. Available under License: Spectrum Terms of Access.
Abstract
Neuroimaging fusion is the process of combining brain imaging data from multiple imaging modalities into a composite image that contains complementary information, such as structural and functional changes in the brain. Recent advances in transform-domain fusion are promising, but challenges remain in accurately representing the empirical distributions of the transform coefficients and in maximizing the energy of the fused image. Alzheimer's disease, the most common neurodegenerative disease, demands accurate detection and classification for patient care. Recent convolutional neural network (CNN)-based methods often overlook local features and pay little attention to the discriminability of the extracted features for Alzheimer's disease classification. Moreover, existing architectures often rely on a large number of parameters to enhance feature richness.
This thesis has two objectives. First, a novel statistically driven approach for fusing multimodal neuroimaging data is developed. Second, a lightweight deep CNN capable of extracting both local and global contextual features for the classification of Alzheimer's disease is proposed.
In the first part of the thesis, a novel multimodal fusion algorithm is developed using the statistical properties of nonsubsampled shearlet transform coefficients and an energy-maximization fusion rule. The Student's t probability density function is used to model the heavy-tailed, non-Gaussian statistics of the empirical coefficients. This model is then employed to develop a maximum a posteriori estimator that yields noise-free coefficients. Finally, a novel fusion rule is proposed that obtains the fused coefficients by maximizing the energy in the high-frequency subbands.
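The energy-maximization idea above can be illustrated with a toy sketch: for each high-frequency coefficient, keep the coefficient from whichever modality has the larger local energy, so the fused subband retains the strongest detail from either source. The window size and the sum-of-squares energy measure here are illustrative assumptions, not the thesis's exact rule, and the sketch assumes the shearlet subbands have already been computed.

```python
import numpy as np

def fuse_highpass(c1, c2, win=3):
    """Toy energy-maximization fusion rule (sketch): for each coefficient,
    keep the one from the subband with larger local energy."""
    pad = win // 2

    def local_energy(c):
        # Sum of squared coefficients over a win x win neighbourhood,
        # with reflective padding at the borders.
        p = np.pad(c.astype(float) ** 2, pad, mode="reflect")
        e = np.zeros(c.shape, dtype=float)
        for i in range(win):
            for j in range(win):
                e += p[i:i + c.shape[0], j:j + c.shape[1]]
        return e

    # Take each coefficient from the modality with the larger local energy.
    mask = local_energy(c1) >= local_energy(c2)
    return np.where(mask, c1, c2)

# Two hypothetical high-frequency subbands from different modalities.
a = np.array([[5.0, 0.1], [4.0, 0.2]])
b = np.array([[0.1, 6.0], [0.2, 3.0]])
fused = fuse_highpass(a, b)
```

With `win=1` the rule degenerates to picking the larger-magnitude coefficient pointwise; larger windows make the selection more robust to isolated noisy coefficients, which is why neighbourhood energy (rather than a single coefficient) is the usual choice in transform-domain fusion.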
In the second part of the thesis, a novel lightweight deep CNN that extracts local and global contextual features for Alzheimer’s disease classification is proposed. The network is designed to process local and global features separately using specialized modules that enhance feature extraction relevant to the disease. Finally, the impact of fused images, obtained using the fusion approach of the first part, on the classification accuracy of Alzheimer’s disease is investigated.
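The separation of local and global pathways described above can be sketched in miniature: one branch applies a small-kernel convolution to capture fine local structure, while the other applies global average pooling to capture overall context, and the two feature sets are then combined. This is a minimal numpy illustration of the design principle only; the kernel, pooling, and feature dimensions are assumptions, not the thesis's architecture.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive valid-mode 2-D correlation (illustrative only)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def dual_branch_features(img, local_kernel):
    """Extract a tiny feature vector from separate local and global branches."""
    # Local branch: a small-kernel convolution responds to fine structure.
    local = conv2d_valid(img, local_kernel).mean()
    # Global branch: global average pooling summarizes overall context.
    global_ctx = img.mean()
    # Concatenate the two branches into one feature vector.
    return np.array([local, global_ctx])
```

Processing the two scales in separate branches, rather than one deep stack, is also how a network can stay lightweight: each branch only needs the parameters appropriate to its receptive field.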
Extensive experiments are carried out to validate the effectiveness of the various ideas and strategies proposed in this thesis for developing multimodal neuroimaging fusion and Alzheimer’s disease classification schemes.
Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Electrical and Computer Engineering
Item Type: Thesis (PhD)
Authors: Jabason, Emimal
Institution: Concordia University
Degree Name: Ph.D.
Program: Electrical and Computer Engineering
Date: 28 April 2024
Thesis Supervisor(s): Ahmad, M. Omair and Swamy, M.N.S.
Keywords: Image Fusion, Statistical Modeling, Image Classification, Convolutional Neural Network, Alzheimer's Disease
ID Code: 993846
Deposited By: Emimal Jabason
Deposited On: 05 Jun 2024 15:26
Last Modified: 05 Jun 2024 15:26