
A Computation-Efficient CNN System for High-Quality Brain Tumor Segmentation


Sun, Yanming ORCID: https://orcid.org/0000-0003-0471-8215 (2020) A Computation-Efficient CNN System for High-Quality Brain Tumor Segmentation. Masters thesis, Concordia University.

Text (application/pdf)
Sun_MASc_F2020.pdf - Accepted Version
Available under License Spectrum Terms of Access.

Video (video/mp4)
Sun_MASc_F2020_video1.mp4 - Supplemental Material
Available under License Spectrum Terms of Access.


Brain tumor diagnosis is an important issue in health care. Automated brain tumor segmentation can support timely diagnosis. It is, however, very challenging to achieve high-quality segmentation results, because the shapes, sizes, textures and locations of brain tumors vary from patient to patient. To develop a Convolutional Neural Network (CNN) system for high-quality brain tumor segmentation at the lowest computation cost, the CNN should be custom-designed to efficiently extract, from brain images, sufficient critical features particularly related to the tumors for the multi-class segmentation of tumor areas.
In this thesis, a CNN system is proposed for brain tumor segmentation. The system consists of three parts: a pre-processing block to reduce the data volume, an application-specific CNN (ASCNN) to segment tumor areas precisely, and a refinement block to detect false-positive voxels. The CNN, designed specifically for the task, has 7 convolution layers, and the number of output channels per layer is no more than 16. In the first half of the CNN, convolutions combined with max-pooling are performed to localize brain tumor areas. Two convolution modes, namely depthwise convolution and standard convolution, are performed in parallel in the first 2 layers to extract elementary features efficiently. In the second half of the CNN, convolutions combined with upsampling are performed to segment the different tumor areas. For a fine classification of pixel-wise precision, the feature maps are modulated by adding the weighted local feature maps generated in the first half of the CNN. The system has only 11716 parameters to be trained and, for a patient case of 240×240×155×3 voxels, it requires only 21.14 GFlops to complete the test. Hence, it is likely the simplest CNN system reported to date for brain tumor segmentation.
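The parameter arithmetic behind such a compact network can be illustrated with a short sketch. The channel plan below is hypothetical, chosen only to respect the stated constraints (7 convolution layers, at most 16 output channels per layer, depthwise and standard 3×3 convolutions in parallel in the first two layers); it does not reproduce the thesis's exact 11716-parameter design.

```python
# Illustrative parameter accounting for a small CNN of the kind described.
# All layer shapes below are hypothetical, not the thesis's actual configuration.

def conv_params(c_in, c_out, k=3, bias=True):
    """Parameters of a standard k x k convolution layer."""
    return c_in * c_out * k * k + (c_out if bias else 0)

def depthwise_params(c, k=3, bias=True):
    """Parameters of a k x k depthwise convolution (one filter per channel)."""
    return c * k * k + (c if bias else 0)

# Hypothetical channel plan: 4 input modalities, at most 16 channels per layer.
total = 0
total += conv_params(4, 8) + depthwise_params(4)     # layer 1: parallel branches
total += conv_params(12, 16) + depthwise_params(12)  # layer 2: parallel branches
for c_in, c_out in [(28, 16), (16, 16), (16, 16), (16, 16), (16, 4)]:
    total += conv_params(c_in, c_out)                # layers 3-7: standard convs
print(total)  # a parameter count on the order of 10^4, as in the thesis
```

Even a generous channel plan like this one stays in the low tens of thousands of parameters, which is why capping every layer at 16 output channels keeps the whole system so small compared with typical segmentation networks.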
The performance of the proposed system has been evaluated by means of the CBICA Image Processing Portal with samples from the BRATS2018 dataset. Requiring a very low computation volume, the proposed system delivers a high segmentation quality, indicated by its average Dice scores of 0.75, 0.88 and 0.76 for enhancing tumor, whole tumor and tumor core, respectively, and by its median Dice scores of 0.85, 0.92 and 0.86. Its processing quality is comparable to the best reported so far. The consistency of the system performance has also been measured, and the results demonstrate that the system reproduces almost the same output for the same input after retraining.
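The Dice score used in this evaluation is the standard overlap metric between a predicted mask and the ground truth. A minimal sketch, with masks flattened to binary lists (1 = tumor voxel):

```python
# Dice similarity coefficient for binary segmentation masks.
def dice_score(pred, truth):
    """Dice = 2|P intersect T| / (|P| + |T|) for equal-length binary masks."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    denom = sum(pred) + sum(truth)
    # Convention: two empty masks count as a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

# Toy example: 3 overlapping voxels, 4 predicted, 4 true -> 2*3/(4+4) = 0.75
print(dice_score([1, 1, 1, 1, 0, 0], [0, 1, 1, 1, 1, 0]))  # 0.75
```

A Dice score of 1.0 means the predicted and true tumor regions coincide exactly; 0.0 means they do not overlap at all.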
In conclusion, the proposed CNN system has been designed to meet the specific needs of segmenting brain tumors, or other kinds of tumors, in medical images. In this way, the redundancy in computation can be minimized, the information density in the data flow increased, and the computation efficiency and quality improved. This design demonstrates that a CNN system can be made to perform high-quality processing, at a very low computation cost, for a specific application. Hence, the ASCNN approach is effective in lowering the computation-resource barrier of CNN systems, making them more implementable and applicable for the general public.

Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Electrical and Computer Engineering
Item Type: Thesis (Masters)
Authors: Sun, Yanming
Institution: Concordia University
Degree Name: M.A.Sc.
Program: Electrical and Computer Engineering
Date: August 2020
Thesis Supervisor(s): Wang, Chunyan
Keywords: Application-specific convolutional neural network (ASCNN), brain tumor segmentation, convolutional neural network (CNN), image processing, machine learning, 2D filtering
ID Code: 987235
Deposited By: Yanming Sun
Deposited On: 25 Nov 2020 16:30
Last Modified: 25 Nov 2020 16:30




