
Ensemble Graph Neural Networks on Melanoma and Cervical Cancer Screening Datasets using SLIC Superpixels


Ashouri, Negin (2021) Ensemble Graph Neural Networks on Melanoma and Cervical Cancer Screening Datasets using SLIC Superpixels. Masters thesis, Concordia University.

Text (application/pdf)
Ashouri_MCompSc_F2021.pdf - Accepted Version
Available under License Spectrum Terms of Access.


Graph neural networks (GNNs) have become the standard approach for handling graph-structured data and data in non-Euclidean spaces. Since 2017, numerous researchers have used GNN models in their experiments. However, despite GNNs' recent rapid growth, few real-world applications yet benefit from these models. In this thesis, we use GNNs
as image classifiers. To improve efficiency and reduce model complexity, we first generate graphs from the images by creating superpixels and using them as graph nodes instead of individual pixels. Then, we define edges between nodes by connecting each superpixel to its nearest neighbours. We propose two ensemble frameworks built from a pre-trained ResNet18 and two graph neural network (GNN) models, GAT and GIN; we call these frameworks GATRes and GATGIN, respectively.
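The graph construction described above — superpixel centroids as nodes, with edges to each superpixel's closest neighbours — amounts to a k-nearest-neighbour step over the centroids. In practice the superpixels would come from SLIC (e.g. `skimage.segmentation.slic`); the helper below is an illustrative sketch (the name `knn_edges` and the choice of Euclidean distance over centroids are assumptions, not the thesis's exact pipeline) showing only the edge-building step given precomputed centroids:

```python
import numpy as np

def knn_edges(centroids, k=2):
    """Connect each superpixel to its k nearest neighbours.

    centroids: (n, 2) array of superpixel centres (row, col),
               e.g. derived from SLIC segmentation labels.
    Returns a sorted list of directed edges (source, target).
    """
    # Pairwise Euclidean distances between all centroids.
    diff = centroids[:, None, :] - centroids[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)  # exclude self-loops
    # Indices of the k closest centroids for every node.
    nearest = np.argsort(dist, axis=1)[:, :k]
    return sorted((s, int(t))
                  for s in range(len(centroids))
                  for t in nearest[s])
```

Each superpixel's feature vector (e.g. mean colour plus centroid position) would then be attached to its node before the graph is fed to the GNN.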
We test these frameworks on two real-world medical applications: cervical cancer screening and melanoma detection. Cervical cancer is among the four most common cancers in women worldwide. It can be easily prevented if caught in its pre-cancerous stage, but determining the appropriate treatment depends on patients' physiological differences: a treatment that works effectively for one woman may obscure future cancerous growth in another because of differences in cervix type. In this thesis, we experiment with multiple GNNs on this dataset to distinguish cervix types and examine whether these models enhance detection performance and accuracy.
The other problem we consider is melanoma, the most lethal skin cancer. If melanoma is diagnosed early, patients' survival rates increase significantly. This research applies GNN models to a melanoma dataset to discriminate between melanoma and benign skin lesions.
We show that our ensemble models' sensitivity and accuracy outperform those of the individual models in our classification tasks. Our GATRes model also surpasses the accuracies reported in previously published work on the Cervical Cancer Screening dataset.
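The abstract does not spell out how the ensemble members' outputs are combined. One common scheme consistent with the description is soft voting: average the class probabilities of the member models and take the argmax. A minimal numpy sketch under that assumption (the function name `ensemble_predict` and the weighting are illustrative, not the thesis's stated method):

```python
import numpy as np

def ensemble_predict(logits_a, logits_b, w=0.5):
    """Soft-voting ensemble of two classifiers.

    logits_a, logits_b: (n_samples, n_classes) raw scores from
    the two member models (e.g. a CNN and a GNN).
    w: weight given to the first model's probabilities.
    Returns the predicted class index per sample.
    """
    def softmax(z):
        # Subtract the row max for numerical stability.
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    probs = w * softmax(logits_a) + (1.0 - w) * softmax(logits_b)
    return probs.argmax(axis=1)
```

With `w=0.5` both members contribute equally; in practice the weight could be tuned on a validation split.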

Divisions:Concordia University > Gina Cody School of Engineering and Computer Science > Computer Science and Software Engineering
Item Type:Thesis (Masters)
Authors:Ashouri, Negin
Institution:Concordia University
Degree Name:M. Comp. Sc.
Program:Computer Science
Date:24 June 2021
Thesis Supervisor(s):Fevens, Thomas
Keywords:Graph Neural Networks, GNN, SLIC, superpixels, Melanoma, Cervical Cancer Screening, Ensemble Graph model, GATRes, GATGIN
ID Code:988571
Deposited By: Negin Ashouri
Deposited On:29 Nov 2021 16:26
Last Modified:29 Nov 2021 16:26

