
Enhancing Visual Interpretability in Computer-Assisted Radiological Diagnosis: Deep Learning Approaches for Chest X-Ray Analysis


Qiu, Zirui (2024) Enhancing Visual Interpretability in Computer-Assisted Radiological Diagnosis: Deep Learning Approaches for Chest X-Ray Analysis. Masters thesis, Concordia University.

Text (application/pdf): Qiu_MCompSc_F2024.pdf - Accepted Version. Available under License Spectrum Terms of Access. 11MB.

Abstract

This thesis delves into the realm of interpretability in medical image processing, focusing on deep learning's role in enhancing the transparency and understandability of automated diagnostics in chest X-ray analysis. As deep learning models become increasingly integral to medical diagnostics, the imperative for these models to be interpretable has never been more pronounced. This work is anchored in two main studies that address the challenge of interpretability from distinct yet complementary perspectives. The first study scrutinizes the effectiveness of Gradient-weighted Class Activation Mapping (Grad-CAM) across various deep learning architectures, specifically evaluating its reliability in the context of pneumothorax diagnosis in chest X-ray images. Through a systematic analysis, this research reveals how different neural network architectures and depths influence the robustness and clarity of Grad-CAM visual explanations, providing valuable insights for selecting and designing interpretable deep learning models in medical imaging. Building on the foundational understanding of interpretability, the second study introduces a novel deep learning framework that enhances the synergy between disease diagnosis and the prediction of visual saliency maps in chest X-rays. This dual-encoder, multi-task UNet architecture, augmented by a multi-stage cooperative learning strategy, offers a sophisticated approach to interpretability. By aligning the model's attention with that of clinicians, the framework not only enhances diagnostic accuracy but also provides intuitive visual explanations that resonate with clinical expertise. Together, these studies contribute to the field of medical image processing by offering innovative approaches to improve the interpretability of deep learning models. The findings underscore the potential of interpretability-enhanced models to foster trust among medical practitioners, facilitate better clinical decision-making, and pave the way for the broader acceptance and integration of AI in healthcare diagnostics. The thesis concludes by synthesizing the insights gained from both projects and outlining prospective pathways for future research to further advance the interpretability and utility of AI in medical imaging.
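For context, below is a minimal sketch of the standard Grad-CAM computation that the first study evaluates: per-channel weights are obtained by global-average-pooling the gradients of the class score with respect to a convolutional layer's feature maps, and the heatmap is the ReLU of the weighted sum of those maps. The PyTorch code, the ResNet-50 example, and the helper name grad_cam are illustrative assumptions for readers, not code taken from the thesis.

import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, image, target_layer, class_idx=None):
    """Return a normalized Grad-CAM heatmap for class_idx (or the top prediction)."""
    activations, gradients = {}, {}

    # Capture the target layer's feature maps and their gradients via hooks.
    def fwd_hook(_, __, output):
        activations["value"] = output.detach()

    def bwd_hook(_, grad_in, grad_out):
        gradients["value"] = grad_out[0].detach()

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    logits = model(image)                      # image: (1, 3, H, W)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()            # gradients of the chosen class score

    h1.remove()
    h2.remove()

    # Per-channel weights = global average pool of the gradients (alpha_k);
    # heatmap = ReLU of the weighted sum of feature maps, upsampled to input size.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)    # (1, K, 1, 1)
    cam = F.relu((weights * activations["value"]).sum(dim=1))      # (1, h, w)
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                        mode="bilinear", align_corners=False)[0, 0]
    cam -= cam.min()
    cam /= cam.max().clamp(min=1e-8)
    return cam, class_idx

# Example usage with an ImageNet-pretrained ResNet-50 (illustrative only):
# model = models.resnet50(weights="IMAGENET1K_V2").eval()
# heatmap, pred = grad_cam(model, x, model.layer4[-1])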

Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Computer Science and Software Engineering
Item Type: Thesis (Masters)
Authors: Qiu, Zirui
Institution: Concordia University
Degree Name: M. Comp. Sc.
Program: Computer Science
Date: 15 July 2024
Thesis Supervisor(s): Xiao, Yiming and Rivaz, Hassan
ID Code: 994170
Deposited By: Zirui Qiu
Deposited On: 24 Oct 2024 16:24
Last Modified: 24 Oct 2024 16:24
All items in Spectrum are protected by copyright, with all rights reserved. The use of items is governed by Spectrum's terms of access.
