Informative Machine Learning Model Explanation Techniques

Zhao, Ningsheng (2025) Informative Machine Learning Model Explanation Techniques. PhD thesis, Concordia University.

Zhao_PhD_S2025.pdf - Accepted Version (PDF, 16 MB). Available under License Spectrum Terms of Access.

Abstract

Explainable AI (XAI) is an emerging field focused on providing human-interpretable insights into complex and often black-box machine learning (ML) models. Shapley value attribution (SVA) is an increasingly popular XAI method that quantifies the contribution of each feature to a model's behavior, which can be either an individual prediction (local SVAs) or a performance metric (global SVAs). However, recent research has highlighted several limitations in existing SVA methods, leading to biased or incorrect explanations that fail to capture the true relationships between features and model behaviors. Worse still, these explanations are vulnerable to adversarial manipulation.
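To make the Shapley value attribution idea concrete, here is a minimal, self-contained sketch (not taken from the thesis) that computes exact Shapley values for a toy coalition value function. The `value_fn`, the linear toy model, and the zero baseline are illustrative assumptions; exact enumeration is exponential in the number of features, which is why practical SVA methods rely on approximations.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values for a coalition value function v(S).

    value_fn maps a frozenset of feature indices to a scalar payoff.
    Enumerates all subsets, so this is only feasible for small n.
    """
    players = range(n_features)
    phi = [0.0] * n_features
    for i in players:
        others = [j for j in players if j != i]
        for size in range(len(others) + 1):
            # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
            weight = (factorial(size) * factorial(n_features - size - 1)
                      / factorial(n_features))
            for subset in combinations(others, size):
                s = frozenset(subset)
                # Marginal contribution of feature i to coalition S
                phi[i] += weight * (value_fn(s | {i}) - value_fn(s))
    return phi

# Hypothetical toy model f(x) = 2*x0 + 3*x1 - x2, explained at x = (1, 1, 1)
# with a zero baseline: v(S) is the model output using only features in S.
weights = [2.0, 3.0, -1.0]
x = [1.0, 1.0, 1.0]
v = lambda s: sum(weights[j] * x[j] for j in s)
print(shapley_values(v, 3))
```

For an additive model like this one, the attribution of each feature reduces to its own term (here 2, 3, and -1), which is a useful sanity check: any correct Shapley implementation must recover it.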

Additionally, global SVAs, while widely used in applied studies to gain insights into underlying information systems, face challenges when applied to ML models trained on imbalanced datasets, such as those used in fraud detection or disease prediction. In these scenarios, global SVAs can yield misleading or unstable explanations.

This thesis aims to address these challenges and improve the reliability and informativeness of SVA explanations. It makes three key contributions: 1) Proposing a novel error analysis framework that comprehensively examines the underlying sources of bias in existing SVA methods; 2) Introducing a series of refinement methods that significantly enhance the informativeness of SVA explanations, as well as their robustness against adversarial attacks; 3) Developing a standardization method for evaluating global model behaviors on imbalanced datasets, advancing the development of an explainable model monitoring system. Our experiments demonstrate that these methods substantially improve the ability of SVAs to uncover informative patterns in model behaviors, making them valuable tools for knowledge discovery, model debugging, and performance monitoring.

Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Concordia Institute for Information Systems Engineering
Item Type: Thesis (PhD)
Authors: Zhao, Ningsheng
Institution: Concordia University
Degree Name: Ph.D.
Program: Information and Systems Engineering
Date: 21 January 2025
Thesis Supervisor(s): Yu, Jia Yuan and Zeng, Yong
ID Code: 995021
Deposited By: Ningsheng Zhao
Deposited On: 17 Jun 2025 15:00
Last Modified: 17 Jun 2025 15:00
All items in Spectrum are protected by copyright, with all rights reserved. The use of items is governed by Spectrum's terms of access.


Research related to the current document (at the CORE website)