
Efficient Explainable AI And Adversarial Robustness using Formal Methods

Jemaa, Amira (2024) Efficient Explainable AI And Adversarial Robustness using Formal Methods. Masters thesis, Concordia University.

File: JEMAA_MASc_S2024.pdf - Accepted Version (PDF, 895 kB), available under License Spectrum Terms of Access.

Abstract

Artificial Intelligence (AI) systems are increasingly used in critical applications, but their lack of transparency often hinders trust and reliability. Explainable AI (XAI) addresses this by making machine learning models more understandable and interpretable. Current XAI approaches, however, lack consistency or theoretical guarantees. Formal methods, which are rigorous mathematical reasoning tools, could play a significant role in overcoming these limitations by providing sound and consistent explanations for model decisions. This thesis builds upon an existing formal XAI tool, XReason, which uses logical reasoning to generate explanations for individual predictions. The contributions of this work are threefold. First, the tool is extended to support a powerful tree-based model, the Light Gradient Boosting Machine (LightGBM), which offers improved scalability and performance for large datasets. Second, it introduces explanations at the class level, enabling the analysis of general patterns in model behavior across different prediction categories. This provides insights into the factors shaping model decisions for each class and helps identify biases or inconsistencies in predictions. Third, adversarial robustness is explored by integrating methods to generate and detect adversarial examples. These adversarial samples expose vulnerabilities in the model by identifying subtle input changes that lead to incorrect predictions. Detection mechanisms are then developed to identify such inputs, enhancing the model's reliability. Experiments on a variety of datasets from different domains demonstrate that the extended framework produces consistent and robust explanations, both at the individual prediction level and across broader trends. By integrating formal methods, this work provides a practical formal XAI framework applicable to areas where trust in AI systems is essential.
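To make the adversarial-robustness idea in the abstract concrete, here is a minimal toy sketch (not taken from the thesis, and far simpler than its SAT-based machinery): a hypothetical one-feature threshold classifier, a function that finds the smallest input change that flips its prediction, and a naive detector that flags inputs sitting suspiciously close to the decision boundary. All names and the threshold value are illustrative assumptions.

```python
# Toy illustration only: a one-feature threshold classifier, a minimal
# adversarial perturbation, and a simple boundary-proximity detector.

THRESHOLD = 0.5  # hypothetical decision boundary


def predict(x: float) -> int:
    """Classify: 1 if the feature exceeds the threshold, else 0."""
    return int(x > THRESHOLD)


def adversarial_example(x: float, margin: float = 1e-3) -> float:
    """Return a minimally perturbed copy of x that flips the prediction."""
    if predict(x) == 1:
        return THRESHOLD - margin  # nudge just below the boundary
    return THRESHOLD + margin      # nudge just above the boundary


def is_suspicious(x: float, eps: float = 0.01) -> bool:
    """Flag inputs that sit within eps of the decision boundary."""
    return abs(x - THRESHOLD) < eps


x = 0.7                       # benign input, predicted class 1
x_adv = adversarial_example(x)
assert predict(x) != predict(x_adv)  # a tiny change flips the label
assert is_suspicious(x_adv)          # detector flags the crafted input
assert not is_suspicious(x)          # original input passes
```

The point of the sketch is the shape of the problem, not the method: the thesis attacks real tree ensembles with formal reasoning, whereas here the "attack" is trivial precisely because the toy model's boundary is known in closed form.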

Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Electrical and Computer Engineering
Item Type: Thesis (Masters)
Authors: Jemaa, Amira
Institution: Concordia University
Degree Name: M.A.Sc.
Program: Electrical and Computer Engineering
Date: 27 November 2024
Thesis Supervisor(s): Tahar, Sofiène
ID Code: 994906
Deposited By: Amira Jemaa
Deposited On: 17 Jun 2025 17:16
Last Modified: 17 Jun 2025 17:16
All items in Spectrum are protected by copyright, with all rights reserved. The use of items is governed by Spectrum's terms of access.



Research related to the current document (at the CORE website)