
Weighted Federated Averaging in Verifying Sensor Reading in Smart Homes to Mitigate Malicious Attacks

Mansouri, Armin (2025) Weighted Federated Averaging in Verifying Sensor Reading in Smart Homes to Mitigate Malicious Attacks. Masters thesis, Concordia University.

Text (application/pdf)
Mansouri_MASc_F2025.pdf - Accepted Version
Restricted to Repository staff only until 2 April 2027.
Available under License Spectrum Terms of Access.
6MB

Abstract

Federated learning (FL) for event verification in smart homes improves the accuracy of verifying events such as a door opening. However, if the global model is poisoned, event verification degrades for all participants. Existing solutions either (a) do not consider federated learning for event verification, or (b) rely on keeping a reference model on the server and comparing each received model against it, magnifying the influence of the benign global model, or clustering the received local models on the server to generate a separate global model per cluster. These approaches have several limitations, including privacy issues, the risk of unbalancing the local models, and the server-side cost of generating multiple global models even for clusters containing compromised clients. In this thesis, we propose Weighted Federated Averaging (WFedAvg) to address these limitations and defend against malicious clients in federated learning for event verification. By boosting the contribution of benign local models and lowering the influence of compromised clients on the server before aggregation, the effect of the benign clients is amplified. The approach has two variations. In the first, the feedback received alongside the local models is compared across clients on the server, and cosine similarity measures how similar they are, which determines each client's contribution. The second variation extends existing work: a sample reference model is stored on the server, and benign clients that deviate from it are not flagged as malicious; instead, we cluster the models, measure how far each cluster is from the cluster containing the reference model, and control the contribution level of clients based on their similarity within that cluster.
In this way, the minimum loss value and a higher accuracy are reached much faster, as shown in the experiment section. Additionally, while investigating IoT chipsets for this purpose, we discovered a vulnerability in one of the most widely used IoT chipsets that can serve as an entry point to compromise a client and mount a federated learning attack on sensor verification; this vulnerability is also discussed.
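The first variation described above (weighting each client's contribution by the cosine similarity of its update to the other clients' updates, before aggregation) can be sketched in a minimal form. This is an illustrative reconstruction, not the thesis's actual implementation: the function names (`cosine`, `wfedavg`) and the specific weighting rule (mean pairwise similarity, clipped at zero and normalized) are assumptions for the sketch.

```python
import math

def cosine(u, v):
    # Cosine similarity between two flattened model vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def wfedavg(client_models):
    # Weight each client by its mean cosine similarity to the other
    # clients' models, so outliers (potentially poisoned updates)
    # contribute little or nothing to the aggregate.
    scores = []
    for i, m in enumerate(client_models):
        sims = [cosine(m, o) for j, o in enumerate(client_models) if j != i]
        scores.append(max(sum(sims) / len(sims), 0.0))  # clip negatives to 0
    total = sum(scores) or 1.0
    weights = [s / total for s in scores]
    dim = len(client_models[0])
    # Weighted average, coordinate by coordinate.
    return [sum(w * m[k] for w, m in zip(weights, client_models))
            for k in range(dim)]
```

With three near-identical benign updates and one poisoned outlier, e.g. `wfedavg([[1.0, 1.0], [0.9, 1.1], [1.1, 0.9], [-5.0, 5.0]])`, the outlier's mean similarity clips to zero, so the aggregate stays close to the benign mean.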

Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Concordia Institute for Information Systems Engineering
Item Type: Thesis (Masters)
Authors: Mansouri, Armin
Institution: Concordia University
Degree Name: M.A.
Program: Information and Systems Engineering
Date: 1 April 2025
Thesis Supervisor(s): Lucia, Walter
ID Code: 995372
Deposited By: Armin Mansouri
Deposited On: 17 Jun 2025 17:19
Last Modified: 17 Jun 2025 17:19
All items in Spectrum are protected by copyright, with all rights reserved. The use of items is governed by Spectrum's terms of access.


Research related to the current document (at the CORE website)