
Smart Glove and Hand Gesture-Based Control Interface for Multi-Rotor Aerial Vehicles


Haratiannejadi, Kianoush ORCID: https://orcid.org/0000-0001-9925-7890 (2020) Smart Glove and Hand Gesture-Based Control Interface for Multi-Rotor Aerial Vehicles. Masters thesis, Concordia University.

Text (application/pdf)
Haratiannejadi_MASc_S2021.pdf - Accepted Version
Available under License Spectrum Terms of Access.
17MB

Abstract

This thesis introduces two types of adaptable human–computer interaction methods in single-subject and multi-subject unsupervised environments.

In a single-subject environment, a single-shot multi-box detector (SSD) detects the hand's region of interest (RoI) in the input frame.
The RoI detected by the SSD model is fed to a convolutional neural network (CNN) that classifies the right-hand gesture.
In a crowded environment, a region-based convolutional neural network (R-CNN) detects the subjects in the frame and the RoIs of their faces. These RoIs are then passed to a facial recognition module that identifies the main subject in the frame.
After the main subject is found, the R-CNN model feeds the main subject's right-hand RoI to the CNN to classify the right-hand gesture.
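The main-subject selection step can be pictured as an embedding-distance match between detected faces and an enrolled reference. The following is a minimal sketch only; the similarity measure, function names, and threshold are illustrative assumptions, not details from the thesis:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def find_main_subject(face_embeddings, reference_embedding, threshold=0.8):
    """Return the index of the detected face that best matches the
    enrolled main subject, or None if no face is similar enough."""
    best_idx, best_sim = None, threshold
    for idx, emb in enumerate(face_embeddings):
        sim = cosine_similarity(emb, reference_embedding)
        if sim >= best_sim:
            best_idx, best_sim = idx, sim
    return best_idx
```

Once `find_main_subject` returns an index, the corresponding subject's right-hand RoI would be the one forwarded to the gesture CNN.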
In both single-subject and multi-subject environments, a fixed set of right-hand gestures maps to specific vehicle commands (take off, land, hover, etc.).
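A fixed gesture-to-command mapping of this kind could be sketched as a simple lookup table; the gesture labels and command names below are illustrative placeholders, not the thesis's actual vocabulary:

```python
# Hypothetical mapping from classified right-hand gestures to vehicle commands.
GESTURE_COMMANDS = {
    "open_palm": "hover",
    "fist": "land",
    "thumb_up": "take_off",
    "point_left": "yaw_left",
    "point_right": "yaw_right",
}

def gesture_to_command(gesture_label):
    """Translate a CNN gesture label into a vehicle command.
    Unknown labels fall back to 'hover' as a safe default."""
    return GESTURE_COMMANDS.get(gesture_label, "hover")
```

Falling back to "hover" on an unrecognized label is one plausible fail-safe choice for an aerial vehicle, since it avoids executing an unintended maneuver.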

A smart glove on the left hand is used for more precise vehicular control.
A motion-processing unit (MPU) and a set of four flex sensors are used in the smart glove to produce discrete and continuous signals based on the bending value of each finger and the roll angle of the left hand.
The discrete signals generated by the flex sensors are fed to a support vector machine (SVM) that classifies the left-hand gesture, while the continuous signals from the flex sensors and the MPU module determine quantitative characteristics of the command, such as the vehicle's throttle and angle.
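The split between discrete and continuous glove signals could look roughly like the following sketch. The bend threshold, the linear-SVM decision form, and the roll-to-throttle mapping are all illustrative assumptions, not values or models taken from the thesis:

```python
def quantize_flex(readings, threshold=0.5):
    """Turn continuous flex-sensor readings (0.0 = straight, 1.0 = fully
    bent) into a discrete per-finger bend pattern for the SVM."""
    return tuple(1 if r >= threshold else 0 for r in readings)

def linear_svm_decision(features, weights, bias):
    """Decision function of a pre-trained linear SVM: sign of w.x + b.
    Weights and bias here stand in for a trained model."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score >= 0 else -1

def roll_to_throttle(roll_deg, max_roll=45.0):
    """Map the MPU roll angle of the left hand to a throttle
    fraction in [0, 1], clipping at +/- max_roll degrees."""
    clipped = max(-max_roll, min(max_roll, roll_deg))
    return (clipped + max_roll) / (2 * max_roll)
```

In this picture, `quantize_flex` produces the discrete gesture features, while `roll_to_throttle` shows how a continuous MPU reading could drive a vehicle parameter directly.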

Three simultaneous validation layers have been implemented, including (1) Human-Based Validation, where the main subject can reject the robot’s behavior by sending a command through the smart glove, (2) Classification Validation, where the main subject can retrain the classifier classes at will, and (3) System Validation, which is responsible for finding the error's source when the main subject performs a specific command through the smart glove.
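The three validation layers could be pictured as a chain of checks applied to each command; the function signature, check ordering, and confidence floor below are assumptions made for illustration:

```python
def validate_command(command, human_rejected, classifier_confidence,
                     system_ok, confidence_floor=0.7):
    """Run a command through three validation layers and report the
    first layer that blocks it, or 'accepted' if all pass."""
    # (1) Human-based validation: the main subject can veto via the glove.
    if human_rejected:
        return "rejected_by_human"
    # (2) Classification validation: a low-confidence gesture triggers
    #     retraining of the classifier classes.
    if classifier_confidence < confidence_floor:
        return "needs_retraining"
    # (3) System validation: on failure, locate the error's source.
    if not system_ok:
        return "system_error"
    return "accepted"
```

The thesis describes the three layers as simultaneous; the sequential ordering here is only a simplification to make the sketch deterministic.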

The proposed algorithm is applied to groups of one, two, three, and four subjects (including the main subject) to validate its behavior in various desirable and undesirable situations.
The proposed algorithm is implemented on an Nvidia Jetson AGX Xavier GPU.

Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Electrical and Computer Engineering
Item Type: Thesis (Masters)
Authors: Haratiannejadi, Kianoush
Institution: Concordia University
Degree Name: M.A.Sc.
Program: Electrical and Computer Engineering
Date: December 2020
Thesis Supervisor(s): Selmic, Rastko
ID Code: 987735
Deposited By: Kianoush Haratiannejadi
Deposited On: 23 Jun 2021 16:37
Last Modified: 23 Jun 2021 16:37
All items in Spectrum are protected by copyright, with all rights reserved. The use of items is governed by Spectrum's terms of access.


