
An Optimized Deep Machine Learning and Micro-Services Architecture based Proactive Elastic Cloud Framework


Daradkeh, Tariq ORCID: https://orcid.org/0000-0002-1294-7117 (2021) An Optimized Deep Machine Learning and Micro-Services Architecture based Proactive Elastic Cloud Framework. PhD thesis, Concordia University.

Text (application/pdf)
Daradkeh_PhD_F2021.pdf - Accepted Version
9MB

Abstract

To achieve elasticity in a cloud environment, a holistic solution must be considered that measures the performance of all running applications and resources, including the cloud management system itself. Cloud resources and applications continuously change in capacity and behavior, which implies dynamic change in the cloud management system's architecture and characteristics. The new era of application modeling decouples an application's components into standalone, cooperating modules following the micro-service pattern architecture. This design gives an application the agility to adapt quickly to changing requirements by customizing its operational modules to match new tasks. The proposed elastic framework is achieved as a sequence of tasks. First, cloud resources are monitored and workload changes are tracked. Second, unlabeled workload sets are categorized by a custom K-Means clustering method. Third, workload demands and datacenter configurations are predicted, classified, and labeled using deep machine learning techniques. Fourth, resources are scaled and scheduled based on workload characteristics and scaling dimension conditions. Fifth, a micro-service pattern based elastic framework is implemented for dynamic resource management and operation.

In the first task, the monitoring system must provide the cloud manager with the information needed to describe the cloud's dynamic state by reading cloud-generated logs and sending them to the cloud manager. Log updates should be accurate, instantaneous, and delivered with minimum delay. Data sources range from low- to high-level cloud infrastructure resources, or they can be generated from workload demands. Logs are used to discover the cloud system status, which is the input for future resource orchestration actions by the cloud manager. A Point Estimator (PE) log tracker is proposed that dynamically adapts to the type of workload, providing the cloud manager with accurate, fresh log values using a minimum number of data transactions.
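The trade-off this task describes — fresh values at the cloud manager with few data transactions — can be sketched as a deviation-gated reporter: a reading is forwarded only when it drifts past a tolerance from the last reported value. The tolerance rule below is an illustrative assumption, not the thesis's exact PE estimator.

```python
class PointEstimatorTracker:
    """Sketch of a PE-style log tracker: report a reading to the cloud
    manager only when it deviates significantly from the last reported one."""

    def __init__(self, tolerance=0.10):
        self.tolerance = tolerance   # relative drift that triggers an update (assumed rule)
        self.last_reported = None    # value the cloud manager currently holds
        self.transactions = 0        # log messages actually sent

    def observe(self, value):
        """Return the value the manager sees; send an update only on real drift."""
        if self.last_reported is None or \
           abs(value - self.last_reported) > self.tolerance * max(abs(self.last_reported), 1e-9):
            self.last_reported = value
            self.transactions += 1   # one transaction to the cloud manager
        return self.last_reported

# e.g. CPU-utilization samples: only the jumps at 120 and 90 are transmitted
tracker = PointEstimatorTracker(tolerance=0.10)
reported = [tracker.observe(r) for r in [100, 101, 102, 120, 121, 90, 91]]
```

Seven samples here cost only three transactions, while every value the manager holds stays within the 10% tolerance of the true reading.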
In the second task, dynamic K-Means clustering using a kernel density estimator is proposed to analyze and characterize both workloads and datacenter configurations. This method enhances K-Means clustering by automatically determining the optimum number of classes and finding the mean centroids of the clusters. In addition, it improves the accuracy and time complexity of the standard K-Means clustering model by best correlating the clustering attributes using statistical correlation methods.
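The core idea — let a kernel density estimate pick both k and the initial centroids, then refine with standard K-Means — can be illustrated in one dimension. The grid size, seed, and synthetic bimodal workload below are assumptions for the sketch, not the thesis's parameters.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_kmeans_1d(data, grid_size=200, iters=20):
    """KDE-seeded K-Means sketch: density peaks set the number of classes
    and the initial centroids; Lloyd iterations then refine the means."""
    grid = np.linspace(data.min(), data.max(), grid_size)
    density = gaussian_kde(data)(grid)
    # local maxima of the density curve become the initial centroids
    peaks = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
    centroids = grid[1:-1][peaks]
    for _ in range(iters):
        labels = np.argmin(np.abs(data[:, None] - centroids[None, :]), axis=1)
        centroids = np.array([data[labels == k].mean() for k in np.unique(labels)])
    return centroids, labels

# two synthetic workload regimes; KDE should discover k = 2 on its own
rng = np.random.default_rng(0)
workload = np.concatenate([rng.normal(10, 1, 300), rng.normal(40, 2, 300)])
centroids, labels = kde_kmeans_1d(workload)
```

No number of clusters is passed in: the peak count of the density curve supplies it, which is the enhancement over standard K-Means that this task claims.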

In the third task, cloud workload prediction is a critical task for elastic scaling, because the cloud manager decides which configuration sequence to consider for resource provisioning. Workload demands are predicted to optimize the datacenter configuration, so that increasing or decreasing datacenter resources yields an accurate and efficient configuration. Three deep machine learning methods (namely NN, CNN, and LSTM) are used and compared with an analytical approach to model workload and datacenter actions. The analytical model is used as a predictor to evaluate and test the optimization solution set and to find the best configuration and scaling actions before applying them to the real datacenter. Deep machine learning together with the analytical approach is used to find the best predicted workload demand values and to evaluate the scaling and resource capacity that must be provisioned. Deep machine learning is also used to find the optimal configuration and to solve for the elasticity scaling boundary values. Matching the demand guarantees Service Level Agreement (SLA) conditions and Quality of Service (QoS) performance.
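The common framing behind the NN/CNN/LSTM predictors — slide a window over the demand history and learn to map the last w observations to the next value — can be shown with a much smaller stand-in: a linear autoregressive model fitted by least squares. The window length and the synthetic daily-cycle workload are illustrative assumptions.

```python
import numpy as np

def fit_ar_predictor(series, w=4):
    """Least-squares fit of next-step demand from the previous w samples
    (a linear stand-in for the NN/CNN/LSTM windowed predictors)."""
    X = np.array([series[i:i + w] for i in range(len(series) - w)])
    y = series[w:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(series, coef):
    """One-step-ahead forecast from the most recent window."""
    return float(series[-len(coef):] @ coef)

t = np.arange(200, dtype=float)
demand = 50 + 10 * np.sin(2 * np.pi * t / 24)   # workload with a 24-step daily cycle
coef = fit_ar_predictor(demand, w=4)
forecast = predict_next(demand, coef)
```

A periodic demand like this is captured exactly by a short linear recurrence, so the forecast lands on the true next value; the deep models in the thesis play the same role for workloads that are not linearly predictable.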

In the fourth task, resource scaling and scheduling for cloud elasticity involve timely provisioning and de-provisioning of computing resources and adjusting resource sizes to meet dynamic workload demand. This requires fast and accurate resource scaling methods that match workload demands at minimum cost (e.g., pay-as-you-go). Two dynamically changing parameter sets must be defined in an elastic model: the workload resource demand classes and the datacenter resource reconfiguration classes. These parameters are not labeled for the cloud management system while datacenter logs are being captured, so a deep machine learning method is used to label the datacenter configurations.
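The proactive scaling decision this task describes can be sketched as a rule that compares predicted demand against provisioned capacity and picks a scale-out, scale-in, or hold action. The utilization bands and per-instance capacity are assumptions for illustration, not the learned class labels from the thesis.

```python
import math

def scaling_action(predicted_demand, current_instances,
                   capacity_per_instance=100, upper=0.8, lower=0.3):
    """Return (action, target_instances): scale out before predicted demand
    exceeds the upper utilization band, scale in when it falls below the
    lower band to cut pay-as-you-go cost, otherwise hold."""
    utilization = predicted_demand / (current_instances * capacity_per_instance)
    if utilization > upper:        # provision ahead of the demand spike
        target = math.ceil(predicted_demand / (upper * capacity_per_instance))
        return "scale_out", target
    if utilization < lower:        # de-provision idle capacity
        target = max(1, math.ceil(predicted_demand / (upper * capacity_per_instance)))
        return "scale_in", target
    return "hold", current_instances

# e.g. 5 instances of 100 units each, predicted demand 460 units -> 92% utilization
action, target = scaling_action(460, 5)
```

Sizing the target from the predicted demand rather than the current one is what makes the policy proactive: the sixth instance is requested before the 460-unit load arrives.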
In the fifth task, a micro-service pattern architecture with an open standard API is used to integrate all elastic cloud framework components. A full-stack micro-service based elastic cloud management system is implemented, considering the elastic scaling and management requirements of all resources. The model focuses on elastic scaling performance by analyzing the cloud micro-service management modules from several aspects: interactions, end-to-end delay, and communication. It also optimizes the decoupling of system components and the orchestration scheduling for elastic scaling.
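How the decoupled modules (monitor, predictor, scaler) cooperate through a uniform API can be sketched with an in-process message bus standing in for the real open-standard endpoints; each service knows only message names, never another module's internals. All topic names and handler logic below are illustrative assumptions.

```python
class Bus:
    """Toy request/response bus standing in for the framework's open API."""

    def __init__(self):
        self.handlers = {}

    def register(self, topic, handler):
        self.handlers[topic] = handler        # one micro-service owns each endpoint

    def call(self, topic, payload):
        return self.handlers[topic](payload)  # synchronous request/response

bus = Bus()
bus.register("monitor/logs", lambda _: [70, 85, 90])                 # fresh utilization logs
bus.register("predictor/next", lambda logs: sum(logs) / len(logs))   # toy demand forecast
bus.register("scaler/decide", lambda d: "scale_out" if d > 80 else "hold")

# orchestration pipeline: monitor -> predictor -> scaler, all through the bus
logs = bus.call("monitor/logs", None)
decision = bus.call("scaler/decide", bus.call("predictor/next", logs))
```

Because every interaction goes through named endpoints, any module can be replaced or scaled independently, which is the decoupling property the elastic framework optimizes.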

Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Electrical and Computer Engineering
Item Type: Thesis (PhD)
Authors: Daradkeh, Tariq
Institution: Concordia University
Degree Name: Ph.D.
Program: Electrical and Computer Engineering
Date: 17 December 2021
Thesis Supervisor(s): Agarwal, Anjali
ID Code: 990076
Deposited By: Tariq Daradkeh
Deposited On: 16 Jun 2022 15:22
Last Modified: 31 Dec 2023 01:00
All items in Spectrum are protected by copyright, with all rights reserved. The use of items is governed by Spectrum's terms of access.
