
Towards the Use of the Readily Available Tests from the Release Pipeline as Performance Tests. Are We There Yet?

Ding, Zishuo (2019) Towards the Use of the Readily Available Tests from the Release Pipeline as Performance Tests. Are We There Yet? Masters thesis, Concordia University.

Text (application/pdf)
Ding_MASc_F2019.pdf - Accepted Version
Available under License Spectrum Terms of Access.
401kB

Abstract

Performance is an important aspect of software quality. Performance issues exist widely in software systems, and fixing them is an essential step in the release cycle. Although performance testing is widely adopted in practice, it is still expensive and time-consuming. In particular, performance testing is usually conducted after the system is built, in a dedicated testing environment. These challenges make performance testing difficult to fit into the common DevOps process in software development. On the other hand, a large number of readily available tests are executed regularly within the release pipeline during software development. In this thesis, we perform an exploratory study to determine whether such readily available tests are capable of serving as performance tests. In particular, we examine whether the performance of these tests can demonstrate the performance improvements obtained from fixing real-life performance issues. We collect 127 performance issues from Hadoop and Cassandra and evaluate the performance of the readily available tests on the commits before and after each performance issue fix. We find that most of the improvements from performance issue fixes can be demonstrated using the readily available tests in the release pipeline; however, only a very small portion of the tests can be used to demonstrate the improvements. By manually examining the tests, we identify eight reasons why a test may fail to demonstrate a performance improvement even though it covers the source code changed by the issue fix. Finally, we build classifiers to determine the important metrics that influence whether a readily available test is able to demonstrate the performance improvement from an issue fix. We find that the test code itself and the source code covered by the test are important factors, while the factors related to the code changes in the performance issue fixes have low importance. Practitioners should focus on designing and improving the tests, instead of fine-tuning tests for individual performance issue fixes. Our findings can serve as a guideline for practitioners to reduce the effort spent on designing and leveraging tests that run in the release pipeline for performance assurance activities.
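The abstract describes two analysis steps: comparing the performance of a readily available test before and after a performance issue fix, and building classifiers over metrics to see which factors matter. The sketch below illustrates, in Python, one way such an analysis could look. All measurement values, metric names, and the specific choices of the Mann-Whitney U test and a random forest are assumptions made for illustration; they are not taken from the thesis itself.

```python
# Illustrative sketch only: timing values, metric names, and model choice are
# assumptions for illustration, not the thesis's actual data or methodology.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# --- Step 1: does a readily available test demonstrate an issue fix? ---
# Hypothetical execution times (seconds) of one test, repeated 30 times each
# on the commits before and after a performance issue fix.
before = rng.normal(loc=2.0, scale=0.1, size=30)
after = rng.normal(loc=1.6, scale=0.1, size=30)

# A non-parametric test guards against non-normal timing distributions.
stat, p_value = mannwhitneyu(before, after, alternative="greater")
improved = p_value < 0.05
print(f"p={p_value:.4f}, improvement demonstrated: {improved}")

# --- Step 2: which metrics influence whether a test can demonstrate a fix? ---
# Hypothetical metrics per (issue fix, test) pair: size of the test code,
# changed source lines covered by the test, and churn of the fix itself.
feature_names = ["test_loc", "covered_changed_loc", "fix_churn"]
X = rng.integers(1, 500, size=(200, len(feature_names)))
y = rng.integers(0, 2, size=200)  # 1 = test demonstrated the improvement

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for name, importance in zip(feature_names, clf.feature_importances_):
    print(f"{name}: {importance:.3f}")
```

With real data, the per-metric importances from a classifier like this would indicate which factors (e.g., properties of the test code versus properties of the fix) most influence whether a test can demonstrate an improvement, which is the comparison the abstract reports.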

Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Computer Science and Software Engineering
Item Type: Thesis (Masters)
Authors: Ding, Zishuo
Institution: Concordia University
Degree Name: M.A.Sc.
Program: Software Engineering
Date: 29 July 2019
Thesis Supervisor(s): Shang, Weiyi
ID Code: 985970
Deposited By: Zishuo Ding
Deposited On: 06 Feb 2020 02:40
Last Modified: 06 Feb 2020 02:40
All items in Spectrum are protected by copyright, with all rights reserved. The use of items is governed by Spectrum's terms of access.
