
Software Batch Testing to Reduce Build Test Executions


Beheshtian, Mohammad Javad (2020) Software Batch Testing to Reduce Build Test Executions. Masters thesis, Concordia University.

Text (application/pdf): BeheshtianKhabbaz_MSc_F2020.pdf - Accepted Version, 1MB
Available under License Spectrum Terms of Access.

Abstract

Testing is expensive, and batching tests has the potential to reduce test costs. The continuous integration strategy of testing each commit or change individually helps to quickly identify faults but leads to the maximum number of test executions. Large companies that have a large number of commits, e.g., Google and Facebook, or that have expensive test infrastructure, e.g., Ericsson, must batch changes together to reduce the total number of test runs. For example, if eight builds are batched together and there is no failure, then we have tested eight builds with one execution, saving seven executions. However, when a failure occurs it is not immediately clear which build is the cause of the failure. A bisection is run to isolate the failing build, i.e., the culprit build. In our eight-build example, a failure will require an additional six executions, resulting in a saving of only one execution.
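The arithmetic above generalizes: testing a batch costs one execution, and a failure triggers a bisection in which both halves are tested recursively until the culprit is isolated. The following minimal sketch (an illustration only, not the thesis's released scripts) counts executions for a batch, where True marks a build whose tests would fail.

    # Minimal sketch, not the thesis's scripts: count test executions for
    # batch testing with bisection on failure.
    def bisect_executions(batch):
        executions = 1                     # one run for the whole batch
        if any(batch) and len(batch) > 1:  # failure: bisect both halves
            mid = len(batch) // 2
            executions += bisect_executions(batch[:mid])
            executions += bisect_executions(batch[mid:])
        return executions

    print(bisect_executions([False] * 8))           # 1 execution, 7 saved
    print(bisect_executions([True] + [False] * 7))  # 7 executions, 1 saved

With eight passing builds the batch costs a single execution; with one culprit the bisection adds six more executions, matching the eight-build example above.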

The goal of this work is to improve the efficiency of batch testing. We evaluate six approaches. The first is the baseline approach that tests each build individually. The second is the existing bisection approach. The third uses a batch size of four, which we show mathematically reduces the number of executions without requiring bisection. The fourth combines the two prior techniques by introducing a stopping condition to the bisection. The final two approaches use models of build change risk to isolate risky changes and test them in smaller batches.
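To see why a small fixed batch size can save executions even without bisection, consider one simple model (an assumption for illustration, not the exact derivation in the thesis): each build fails independently with probability p, a passing batch costs one execution, and a failing batch falls back to testing every build in it individually. A short sketch of the expected cost per build under this model:

    # Simple illustrative model, not the thesis's derivation: expected
    # executions per build when a failing batch of size n is re-tested
    # one build at a time.
    def expected_executions_per_build(n, p):
        batch_passes = (1 - p) ** n               # whole batch is failure-free
        expected_total = 1 + (1 - batch_passes) * n
        return expected_total / n

    for n in (1, 2, 4, 8, 16):
        print(n, round(expected_executions_per_build(n, p=0.05), 3))

Under this simple model, with a failure rate of roughly 5%, a batch size of four is close to the cost minimum, which is consistent with the recommendation below to batch at least four changes per build.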

We evaluate the approaches on nine open source projects that use Travis CI. Compared to the TestAll baseline, on average, the approaches reduce the number of build test executions across projects by 46%, 48%, 50%, 44%, and 49% for BatchBisect, Batch4, BatchStop4, RiskTopN, and RiskBatch, respectively. The greatest reduction is achieved by BatchStop4 at 50%. However, the simpler Batch4 approach does not require bisection and still achieves a reduction of 48%. We therefore recommend that all CI pipelines use a batch size of at least four. We release our scripts and data for replication.

Regardless of the approach, on average, we save around half of the build test executions compared to testing each change individually. We release the BatchBuilder tool that automatically batches submitted changes on GitHub for testing on Travis CI. Since the tool reports individual results for each pull request or pushed commit, the batching happens in the background and the development process is unchanged.
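One way such a batching step can be implemented, sketched here as a hypothetical illustration rather than the released BatchBuilder code (the branch names, remote, and base branch are assumptions), is to merge the pending change branches into a throwaway batch branch and push it so that CI runs their tests in a single build:

    # Hypothetical sketch, not the BatchBuilder app: merge pending change
    # branches into one batch branch so CI tests them in a single build.
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    def create_batch_branch(base, branches, batch_branch):
        run("git", "checkout", base)
        run("git", "checkout", "-b", batch_branch)
        for branch in branches:
            # A conflicting change could be dropped from the batch here
            # rather than blocking the whole batch.
            run("git", "merge", "--no-ff", branch)
        run("git", "push", "origin", batch_branch)  # triggers one CI build

    create_batch_branch("main", ["change-1", "change-2"], "batch/example")

If the batched build passes, every change in the batch can be reported as passing; if it fails, the culprit can be isolated with one of the batching strategies above and only that change reported as failing, so each pull request or pushed commit still receives an individual result.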

Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Computer Science and Software Engineering
Item Type: Thesis (Masters)
Authors: Beheshtian, Mohammad Javad
Institution: Concordia University
Degree Name: M. Sc.
Program: Computer Science
Date: 1 September 2020
Thesis Supervisor(s): Rigby, Peter
Keywords: Software Testing, Batch Testing, Continuous Integration and Deployment, Bisection, Pool Testing, Reducing Testing Cost, Risk Modelling
ID Code: 987450
Deposited By: Seyed Mohammad Javad Beheshtian Khabbaz
Deposited On: 25 Nov 2020 16:11
Last Modified: 25 Nov 2020 16:11
