
How High Will It Be? Using Machine Learning Models to Predict Branch Coverage in Automated Testing


Grano, Giovanni; Titov, Timofey V; Panichella, Sebastiano; Gall, Harald (2018). How High Will It Be? Using Machine Learning Models to Predict Branch Coverage in Automated Testing. In: Workshop on Machine Learning Techniques for Software Quality Evaluation (MaLTeSQuE), Campobasso, Italy, 20 April 2018, 19-24.

Abstract

Software testing is a crucial component in modern continuous integration development environments.
Ideally, at every commit, all of the system's test cases should be executed and, moreover, new test cases should be generated for the new code.
This is especially true in a Continuous Test Generation (CTG) environment, where the automatic generation of test cases is integrated into the continuous integration pipeline.
Furthermore, developers want to achieve a minimum level of coverage for every build of their systems.
Since neither executing all the test cases nor generating new ones for all the classes at every commit is feasible, developers have to select which subset of classes to test.
In this context, knowing a priori the branch coverage that can be achieved with test data generation tools might give useful indications for answering this question.
In this paper, we take the first steps towards the definition of machine learning models to predict the branch coverage achieved by test data generation tools.
We conduct a preliminary study considering well-known code metrics as features.
Despite the simplicity of these features, our results show that using machine learning to predict branch coverage in automated testing is a viable option.
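
To make the idea concrete, the following is a minimal sketch of how such a prediction model could be trained, assuming a hypothetical dataset metrics.csv that contains well-known code metrics (e.g., lines of code, cyclomatic complexity, number of branches) per class together with the branch coverage achieved by a test data generation tool; the feature names, the file, and the choice of a random forest regressor are illustrative assumptions, not the authors' actual pipeline.

# Minimal sketch: predicting branch coverage from static code metrics.
# Assumes a hypothetical file 'metrics.csv' with one row per class,
# feature columns such as 'loc', 'cyclomatic_complexity', 'num_branches',
# and a 'branch_coverage' target measured with a test generation tool.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

data = pd.read_csv("metrics.csv")                     # hypothetical dataset
X = data.drop(columns=["branch_coverage"])            # code metrics as features
y = data["branch_coverage"]                           # coverage in [0, 1]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print(f"Mean absolute error: {mean_absolute_error(y_test, predictions):.3f}")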

Additional indexing

Item Type: Conference or Workshop Item (Paper), refereed, original work
Communities & Collections: 03 Faculty of Economics > Department of Informatics
Dewey Decimal Classification: 000 Computer science, knowledge & systems
Language: English
Event End Date: 20 April 2018
Deposited On: 14 Mar 2018 14:22
Last Modified: 19 Mar 2018 14:42
Publisher: IEEE Press
OA Status: Green
Other Identification Number: merlin-id:16235

Download

Download PDF 'How High Will It Be? Using Machine Learning Models to Predict Branch Coverage in Automated Testing'.
Content: Published Version
Filetype: PDF
Size: 221kB