An Empirical Investigation on the Readability of Manual and Generated Test Cases


Grano, Giovanni; Scalabrino, Simone; Oliveto, Rocco; Gall, Harald C. (2018). An Empirical Investigation on the Readability of Manual and Generated Test Cases. In: Proceedings of the 26th International Conference on Program Comprehension (ICPC), Gothenburg, 26–27 May 2018. Association for Computing Machinery.

Abstract

Software testing is one of the most crucial tasks in the typical development process. Developers are usually required to write unit test cases for the code they implement. Since this is a time-consuming task, in recent years many approaches and tools for automatic test case generation, such as EvoSuite, have been introduced. Nevertheless, developers have to maintain and evolve tests to keep up with changes in the source code; therefore, having readable test cases is important to ease this process. However, it is still not clear whether developers make an effort to write readable unit tests. Therefore, in this paper we conduct an exploratory study comparing the readability of manually written test cases with that of the classes they test. Moreover, we deepen this analysis by looking at the readability of automatically generated test cases. Our results suggest that developers tend to neglect the readability of test cases and that automatically generated test cases are generally even less readable than manually written ones.

Statistics

Citations

18 citations in Web of Science®
27 citations in Scopus®

Downloads

304 downloads since deposited on 17 Apr 2018
57 downloads in the past 12 months

Additional indexing

Item Type: Conference or Workshop Item (Paper), refereed, original work
Communities & Collections: 03 Faculty of Economics > Department of Informatics
Dewey Decimal Classification: 000 Computer science, knowledge & systems
Scopus Subject Areas: Physical Sciences > Software
Language: English
Event End Date: 27 May 2018
Deposited On: 17 Apr 2018 13:51
Last Modified: 11 Feb 2022 08:10
Publisher: Association for Computing Machinery
OA Status: Green
Publisher DOI: https://doi.org/10.1145/3196321.3196363
Other Identification Number: merlin-id:16308
Content: Accepted Version