
An Empirical Investigation on the Readability of Manual and Generated Test Cases


Grano, Giovanni; Scalabrino, Simone; Oliveto, Rocco; Gall, Harald (2018). An Empirical Investigation on the Readability of Manual and Generated Test Cases. In: Proceedings of the 26th International Conference on Program Comprehension, ICPC, Gothenburg, 26 May 2018 - 27 May 2018.

Abstract

Software testing is one of the most crucial tasks in the typical development process. Developers are usually required to write unit test cases for the code they implement. Since this is a time-consuming task, in recent years many approaches and tools for automatic test case generation - such as EvoSuite - have been introduced. Nevertheless, developers have to maintain and evolve tests to keep pace with changes in the source code; therefore, having readable test cases is important to ease this process. However, it is still not clear whether developers make an effort to write readable unit tests. Therefore, in this paper, we conduct an exploratory study comparing the readability of manually written test cases with that of the classes they test. Moreover, we deepen this analysis by looking at the readability of automatically generated test cases. Our results suggest that developers tend to neglect the readability of test cases and that automatically generated test cases are generally even less readable than manually written ones.

Statistics

Downloads

115 downloads since deposited on 17 Apr 2018
65 downloads since 12 months

Additional indexing

Item Type: Conference or Workshop Item (Paper), refereed, original work
Communities & Collections: 03 Faculty of Economics > Department of Informatics
Dewey Decimal Classification: 000 Computer science, knowledge & systems
Language: English
Event End Date: 27 May 2018
Deposited On: 17 Apr 2018 13:51
Last Modified: 24 Sep 2019 23:27
Publisher: Association for Computing Machinery
OA Status: Green
Other Identification Number: merlin-id:16308

Download

Download PDF: 'An Empirical Investigation on the Readability of Manual and Generated Test Cases'
Content: Accepted Version
Filetype: PDF
Size: 483kB