
EvaCRC: Evaluating Code Review Comments


Yang, Lanxin; Xu, Jinwei; Zhang, Yifan; Zhang, He; Bacchelli, Alberto (2023). EvaCRC: Evaluating Code Review Comments. In: ESEC/FSE '23: 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, San Francisco CA USA, 3 December 2023 - 9 December 2023. Association for Computing Machinery, 275-287.

Abstract

In code reviews, developers examine code changes authored by peers and provide feedback through comments. Despite the importance of these comments, no accepted approach currently exists for assessing their quality. Therefore, this study has two main objectives: (1) to devise a conceptual model for an explainable evaluation of review comment quality, and (2) to develop models for the automated evaluation of comments according to the conceptual model. To do so, we conduct mixed-method studies and propose a new approach: EvaCRC (Evaluating Code Review Comments). To achieve the first goal, we collect and synthesize quality attributes of review comments by triangulating data from both authoritative documentation on code review standards and academic literature. We then validate these attributes using real-world instances. Finally, we establish mappings between quality attributes and grades by consulting domain experts, thus defining our final explainable conceptual model. To achieve the second goal, EvaCRC leverages multi-label learning. To evaluate and refine EvaCRC, we conduct an industrial case study with a global ICT enterprise. The results indicate that EvaCRC can effectively evaluate review comments while offering reasons for the grades.
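The abstract mentions that EvaCRC leverages multi-label learning to assign quality attributes to review comments. A minimal sketch of that general technique is shown below, assuming a one-vs-rest text classifier; the attribute names, toy comments, and model choice here are illustrative assumptions and do not reflect the paper's actual taxonomy or models.

```python
# Illustrative multi-label classification of review comments.
# Hypothetical attribute columns: [actionable, identifies_defect].
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# Toy review comments with hypothetical quality-attribute labels.
comments = [
    "Please rename this variable to something more descriptive.",
    "This loop may overflow when n is large; consider a bounds check.",
    "Nice refactoring!",
    "Missing null check here could crash the service.",
]
labels = [
    [1, 0],  # actionable, no defect identified
    [1, 1],  # actionable, identifies a defect
    [0, 0],  # neither
    [1, 1],  # actionable, identifies a defect
]

# TF-IDF features feed one binary classifier per attribute.
model = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(comments, labels)

# Each prediction row holds one 0/1 entry per quality attribute.
pred = model.predict(["Add a test for the empty-input case."])
print(pred.shape)
```

In a multi-label setting each comment can carry several attributes at once, which is why one independent binary classifier per attribute (rather than a single multi-class classifier) is the natural baseline.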



Additional indexing

Item Type: Conference or Workshop Item (Paper), not refereed, original work
Communities & Collections: 03 Faculty of Economics > Department of Informatics
Dewey Decimal Classification: 000 Computer science, knowledge & systems
Scopus Subject Areas: Physical Sciences > Artificial Intelligence; Physical Sciences > Software
Scope: Discipline-based scholarship (basic research)
Language: English
Event End Date: 9 December 2023
Deposited On: 19 Feb 2024 13:29
Last Modified: 20 Feb 2024 21:01
Publisher: Association for Computing Machinery
ISBN: 979-8-4007-0327-0
OA Status: Closed
Publisher DOI: https://doi.org/10.1145/3611643.3616245
Related URLs: https://doi.org/10.5281/zenodo.8297481 (Research Data)