Re-evaluating method-level bug prediction


Pascarella, Luca; Palomba, Fabio; Bacchelli, Alberto (2018). Re-evaluating method-level bug prediction. In: 25th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), Campobasso, 20 April 2018 - 23 April 2018, 592-601.

Abstract

Bug prediction aims to support developers in identifying the code artifacts most likely to be defective. Researchers have proposed prediction models that identify bug-prone methods and have provided promising evidence that it is possible to operate at this level of granularity. In particular, models using a mixture of product and process metrics as independent variables led to the best results.
In this study, we first replicate previous research on method-level bug prediction on different systems/timespans. Afterwards, we reflect on the evaluation strategy and propose a more realistic one. Key results of our study show that, when evaluated with the same strategy, the performance of the method-level bug prediction models is similar to what was previously reported, also for different systems/timespans. However, when evaluated with a more realistic strategy, all the models show a dramatic drop in performance, with results close to those of a random classifier. Our replication and negative results indicate that method-level bug prediction is still an open challenge.
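For illustration, the following minimal Python/scikit-learn sketch contrasts the two evaluation strategies the abstract refers to. It uses synthetic data and hypothetical metric names, and is not the authors' replication package: strategy 1 is standard 10-fold cross-validation, which freely mixes data from different points in time; strategy 2 is a more realistic release-based split that trains on an earlier release and tests on the next one.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)

    # Hypothetical per-method dataset: product metrics (e.g. size, complexity)
    # and process metrics (e.g. churn, number of authors) as independent
    # variables; the dependent variable marks methods that later needed a bug fix.
    n_methods = 2000
    X = rng.normal(size=(n_methods, 6))
    y = rng.integers(0, 2, size=n_methods)        # 1 = bug-prone method
    release = rng.integers(0, 2, size=n_methods)  # 0 = older release, 1 = newer

    model = RandomForestClassifier(n_estimators=100, random_state=42)

    # Strategy 1: 10-fold cross-validation over the whole dataset;
    # folds mix "past" and "future" methods.
    cv_auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean()

    # Strategy 2: a more realistic, time-aware split; train only on the
    # older release and predict bug-proneness of methods in the newer one.
    model.fit(X[release == 0], y[release == 0])
    pred = model.predict_proba(X[release == 1])[:, 1]
    split_auc = roc_auc_score(y[release == 1], pred)

    print(f"10-fold CV AUC-ROC:     {cv_auc:.2f}")
    print(f"release-split AUC-ROC:  {split_auc:.2f}")

On the synthetic data above both strategies score near 0.5 by construction; the paper's point is that on real systems only the time-aware evaluation reveals how close the models come to a random classifier.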

Downloads

89 downloads since deposited on 24 Aug 2018
73 downloads in the past 12 months

Additional indexing

Item Type: Conference or Workshop Item (Paper), refereed, original work
Communities & Collections: 03 Faculty of Economics > Department of Informatics
Dewey Decimal Classification: 000 Computer science, knowledge & systems
Language: English
Event End Date: 23 April 2018
Deposited On: 24 Aug 2018 13:17
Last Modified: 24 Oct 2019 07:08
Publisher: IEEE
Series Name: REproducibility Studies and NEgative Results
ISBN: 978-1-5386-4969-5
OA Status: Green
Publisher DOI: https://doi.org/10.1109/SANER.2018.8330264
Other Identification Number: merlin-id:16637
Project Information:
  • Funder: SNSF
  • Grant ID: PP00P2_170529
  • Project Title: Data-driven Contemporary Code Review

Download

Green Open Access

Download PDF: 'Re-evaluating method-level bug prediction'
Content: Published Version
Filetype: PDF
Size: 243kB