Journal Article


Two datasets of defect reports labeled by a crowd of annotators of unknown reliability

Abstract

Classifying software defects according to any defined taxonomy is not straightforward. To support the automation of software defect classification, two sets of defect reports were collected from public issue-tracking systems in two different real-world domains. In the absence of a domain expert, the collected defects were categorized by a set of annotators of unknown reliability according to their impact, following IBM's Orthogonal Defect Classification (ODC) taxonomy. Both datasets are prepared for solving the defect classification problem with techniques from the learning from crowds paradigm (Hernández-González et al. [1]). Two versions of each dataset are publicly shared. In the first version, the raw data is provided: the text description of each defect together with the category assigned by each annotator. In the second version, the text of each defect has been transformed into a descriptive vector using text-mining techniques.
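
Since the raw version pairs each defect's free-text description with the category assigned by every annotator, a natural first step is to vectorize the text and aggregate (or model) the crowd labels. The sketch below illustrates this in Python under stated assumptions: the file name defects_raw.csv and the column names (description, annotator_1, ...) are hypothetical placeholders, and simple majority voting is used only as a baseline in place of a proper learning-from-crowds method.

# A minimal sketch (not the authors' pipeline): load the raw version,
# build TF-IDF features from the defect descriptions, and aggregate the
# crowd labels by majority vote as a simple baseline. The file name and
# column names ("description", "annotator_1", ...) are hypothetical.
from collections import Counter

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Assumed layout: one row per defect report, a free-text column and
# one column per annotator holding the assigned ODC impact category.
df = pd.read_csv("defects_raw.csv")
annotator_cols = [c for c in df.columns if c.startswith("annotator_")]

# Majority vote over the (possibly unreliable) annotators; ties go to the
# first label counted. Learning-from-crowds methods would instead model
# annotator reliability rather than collapsing the labels like this.
def majority_vote(row):
    votes = [v for v in row if pd.notna(v)]
    return Counter(votes).most_common(1)[0][0]

y = df[annotator_cols].apply(majority_vote, axis=1)

# TF-IDF vectorization, roughly analogous to the preprocessed version of
# the datasets, which ships the defect texts already as descriptive vectors.
X = TfidfVectorizer(max_features=2000, stop_words="english").fit_transform(
    df["description"]
)

# Simple baseline classifier trained on the aggregated labels.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("5-fold accuracy (majority-vote baseline):", scores.mean())

With the second, preprocessed version, the TF-IDF step would be skipped and the provided descriptive vectors loaded directly.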

Attached files

Authors

Hernández-González, Jerónimo
Rodriguez, Daniel
Inza, Iñaki
Harrison, Rachel
Lozano, Jose A.

Oxford Brookes departments

Faculty of Technology, Design and Environment\School of Engineering, Computing and Mathematics

Dates

Year of publication: 2018
Date of RADAR deposit: 2018-08-31


This work is licensed under a Creative Commons Attribution 4.0 International License


Related resources

This RADAR resource is the Version of Record of Two datasets of defect reports labeled by a crowd of annotators of unknown reliability

Details

  • Owner: Joseph Ripp
  • Collection: Outputs
  • Version: 1
  • Status: Live