Conference Paper


Metrics for measuring error extents of machine learning classifiers

Abstract

Metrics play a crucial role in evaluating the performance of machine learning (ML) models. Metrics for quantifying the extent of errors, in particular, have been intensively studied and widely used, but so far only for regression models. This paper focuses instead on classifier models. A new approach is proposed in which datamorphic exploratory testing is used to discover the boundary values between classes, and the distance of misclassified instances from that boundary is used to quantify the errors that the model makes. Empirical experiments and case studies are reported that validate and evaluate the proposed metrics.
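For illustration only, and not taken from the paper: the sketch below shows one plausible way a distance-to-boundary error measure of this general kind could be computed for a trained classifier, by bisecting the segment between a misclassified instance and a reference instance that the model classifies differently until the class boundary is bracketed, then taking the Euclidean distance from the instance to that boundary point. The dataset, the model, and the helper names boundary_point and error_extent are assumptions made for this sketch; they do not reproduce the paper's datamorphic exploratory testing strategies.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

def boundary_point(model, a, b, tol=1e-6):
    """Bisect the segment between a and b, which the model classifies
    differently, until the crossing of the class boundary is bracketed
    within tol. Returns the midpoint of the final bracket.
    (Illustrative stand-in for a boundary-finding exploratory strategy.)"""
    class_a = model.predict([a])[0]
    while np.linalg.norm(b - a) > tol:
        m = (a + b) / 2.0
        if model.predict([m])[0] == class_a:
            a = m          # boundary crossing lies between m and b
        else:
            b = m          # boundary crossing lies between a and m
    return (a + b) / 2.0

def error_extent(model, x, x_ref):
    """Distance from a misclassified instance x to the class boundary,
    located along the segment towards a reference point x_ref that the
    model assigns to a different class than x."""
    return np.linalg.norm(x - boundary_point(model, x, x_ref))

# Toy demonstration on a two-class dataset (assumed setup for this sketch).
X, y = make_moons(n_samples=500, noise=0.25, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=0).fit(X, y)

pred = model.predict(X)
for x, y_true, y_pred in zip(X, y, pred):
    if y_true != y_pred:                     # a misclassified instance
        # reference point: any instance the model predicts with the true label
        x_ref = X[pred == y_true][0]
        print(f"error extent of {x}: {error_extent(model, x, x_ref):.4f}")
        break
```

Under these assumptions, a larger value indicates that the misclassified instance lies further from the decision boundary, i.e. the model's error is more severe than a marginal, near-boundary mistake.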

Authors

Zhu, Hong
Bayley, Ian
Green, Mark

Oxford Brookes departments

School of Engineering, Computing and Mathematics

Dates

Year of publication: 2022
Date of RADAR deposit: 2022-07-01



“© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.”


Related resources

This RADAR resource is the Accepted Manuscript of "Metrics for measuring error extents of machine learning classifiers".

Details

  • Owner: Joseph Ripp
  • Collection: Outputs
  • Version: 1
  • Status: Live
  • Views (since Sept 2022): 421