Conference Paper


A unified evaluation framework for epistemic predictions

Abstract

Predictions of uncertainty-aware models are diverse, ranging from single point estimates (often averaged over prediction samples) to predictive distributions, to set-valued or credal-set representations. We propose a novel unified evaluation framework for uncertainty-aware classifiers, applicable to a wide range of model classes, which allows users to tailor the trade-off between accuracy and precision of predictions via a suitably designed performance metric. This makes it possible to select the most suitable model for a particular real-world application as a function of the desired trade-off. Our experiments on Bayesian, ensemble, evidential, deterministic, credal and belief-function classifiers, using the CIFAR-10, MNIST and CIFAR-100 datasets, show that the metric behaves as desired.
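The abstract does not spell out the metric's functional form. As a rough, hypothetical illustration only (not the paper's actual metric), the Python sketch below scores set-valued predictions by rewarding accuracy (the true label falling inside the predicted set) and penalizing imprecision (set size), with a user-chosen weight alpha governing the trade-off; the function name, signature and linear combination are all assumptions made for illustration.

    import numpy as np

    def tradeoff_score(pred_sets, labels, n_classes, alpha=0.5):
        # Hypothetical trade-off score in [0, 1]; higher is better.
        # pred_sets: one set of candidate class indices per example
        # labels:    true class indices
        # alpha:     weight on precision vs. accuracy (0 = accuracy only)
        scores = []
        for s, y in zip(pred_sets, labels):
            accuracy = 1.0 if y in s else 0.0
            # Precision: 1 for a singleton prediction, 0 for the vacuous
            # prediction containing all n_classes labels.
            precision = 1.0 - (len(s) - 1) / (n_classes - 1)
            scores.append((1 - alpha) * accuracy + alpha * precision)
        return float(np.mean(scores))

    # Example: three CIFAR-10-style set-valued predictions.
    sets = [{3}, {1, 7}, {0, 2, 5}]
    ys = [3, 7, 4]
    print(tradeoff_score(sets, ys, n_classes=10, alpha=0.3))

Under this toy score, a correct singleton beats a correct but wide set, and alpha lets a user decide how heavily imprecision is punished, mirroring the accuracy/precision trade-off described in the abstract.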



The full-text files of this resource are not currently available.

Authors

Manchingal, Shireen Kudukkil
Mubashar, Muhammad
Wang, Kaizheng
Cuzzolin, Fabio

Oxford Brookes departments

School of Engineering, Computing and Mathematics

Dates

Year of publication: [in press]
Date of RADAR deposit: 2025-01-30




Related resources

This RADAR resource is the Accepted Manuscript of the arXiv preprint "A Unified Evaluation Framework for Epistemic Predictions".

Details

  • Owner: Joseph Ripp
  • Collection: Outputs
  • Version: 1
  • Status: Live