Predictions of uncertainty-aware models are diverse, ranging from single point estimates (often averaged over prediction samples) to predictive distributions, to set-valued or credal-set representations. We propose a novel unified evaluation framework for uncertainty-aware classifiers, applicable to a wide range of model classes, which allows users to tailor the trade-off between accuracy and precision of predictions via a suitably designed performance metric. This makes it possible to select the most suitable model for a given real-world application as a function of the desired trade-off. Our experiments with Bayesian, ensemble, evidential, deterministic, credal and belief-function classifiers on the CIFAR-10, MNIST and CIFAR-100 datasets show that the metric behaves as desired.
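The abstract does not spell out the metric itself. As a minimal sketch of how such an accuracy-precision trade-off could be scored for a set-valued prediction, assuming a hypothetical scoring function `set_valued_score` and a linear trade-off weight `alpha` (both illustrative choices, not taken from the paper):

```python
def set_valued_score(pred_set, true_label, n_classes, alpha=0.5):
    """Illustrative accuracy/precision trade-off for a set-valued prediction.

    alpha weights accuracy (is the true label in the set?) against
    precision (smaller prediction sets score higher). NOTE: this linear
    combination is an assumption for illustration, not the metric
    defined in the paper.
    """
    accuracy = 1.0 if true_label in pred_set else 0.0
    # Precision: 1.0 for a singleton prediction, 0.0 for the vacuous
    # prediction containing all n_classes labels.
    precision = (n_classes - len(pred_set)) / (n_classes - 1)
    return alpha * accuracy + (1.0 - alpha) * precision


# Example: a 10-class problem, prediction set {2, 5, 7}, true label 5.
# alpha = 0.7 favours correctness over the specificity of the set.
print(set_valued_score({2, 5, 7}, true_label=5, n_classes=10, alpha=0.7))
```

Under this sketch, distributional or credal predictions would first have to be reduced to a set (e.g., by thresholding), which is again an assumption rather than the paper's procedure.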
Manchingal, Shireen Kudukkil; Mubashar, Muhammad; Wang, Kaizheng; Cuzzolin, Fabio
School of Engineering, Computing and Mathematics
Year of publication: [in press]
Date of RADAR deposit: 2025-01-30