Conference Paper


Epistemic artificial intelligence: using random sets to quantify uncertainty in machine learning

Abstract

Quantifying uncertainty is fundamental in machine learning tasks, including classification and detection in complex domains such as computer vision (CV), and text generation in large language models (LLMs). It is especially crucial when artificial intelligence (AI) is used in safety-critical applications, such as autonomous driving or medical diagnosis, where reliable decisions are essential to prevent serious consequences. The Epistemic AI project explores the use of random sets for quantifying epistemic uncertainty in AI. Random sets, a mathematical framework which generalises random variables to set-valued outcomes, enable a more flexible and expressive approach to uncertainty modelling. This work proposes ways to employ the random-set formalism to model classification uncertainty over both the target and parameter spaces of a machine learning model (e.g., a neural network), as well as detection uncertainty, within the context of computer vision. The applicability and effectiveness of random sets is also demonstrated in large language models, where they can be utilised to model uncertainty in natural language processing tasks. We show how, by leveraging random set theory, machine learning models can achieve enhanced robustness, interpretability, and reliability while effectively modelling uncertainty.
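To make the abstract's central idea concrete, here is a minimal sketch (not the paper's implementation) of a finite random set over a class space, represented as a belief function in the Dempster-Shafer sense: probability mass is assigned to *sets* of classes rather than to single classes, and the gap between belief and plausibility expresses epistemic uncertainty. The class names and mass values below are illustrative assumptions.

```python
# A finite random set over classes, encoded as a mass function that assigns
# probability mass to subsets (focal sets) of the class space.

def belief(mass, event):
    """Bel(A): total mass committed to subsets of A (lower probability)."""
    event = frozenset(event)
    return sum(m for focal, m in mass.items() if focal <= event)

def plausibility(mass, event):
    """Pl(A): total mass of focal sets intersecting A (upper probability)."""
    event = frozenset(event)
    return sum(m for focal, m in mass.items() if focal & event)

# Hypothetical classifier output over the class space {cat, dog, fox}.
mass = {
    frozenset({"cat"}): 0.5,                 # evidence pointing to "cat" alone
    frozenset({"cat", "dog"}): 0.3,          # evidence ambiguous between cat and dog
    frozenset({"cat", "dog", "fox"}): 0.2,   # residual ignorance (the whole space)
}

# The [Bel, Pl] interval brackets the unknown true probability of "cat";
# its width (here 1.0 - 0.5 = 0.5) quantifies epistemic uncertainty.
print(belief(mass, {"cat"}), plausibility(mass, {"cat"}))
```

A standard probabilistic classifier would collapse this interval to a single point estimate; the set-valued representation keeps the ambiguity explicit until a decision must be made.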




Authors

Manchingal, Shireen Kudukkil
Mubashar, Muhammad
Sultana, Maryam
Khan, Salman
Cuzzolin, Fabio

Oxford Brookes departments

School of Engineering, Computing and Mathematics

Dates

Year of publication: [in press]
Date of RADAR deposit: 2025-01-28


