Quantifying uncertainty is fundamental in machine learning tasks such as classification and detection in complex domains like computer vision (CV), and text generation in large language models (LLMs). It is especially important when artificial intelligence (AI) is deployed in safety-critical applications, such as autonomous driving or medical diagnosis, where unreliable decisions can have serious consequences. The Epistemic AI project explores the use of random sets for quantifying epistemic uncertainty in AI. As a mathematical framework that generalizes the concept of a random variable to set-valued outcomes, random sets enable a more flexible and expressive approach to uncertainty modelling. This work proposes ways to employ the random-set formalism to model classification uncertainty over both the target and parameter spaces of a machine learning model (e.g., a neural network), as well as detection uncertainty, within the context of computer vision. The applicability and effectiveness of random sets is also demonstrated in large language models, where they can be used to model uncertainty in natural language processing tasks. We show how, by leveraging random set theory, machine learning models can achieve enhanced robustness, interpretability, and reliability while effectively modelling uncertainty.
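To illustrate the set-valued predictions the abstract refers to, the toy sketch below builds a mass function over subsets of a class frame and derives belief and plausibility bounds for a class. This is a minimal, self-contained illustration of random-set (belief-function) semantics in general, not the model proposed in this work; the class names and mass values are hypothetical.

```python
# Illustrative random-set prediction over a 3-class frame.
# Mass is assigned to subsets (focal sets) of classes rather than to
# single classes; mass on larger sets expresses epistemic ignorance.
# All names and numbers here are hypothetical, for illustration only.
masses = {
    frozenset({"cat"}): 0.5,
    frozenset({"cat", "dog"}): 0.3,
    frozenset({"cat", "dog", "bird"}): 0.2,  # mass on full frame = ignorance
}

def belief(A, masses):
    """Bel(A): total mass of focal sets fully contained in A (lower bound)."""
    return sum(m for S, m in masses.items() if S <= A)

def plausibility(A, masses):
    """Pl(A): total mass of focal sets intersecting A (upper bound)."""
    return sum(m for S, m in masses.items() if S & A)

A = frozenset({"cat"})
print(belief(A, masses), plausibility(A, masses))  # -> 0.5 1.0
```

The interval [Bel(A), Pl(A)] bounds the (unknown) probability of A; a wide gap signals high epistemic uncertainty, whereas a classical probabilistic classifier would collapse it to a single point.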
Manchingal, Shireen Kudukkil; Mubashar, Muhammad; Sultana, Maryam; Khan, Salman; Cuzzolin, Fabio
School of Engineering, Computing and Mathematics
Year of publication: [in press]
Date of RADAR deposit: 2025-01-28