Conference Paper


Credal Learning Theory

Abstract

Statistical learning theory is the foundation of machine learning, providing theoretical bounds on the risk of models learned from a (single) training set, assumed to be drawn from an unknown probability distribution. In actual deployment, however, the data distribution may (and often does) vary, causing domain adaptation/generalization issues. In this paper we lay the foundations for a 'credal' theory of learning, using convex sets of probabilities (credal sets) to model the variability in the data-generating distribution. Such credal sets, we argue, may be inferred from a finite sample of training sets. Bounds are derived for the case of finite hypothesis spaces (both with and without the realizability assumption), as well as for infinite model spaces; these bounds directly generalize classical results.
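As context for the abstract's claim that the bounds "directly generalize classical results", the sketch below records the standard agnostic PAC bound for a finite hypothesis space and one hedged form a credal analogue might take, replacing the single unknown distribution with an upper risk over a credal set. Since the fulltext is not available here, the credal formulation and the symbols R-bar and P (the credal set) are assumptions for illustration, not the paper's stated theorems.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Illustrative sketch only. The classical bound below is standard;
% the credal form is an assumed analogue, not a theorem from the paper.

\section*{Classical baseline: finite $\mathcal{H}$, single distribution $P$}
For $m$ i.i.d.\ samples from an unknown $P$ and a loss $\ell \in [0,1]$,
Hoeffding's inequality plus a union bound over the finite hypothesis
space $\mathcal{H}$ give, with probability at least $1-\delta$,
\[
  R_P(h) \;\le\; \widehat{R}_m(h)
  + \sqrt{\frac{\ln|\mathcal{H}| + \ln(2/\delta)}{2m}}
  \qquad \text{for all } h \in \mathcal{H}.
\]

\section*{Illustrative credal analogue (assumed form)}
Replace the single $P$ with a credal set $\mathcal{P}$ (a convex set of
distributions, e.g.\ inferred from a finite sample of training sets)
and control the \emph{upper risk}
\[
  \overline{R}(h) \;=\; \sup_{P \in \mathcal{P}}\;
  \mathbb{E}_{(X,Y)\sim P}\bigl[\ell(h(X), Y)\bigr].
\]
A credal generalization would then bound $\overline{R}(h)$ uniformly
over $\mathcal{H}$, recovering the classical bound when
$\mathcal{P} = \{P\}$ is a singleton.

\end{document}

The upper risk is the natural worst-case quantity over a credal set: it collapses to the ordinary risk when the credal set is a singleton, which is exactly what a direct generalization of the classical results would require.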



The fulltext files of this resource are not currently available.

Authors

Caprio, Michele
Sultana, Maryam
Elia, Eleni G.
Cuzzolin, Fabio

Oxford Brookes departments

School of Engineering, Computing and Mathematics

Dates

Year of publication: Not yet published
Date of RADAR deposit: 2024-10-18



All rights reserved.

Details

  • Owner: Daniel Croft
  • Collection: Outputs
  • Version: 1
  • Status: Live