Individual Fairness with Group Awareness Under Uncertainty

Abstract

As machine learning (ML) extends its influence across diverse societal domains, the need to ensure fairness in these systems has grown markedly, spurring substantial advances in fairness research. However, most existing fairness studies optimize exclusively for either individual fairness or group fairness, neglecting how enforcing one may affect the other. In addition, most operate under the assumption of full access to class labels, an assumption that often fails in real-world applications due to censorship. This paper studies individual fairness under censorship while remaining aware of group fairness. We argue that this setup better reflects the fairness challenges that arise in real-world scenarios. Through experiments on four real-world datasets with socially sensitive concerns and censorship, we demonstrate that our proposed approach not only outperforms state-of-the-art methods in terms of fairness but also maintains a competitive level of predictive performance.
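
To make the setting concrete, the sketch below (not the paper's method; all names and data are hypothetical) illustrates the three ingredients the abstract combines on toy data: an event indicator marking records whose class label is censored, individual fairness scored as consistency between similar individuals, and group awareness measured as a mean-score gap across a sensitive attribute.

    # Minimal illustrative sketch; all names, data, and metrics are hypothetical
    # stand-ins, not the paper's actual method or datasets.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy censored dataset: features X, observed time T, and event indicator E.
    # E = 0 means the true class label is censored (unobserved), as in survival data.
    n = 200
    X = rng.normal(size=(n, 3))
    T = rng.exponential(scale=2.0, size=n)   # observed time-to-event or censoring
    E = rng.integers(0, 2, size=n)           # 1 = event observed, 0 = censored
    group = rng.integers(0, 2, size=n)       # sensitive attribute for group awareness

    # Hypothetical risk scores from any survival or ranking model.
    scores = X @ np.array([0.5, -0.3, 0.2]) + rng.normal(scale=0.1, size=n)

    def individual_consistency(X, scores, k=5):
        """Individual fairness as consistency: similar individuals should receive
        similar scores. Returns the mean absolute gap between each score and the
        mean score of the k nearest neighbours; lower is more individually fair."""
        dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(dists, np.inf)          # exclude self from neighbours
        nn = np.argsort(dists, axis=1)[:, :k]    # indices of k nearest neighbours
        return np.mean(np.abs(scores - scores[nn].mean(axis=1)))

    def group_gap(scores, group):
        """A simple group-fairness proxy: the absolute difference in mean score
        between the two sensitive groups; closer to zero is more group-fair."""
        return abs(scores[group == 0].mean() - scores[group == 1].mean())

    print(f"individual consistency: {individual_consistency(X, scores):.4f}")
    print(f"group score gap:        {group_gap(scores, group):.4f}")
    print(f"censoring rate:         {(E == 0).mean():.2%}")

Lower values of both metrics indicate fairer scores; the difficulty the paper addresses is that, under censorship, such quantities must be controlled without access to the unobserved labels of censored records.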

Publication
Joint European Conference on Machine Learning and Knowledge Discovery in Databases 2025