Exploring Fairness and Causality in Online Decision-Making
Abstract
Online decision-making under uncertainty is a fundamental aspect of numerous real-world problems across various domains, including online resource allocation, crowd-sourcing, and online advertising. Multi-Armed Bandits (MABs) and Markov Decision Processes (MDPs) are two popular frameworks for modeling decision-making under uncertainty. The applications these frameworks capture often require additional considerations and adaptations to address real-world challenges effectively. In this thesis, we focus on two such factors: integrating fairness considerations into the model, and leveraging causal relations among the model's variables to make better decisions. The thesis comprises three contributions.

First, we explore fairness in temporally extended decision-making scenarios, specifically those modeled as MDPs. We propose a fairness notion that requires each state's long-term visitation frequency to exceed a predefined fraction, a natural extension of quota-based fairness from the MAB literature. We propose an algorithm with a dual guarantee: it simultaneously satisfies the fairness constraint and maximizes the total reward.

Second, we shift our focus to a variant of the MAB model that accounts for the dynamic nature of the environment. In this model, an arm's reward increases with each pull, a versatile abstraction for real-world scenarios, particularly in education and employment, where the allocation of opportunities shapes a community's capabilities. We present an algorithm that maximizes the total reward while ensuring that the arms, which may correspond to communities, attain their fullest potential.

Third, we study the problem of learning good interventions in a causal graph by modeling it as an MAB problem. The resulting Causal Multi-Armed Bandit (Causal MAB) problem captures dependencies between arms through the causal graph. We study the problem of identifying the best intervention in this setting and provide algorithms for three variants of the Causal MAB problem.
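To fix ideas, the fairness notion in the first contribution can be stated as a long-run visitation constraint. The display below is a minimal sketch under assumed notation not fixed in the abstract: $S$ is the state set, $s_t$ the state visited at step $t$, and $\eta_s \in [0,1]$ the predefined quota for state $s$.

\begin{equation*}
  % Quota-based fairness (sketch): every state is visited at least an
  % $\eta_s$ fraction of the time in the long run.
  \liminf_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \mathbf{1}\{s_t = s\} \;\ge\; \eta_s
  \qquad \text{for all } s \in S.
\end{equation*}

The algorithm's dual guarantee can then be read as maximizing the total reward over policies that satisfy this constraint.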
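Similarly, the objective in the third contribution admits a standard causal-bandit formulation; the display below is a sketch under assumed notation: $\mathcal{A}$ is the set of candidate interventions on the causal graph and $Y$ is the reward variable.

\begin{equation*}
  % Best-intervention identification (sketch): find the intervention that
  % maximizes the expected reward under the do-operator.
  a^{*} \in \arg\max_{a \in \mathcal{A}} \; \mathbb{E}\!\left[ Y \mid \mathrm{do}(a) \right].
\end{equation*}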