Show simple item record

dc.contributor.advisor  Louis, Anand
dc.contributor.author  Gorantla, Sruthi
dc.date.accessioned  2025-04-28T04:47:24Z
dc.date.available  2025-04-28T04:47:24Z
dc.date.submitted  2025
dc.identifier.uri  https://etd.iisc.ac.in/handle/2005/6910
dc.description.abstract  The rapid integration of automated decision-making systems in critical domains such as resume screening, loan approval, content recommendation, and disaster containment has raised significant concerns regarding biases in these systems. This research investigates both theoretical and practical aspects of fairness in machine learning models, proposing novel fairness definitions and algorithmic frameworks that mitigate bias while maintaining performance.

Ranking systems shape decision-making across various fields but often introduce biases that disproportionately affect underrepresented groups. In applications like resume screening, ensuring fair representation only in expectation is insufficient, or even unlawful. We propose a novel randomized ranking algorithm that produces group-fair rankings by sampling from a probability distribution satisfying the natural fair-ranking axioms we define. We further show that our sampler enables in-processing for ex-post fairness in learning-to-rank systems. Experiments on real-world datasets, such as credit risk assessments and student admissions, demonstrate that our approach effectively balances fairness and accuracy.

Ensuring fairness requires addressing both group fairness (equitable treatment across demographic groups) and individual fairness (similar treatment for similar individuals). Given group fairness constraints in ranking, we first define underranking, a phenomenon in which meritorious individuals are systematically pushed down in rankings to satisfy the group fairness constraints. We prove a theoretical lower bound on underranking and propose an efficient post-processing algorithm with simultaneous group fairness and underranking guarantees comparable to the lower bound. We also give an efficient algorithm that samples rankings from a distribution over group-fair rankings that additionally satisfies probabilistic individual fairness. Empirical results show that our algorithm achieves both fairness notions while outperforming state-of-the-art baselines. Extending beyond ranking, we propose a novel algorithm for fairness in graph min-cut problems, with theoretical and experimental validation.

Beyond traditional fairness constraints, we explore error-based fairness, which ensures that different groups incur a fair distribution of error. Inspired by socially-fair clustering (Ghadiri et al., 2021; Abbasi et al., 2021), we develop approximation algorithms for socially-fair (k, z)-clustering (including k-median, k-means, and socially-fair linear subspace clustering) using strong coresets. Experimental evaluations validate their effectiveness in mitigating bias. In ranking, we introduce a cascaded-norm objective function for fair error measurement in settings where the ranker learns from noisy pairwise preferences. We establish theoretical lower bounds for both group-blind and group-aware fair ranking algorithms, and show that our group-blind algorithm nearly matches the sample-complexity lower bound, while our group-aware approach achieves better sample complexity in real-world experiments.  en_US
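The abstract builds on the socially-fair clustering objective of Ghadiri et al. and Abbasi et al., which replaces the usual average clustering cost with the worst per-group cost. As a minimal illustrative sketch of that standard objective for the k-means case (the function name and NumPy implementation below are ours, not taken from the thesis, and this is the objective being approximated, not the thesis's coreset-based algorithm):

```python
import numpy as np

def socially_fair_kmeans_cost(X, groups, centers):
    """Socially-fair k-means objective: the maximum, over demographic
    groups, of that group's average squared distance to its nearest
    center. Illustrative sketch only.

    X:       (n, d) array of points
    groups:  (n,) array of group labels
    centers: (k, d) array of cluster centers
    """
    # Squared distance from each point to its nearest center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(axis=1)
    # Average cost within each group; the objective is the worst group's cost.
    return max(d2[groups == g].mean() for g in np.unique(groups))
```

A center set that minimizes the plain k-means average can still leave one group with a large per-group cost; minimizing this max-over-groups objective is what makes the clustering "socially fair."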
dc.description.sponsorship  Google PhD Fellowship  en_US
dc.language.iso  en_US  en_US
dc.relation.ispartofseries  ;ET00920
dc.rights  I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.  en_US
dc.subject  fairness  en_US
dc.subject  ranking  en_US
dc.subject  algorithms  en_US
dc.subject  ranking algorithm  en_US
dc.subject  fair ranking  en_US
dc.subject.classification  Research Subject Categories::TECHNOLOGY::Information technology::Computer science::Computer science  en_US
dc.title  Mitigating Bias via Algorithms with Fairness Guarantees  en_US
dc.type  Thesis  en_US
dc.degree.name  PhD  en_US
dc.degree.level  Doctoral  en_US
dc.degree.grantor  Indian Institute of Science  en_US
dc.degree.discipline  Engineering  en_US

