Fairness in AI is an interdisciplinary field of research and practice that aims to understand and mitigate the negative societal impacts of AI systems, with an emphasis on improving outcomes for historically underserved and marginalized communities.
In this tutorial, we will walk through the process of assessing and mitigating fairness-related harms in the context of the U.S. health care system. Specifically, we will consider a scenario involving patient health risk modeling that has demonstrated racial disparities (Obermeyer et al., 2019). This tutorial will consist of a mix of instructional content and hands-on demonstrations using Jupyter notebooks. Participants will use the Fairlearn library to assess an ML model for performance disparities across different racial groups and mitigate those disparities using a variety of algorithmic techniques. Participants will also learn how to explore, document, and communicate fairness issues, drawing on resources such as datasheets for datasets and model cards.
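To illustrate the assessment step the tutorial covers, the sketch below computes a metric per sensitive group and reports the largest gap; this is the pattern that Fairlearn's `MetricFrame` automates. The data, group labels, and the `accuracy_by_group` helper are hypothetical, chosen only to show the shape of a disaggregated evaluation.

```python
# Minimal sketch (hypothetical data) of a disaggregated performance check:
# compute a metric for each sensitive group, then report the largest gap.
# Fairlearn's MetricFrame provides this pattern out of the box.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, sensitive):
    """Return {group: accuracy} for each value of the sensitive feature."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, sensitive):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

# Toy labels and predictions; "A"/"B" stand in for racial groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

per_group = accuracy_by_group(y_true, y_pred, group)
disparity = max(per_group.values()) - min(per_group.values())
print(per_group, disparity)  # → {'A': 0.75, 'B': 0.5} 0.25
```

In the tutorial itself, this per-group comparison is done with Fairlearn on the health risk model's predictions rather than hand-rolled as above; the hand-rolled version is shown only to make the underlying computation concrete.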
Participants are expected to have intermediate Python skills and familiarity with scikit-learn. For maximal benefit, participants should have some experience training and evaluating supervised models in Python.