With the rise of GPUs and Large Language Models (LLMs), Machine Learning (ML) and Artificial Intelligence (AI) have become household names. Python, known for its simplicity and powerful libraries, has been at the forefront of this AI revolution, offering robust tools for model building, training, and fine-tuning.
However, as AI systems become more embedded in society, one challenge that has gained increasing attention is "Fairness in AI". Fairness is a complex and widely discussed topic with no one-size-fits-all definition. For example, imagine a baker dividing a pie: should the larger slice go to the person who worked harder or the one who is hungrier? Similarly, in AI, what counts as fair depends on context, but broadly it aims to ensure equitable treatment of all individuals and groups affected by a model's predictions.
ML models learn from data. Unfortunately, more often than not, that data carries historical inequities, imbalanced labels, or skewed representations that can lead to unfair outcomes. Addressing these biases requires tools that go beyond precision, accuracy, and optimization.
In this talk, we'll explore these challenges and walk through a demonstration of how Python can help address them using the fairlearn library. Through real-world examples, we will learn to measure fairness using metrics such as demographic parity, equalized odds, and equal opportunity. We will also learn to mitigate these biases using fairlearn's GridSearch and ThresholdOptimizer.
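To give a flavour of the workflow the demo covers, here is a minimal sketch of measuring and mitigating bias with fairlearn. The synthetic dataset, the binary sensitive feature, and the choice of logistic regression are illustrative assumptions, not material from the talk itself.

```python
# A rough sketch: train a classifier, measure fairness gaps, then post-process
# predictions with ThresholdOptimizer. Data here is synthetic and deliberately biased.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from fairlearn.metrics import (MetricFrame, demographic_parity_difference,
                               equalized_odds_difference)
from fairlearn.postprocessing import ThresholdOptimizer

rng = np.random.default_rng(0)

# Synthetic data: two features, a binary sensitive attribute, and a label
# whose base rate differs across groups (a deliberately skewed setup).
n = 2000
sensitive = rng.integers(0, 2, size=n)
X = np.column_stack([rng.normal(size=n), rng.normal(size=n) + 0.5 * sensitive])
y = (X[:, 0] + X[:, 1] + 0.8 * sensitive + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0
)

# 1. Train an ordinary classifier.
clf = LogisticRegression().fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

# 2. Measure fairness: per-group accuracy plus gap metrics.
mf = MetricFrame(metrics=accuracy_score, y_true=y_te, y_pred=y_pred,
                 sensitive_features=s_te)
print("Accuracy by group:\n", mf.by_group)
print("Demographic parity difference:",
      demographic_parity_difference(y_te, y_pred, sensitive_features=s_te))
print("Equalized odds difference:",
      equalized_odds_difference(y_te, y_pred, sensitive_features=s_te))

# 3. Mitigate: post-process predictions to satisfy demographic parity.
mitigator = ThresholdOptimizer(estimator=clf,
                               constraints="demographic_parity",
                               prefit=True,
                               predict_method="predict_proba")
mitigator.fit(X_tr, y_tr, sensitive_features=s_tr)
y_fair = mitigator.predict(X_te, sensitive_features=s_te)
print("Parity gap after mitigation:",
      demographic_parity_difference(y_te, y_fair, sensitive_features=s_te))
```

The talk also covers GridSearch from fairlearn.reductions, which retrains the underlying model under a fairness constraint rather than adjusting thresholds after the fact; the sketch above shows only the post-processing route.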
Prerequisites: Whether you are a Python newbie or a seasoned Pythonista, with any level of experience in Machine Learning, you are welcome to join and explore a piece of the pie!