Research Engineer, CNH Industrial, Malden, Massachusetts, United States
Did you know that when asked to generate an image of a doctor, AI is more likely to show a man? This bias isn’t accidental — it stems from imbalanced training data, historical stereotypes, and a lack of fairness-aware evaluation. As AI shapes real-world decisions, tackling these biases is crucial for fairness across industries.

In this session, we’ll dive into the technical roots of gender bias in AI, from dataset limitations to model training flaws. We’ll explore real-world cases where biased AI led to unintended consequences and discuss industry efforts like fairness benchmarks, de-biasing algorithms, and inclusive dataset curation.

Women in AI have a powerful role to play in driving change. This session will offer actionable steps to advocate for better dataset representation, refine evaluation metrics, and influence policies that promote fairness in AI development. Let’s build AI that works for everyone!
Learning Objectives:
Analyze sources of gender bias in AI models and datasets.
Evaluate fairness benchmarks and de-biasing techniques in machine learning.
Recommend strategies for improving dataset representation and model evaluation metrics.
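To ground the discussion of fairness benchmarks and evaluation metrics, here is a minimal sketch of one widely used group-fairness measure, the demographic parity difference. The function names and the prediction data are illustrative assumptions, not taken from any specific benchmark or library.

```python
# Minimal sketch of a group-fairness check (demographic parity difference).
# Assumes binary model predictions (1 = favorable outcome) and two groups
# split by a binary sensitive attribute; the data below is hypothetical.

def selection_rate(preds):
    """Fraction of positive (1) predictions in a group."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds_a, preds_b):
    """Absolute gap in selection rates between two groups.

    0.0 means the groups receive favorable outcomes at the same rate;
    larger values indicate greater disparity.
    """
    return abs(selection_rate(preds_a) - selection_rate(preds_b))

# Hypothetical predictions for two gender groups.
preds_men = [1, 1, 1, 0, 1, 1, 0, 1]    # selection rate 0.75
preds_women = [1, 0, 0, 1, 0, 0, 1, 0]  # selection rate 0.375

gap = demographic_parity_difference(preds_men, preds_women)
print(f"Demographic parity difference: {gap:.3f}")  # prints 0.375
```

A gap of this size would flag the model for closer inspection; in practice, toolkits such as Fairlearn and AIF360 provide this and related metrics (equalized odds, equal opportunity) over real datasets.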