The 3rd session of the "AI Trust, Bias and Explainability" series by IBM AI.
Date: August 17, 2020, 10am PST
Title: Understanding and Removing Unfair Bias in ML
Abstract:
Welcome to the "AI Trust, Bias and Explainability" learning series by IBM AI. In collaboration with the IBM team, we host a series of practical introductory sessions on AI trust, bias and explainability.
Extensive evidence has shown that AI can embed human and societal biases and deploy them at scale, and many algorithms are now being reexamined due to unlawful bias. So how do you remove bias and discrimination from the machine learning pipeline?
In this webinar you'll learn debiasing techniques that can be implemented using the open source toolkit AI Fairness 360.
AI Fairness 360 (AIF360) is an extensible, open source toolkit for measuring, understanding, and removing AI bias. AIF360 is the first solution that brings together the most widely used bias metrics, bias mitigation algorithms, and metric explainers from the top AI fairness researchers across industry & academia.
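To give a flavor of the group fairness metrics AIF360 measures, here is a minimal plain-Python sketch of two of the most common ones, disparate impact and statistical parity difference. This is an illustration of the underlying arithmetic only, not the AIF360 API; the function names and toy data are my own:

```python
def selection_rate(labels, groups, group):
    """Fraction of favorable outcomes (label == 1) within one group."""
    outcomes = [l for l, g in zip(labels, groups) if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(labels, groups):
    """Ratio of unprivileged (group 0) to privileged (group 1) selection rates.
    A value near 1.0 indicates parity; below ~0.8 is a common red flag."""
    return selection_rate(labels, groups, 0) / selection_rate(labels, groups, 1)

def statistical_parity_difference(labels, groups):
    """Unprivileged selection rate minus privileged; 0.0 indicates parity."""
    return selection_rate(labels, groups, 0) - selection_rate(labels, groups, 1)

# Toy data: 1 = favorable label; group 1 = privileged, 0 = unprivileged.
labels = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]

print(disparate_impact(labels, groups))               # 1/3, well below 0.8
print(statistical_parity_difference(labels, groups))  # -0.5
```

In AIF360 itself, the same quantities are exposed on metric classes over a dataset object, and the toolkit's mitigation algorithms (pre-, in-, and post-processing) aim to move these values back toward parity.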
Speaker: Upkar Lidder
Upkar Lidder is a Full Stack Developer and Data Wrangler with a decade of development experience in a variety of roles. He can be seen speaking at various conferences and participating in local tech groups and meetups.
learn.xnextcon.com/event/even...
Resources:
github.com/lidderupk/aifairne...
aif360.mybluemix.net/#
aif360.mybluemix.net/resource...
github.com/Trusted-AI/AIF360/...
aif360.slack.com
aif360.mybluemix.net/community
All sessions of the series:
Jul 27th - AI Security: Privacy-Preserving Machine Learning by IBM AI. Session 1
Aug 10th - Explainable AI Workflows using Python. Session 2
Aug 17th - Understanding and Removing Unfair Bias in ML. Session 3
Aug 24th - Adversarial Robustness 360 Toolbox For ML. Session 4
Aug 31st - Workshop: Explainable AI Workflows. Session 5