Amit Sharma
Microsoft
DISCOVER, CONNECT, GROW
Are you ready to get started with AI? Microsoft Reactor provides events, training, and community resources to help startups, entrepreneurs, and developers build their next venture on AI technology. Join us!
March 23, 2023 | 4:00 PM - 5:30 PM (UTC) Coordinated Universal Time
Topic: Data Science & Machine Learning
Language: English
The XAI course provides a comprehensive overview of explainable AI, covering both theory and practice, and exploring various use cases for explainability. Participants will learn not only how to generate explanations, but also how to evaluate and effectively communicate these explanations to diverse stakeholders.
The XAI course is run on a voluntary basis by DataNights and Microsoft organizers and is free of charge for participants. It is designed for data scientists who have at least two years of hands-on industry experience with machine learning and Python, as well as a basic background in deep learning. Some of the sessions will be held in person at the Microsoft Reactor in Tel Aviv, while others will be conducted virtually.
Course Leaders:
Bitya Neuhof, DataNights
Yasmin Bokobza, Microsoft
What is this session about?
How can we explain a machine learning model so that the explanation is truthful to the model and yet interpretable to people? This question is key to ML explanations research because explanation techniques face an inherent tradeoff between fidelity and interpretability: a high-fidelity explanation for an ML model tends to be complex and hard to interpret, while an interpretable explanation is often inconsistent with the ML model. In this talk, I will present counterfactual (CF) explanations that bridge this tradeoff. Rather than approximating an ML model or ranking features by their predictive importance, a CF explanation “interrogates” a model to find the changes that would flip the model’s decision and presents those examples to a user. Such examples offer a true reflection of how the model would change its prediction, thus helping decision subjects decide what they should do next to obtain a desired outcome and helping model designers debug their models. Using benchmark datasets on loan approval, I will compare counterfactual explanations to popular alternatives such as LIME and SHAP. I will also present a case study on generating CF examples for image classifiers, which can be used to evaluate fairness and even improve the generalizability of a model.
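To make the idea concrete, below is a minimal sketch of a counterfactual search, not the speaker's implementation. It trains a scikit-learn classifier on synthetic "loan approval" data and greedily perturbs one feature at a time until the prediction flips. The feature names, synthetic data, step sizes, and the greedy strategy are illustrative assumptions; dedicated libraries (such as DiCE) provide more principled and diverse counterfactual generation.

```python
# Sketch: brute-force counterfactual search for a binary classifier.
# The loan features, model, and search grid are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "loan approval" data: [income (k$), credit_score, debt_ratio]
X = rng.normal(loc=[60, 650, 0.4], scale=[20, 80, 0.15], size=(500, 3))
y = (0.03 * X[:, 0] + 0.01 * X[:, 1] - 6 * X[:, 2] > 6.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def find_counterfactual(x, model, step_sizes, max_steps=50):
    """Greedily perturb one feature at a time until the prediction flips.

    No plausibility or actionability constraints are applied in this sketch.
    """
    original = model.predict([x])[0]
    cf = x.copy()
    for _ in range(max_steps):
        best = None
        for i, step in enumerate(step_sizes):
            for direction in (+1, -1):
                candidate = cf.copy()
                candidate[i] += direction * step
                # Prefer the change that most increases the opposite-class probability.
                score = model.predict_proba([candidate])[0][1 - original]
                if best is None or score > best[0]:
                    best = (score, candidate)
        cf = best[1]
        if model.predict([cf])[0] != original:
            return cf
    return None

applicant = np.array([45.0, 600.0, 0.55])  # hypothetical rejected applicant
cf = find_counterfactual(applicant, model, step_sizes=[5.0, 20.0, 0.05])
print("original:      ", applicant, "->", model.predict([applicant])[0])
print("counterfactual:", cf, "->", model.predict([cf])[0] if cf is not None else "not found")
```

The returned counterfactual reads as an actionable statement ("if income were higher and the debt ratio lower, the loan would be approved"), which is the contrast with feature-attribution methods such as LIME and SHAP discussed in the talk.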
Speakers
For questions, contact us at reactor@microsoft.com