
LEARN, CONNECT, BUILD

Microsoft Reactor

Join Microsoft Reactor and engage with developers live

Ready to get started with AI and the latest technologies? Microsoft Reactor provides events, training, and community resources to help developers, entrepreneurs, and startups build on AI technology and more. Join us!


Explainable AI (XAI) Course: Counterfactual Explanations - Explaining and Debugging

March 23, 2023 | 4:00 PM - 5:30 PM (UTC) Coordinated Universal Time

Format: Livestream

Topic: Data Science & Machine Learning

Language: English

The XAI course provides a comprehensive overview of explainable AI, covering both theory and practice, and exploring various use cases for explainability. Participants will learn not only how to generate explanations, but also how to evaluate and effectively communicate these explanations to diverse stakeholders.

The XAI course is run on a voluntary basis by DataNights and Microsoft organizers and is free of charge for participants. The course is designed for data scientists with at least two years of hands-on industry experience with machine learning and Python, and a basic background in deep learning. Some sessions will be held in person at the Microsoft Reactor in Tel Aviv, while others will be conducted virtually.

Course Leaders:
Bitya Neuhof, DataNights
Yasmin Bokobza, Microsoft

What is this session about?
How can we explain a machine learning model such that the explanation is truthful to the model and yet interpretable to people? This question is key to ML explanations research because explanation techniques face an inherent tradeoff between fidelity and interpretability: a high-fidelity explanation for an ML model tends to be complex and hard to interpret, while an interpretable explanation is often inconsistent with the ML model. In this talk, I will present counterfactual (CF) explanations that bridge this tradeoff. Rather than approximating an ML model or ranking features by their predictive importance, a CF explanation “interrogates” a model to find the changes that would flip the model’s decision and presents those examples to a user. Such examples offer a true reflection of how the model would change its prediction, helping decision subjects decide what to do next to obtain a desired outcome and helping model designers debug their model. Using benchmark datasets on loan approval, I will compare counterfactual explanations to popular alternatives such as LIME and SHAP. I will also present a case study on generating CF examples for image classifiers that can be used to evaluate fairness and even improve the generalizability of a model.
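The core move described above, searching for a minimal change to the input that flips the model's decision, is easy to see in code. The sketch below is illustrative only and is not the course material: it trains a toy logistic-regression "loan" model on synthetic data and runs a simple greedy coordinate search for a counterfactual. The feature semantics, step size, and search strategy are all assumptions for the sake of the example; dedicated CF libraries (such as DiCE) use more careful optimization, distance metrics, and feasibility constraints.

```python
# Minimal counterfactual-search sketch (illustrative, not the course's code).
# Data and features are synthetic stand-ins for a loan-approval task.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "loan" data: three standardized features, e.g. income, debt, tenure.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(scale=0.3, size=500) > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def counterfactual(x, model, step=0.05, max_iter=500):
    """Greedy coordinate search: repeatedly take the single-feature step
    that most increases the probability of the opposite class, until the
    model's prediction flips."""
    target = 1 - model.predict(x.reshape(1, -1))[0]  # opposite class
    cf = x.copy()
    for _ in range(max_iter):
        if model.predict(cf.reshape(1, -1))[0] == target:
            return cf  # prediction flipped: cf is a counterfactual
        candidates = []
        for j in range(len(cf)):
            for delta in (-step, step):
                trial = cf.copy()
                trial[j] += delta
                p = model.predict_proba(trial.reshape(1, -1))[0, target]
                candidates.append((p, trial))
        cf = max(candidates, key=lambda t: t[0])[1]  # best single move
    return None  # no counterfactual found within the search budget

x0 = X[y == 0][0]  # a "rejected" applicant
cf = counterfactual(x0, clf)
if cf is not None:
    print("original:     ", x0)
    print("counterfactual:", cf)
    print("required change:", cf - x0)  # the actionable delta
```

Note the contrast with LIME and SHAP mentioned in the abstract: those methods rank features by local importance, while the counterfactual returns a concrete delta the decision subject could act on, and which directly reflects the model's decision boundary.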

Speakers

Related events

The events below may also be of interest to you. Be sure to visit our Reactor homepage to see all available events.
