
Learn, Connect, Build

Microsoft Reactor

Join Microsoft Reactor and engage with developers live

Ready to get started with AI and the latest technologies? Microsoft Reactor offers events, training, and community resources to help developers, entrepreneurs, and startups build on AI technology and more. Join us!



Explainable AI (XAI) Course: Counterfactual Explanations - Explaining and Debugging

23 March 2023 | 4:00 PM - 5:30 PM (UTC) Coordinated Universal Time

  • Format: Livestream

Topic: Data Science & Machine Learning

Language: English

The XAI course provides a comprehensive overview of explainable AI, covering both theory and practice, and exploring various use cases for explainability. Participants will learn not only how to generate explanations, but also how to evaluate and effectively communicate these explanations to diverse stakeholders.

The XAI course is run on a voluntary basis by DataNights and Microsoft organizers and is free of charge for participants. It is designed for data scientists with at least two years of hands-on industry experience in machine learning and Python, plus a basic background in deep learning. Some sessions will be held in person at the Microsoft Reactor in Tel Aviv, while others will be conducted virtually.

Course Leaders:
Bitya Neuhof, DataNights
Yasmin Bokobza, Microsoft

What is this session about?
How can we explain a machine learning model such that the explanation is truthful to the model and yet interpretable to people? This question is key to ML explanations research because explanation techniques face an inherent tradeoff between fidelity and interpretability: a high-fidelity explanation for an ML model tends to be complex and hard to interpret, while an interpretable explanation is often inconsistent with the ML model. In this talk, I will present counterfactual (CF) explanations that bridge this tradeoff. Rather than approximating an ML model or ranking features by their predictive importance, a CF explanation "interrogates" a model to find the changes that would flip the model's decision and presents those examples to a user. Such examples offer a true reflection of how the model would change its prediction, thus helping decision subjects decide what to do next to obtain a desired outcome and helping model designers debug their models. Using benchmark datasets on loan approval, I will compare counterfactual explanations to popular alternatives like LIME and SHAP. I will also present a case study on generating CF examples for image classifiers that can be used for evaluating fairness and even improving the generalizability of a model.
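To make the idea concrete, here is a minimal sketch of a counterfactual search: greedily perturbing a rejected applicant's features until a simple classifier's decision flips. This is an illustration only, not the method presented in the talk; the toy dataset, the feature names, and the `counterfactual` helper are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "loan approval" data: two features (say, income and debt, standardized).
# The label approves when income outweighs debt.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

def counterfactual(model, x, target=1, step=0.05, max_iter=500):
    """Greedy search: nudge x in the direction that raises the model's
    score for `target` until the predicted class flips, then return
    the counterfactual point (or None if no flip is found)."""
    x_cf = x.copy()
    # For a linear model, moving each feature along the sign of its
    # coefficient monotonically increases the logit.
    direction = np.sign(model.coef_[0]) * (1 if target == 1 else -1)
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf
        x_cf = x_cf + step * direction
    return None

x = np.array([-1.0, 1.0])        # an applicant the model rejects
cf = counterfactual(model, x)    # smallest greedy change that flips it
delta = cf - x                   # "increase income, reduce debt" recourse
```

The difference `delta` is the actionable recourse a CF explanation presents to the decision subject; libraries such as DiCE generalize this idea with diversity and proximity constraints.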

Speakers

Related Events

The following events may also interest you. Be sure to visit our Reactor homepage to see all available events.
