Amit Sharma
Microsoft
Learn, Connect, Build
Ready to get started with AI and the latest technologies? Microsoft Reactor provides events, training, and community resources to help developers, entrepreneurs, and startups build on AI and more. Join us!
23 March, 2023 | 4:00 PM - 5:30 PM (UTC) Coordinated Universal Time
Topic: Data Science and Machine Learning
Language: English
The XAI course provides a comprehensive overview of explainable AI, covering both theory and practice, and exploring various use cases for explainability. Participants will learn not only how to generate explanations, but also how to evaluate and effectively communicate these explanations to diverse stakeholders.
The XAI course is run on a voluntary basis by DataNights and Microsoft organizers and is free of charge for participants. It is designed for data scientists with at least two years of hands-on industry experience in machine learning and Python, as well as a basic background in deep learning. Some of the sessions will be held in person at the Microsoft Reactor in Tel Aviv, while others will be conducted virtually.
Course Leaders:
Bitya Neuhof, DataNights
Yasmin Bokobza, Microsoft
What is this session about?
How can we explain a machine learning model such that the explanation is truthful to the model and yet interpretable to people? This question is key to ML explanations research because explanation techniques face an inherent tradeoff between fidelity and interpretability: a high-fidelity explanation for an ML model tends to be complex and hard to interpret, while an interpretable explanation is often inconsistent with the ML model. In this talk, I will present counterfactual (CF) explanations that bridge this tradeoff. Rather than approximating an ML model or ranking features by their predictive importance, a CF explanation “interrogates” a model to find the required changes that would flip the model’s decision, and presents those examples to a user. Such examples offer a true reflection of how the model would change its prediction, thus helping decision subjects decide what they should do next to obtain a desired outcome and helping model designers debug their model. Using benchmark datasets on loan approval, I will compare counterfactual explanations to popular alternatives such as LIME and SHAP. I will also present a case study on generating CF examples for image classifiers that can be used to evaluate fairness and even improve the generalizability of a model.
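As a rough sketch of the counterfactual workflow the abstract describes, the example below uses the open-source dice-ml package (DiCE) to "interrogate" a classifier on a synthetic loan-approval dataset. The data, feature names, and model choice here are illustrative assumptions rather than materials from the session, and dice-ml call signatures may differ slightly across library versions.

```python
# Minimal sketch: counterfactual (CF) explanations for a loan-approval
# classifier with dice-ml. The synthetic data and feature names are
# illustrative assumptions, not from the session.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

import dice_ml  # pip install dice-ml

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n).round(),
    "credit_score": rng.integers(300, 850, n),
    "loan_amount": rng.normal(20_000, 8_000, n).round(),
})
# Toy labeling rule: approve when income and credit score are high
# relative to the requested amount.
df["approved"] = (
    (df["income"] / df["loan_amount"] > 2.0) & (df["credit_score"] > 600)
).astype(int)

X, y = df.drop(columns="approved"), df["approved"]
clf = RandomForestClassifier(random_state=0).fit(X, y)

# Wrap the data and model for DiCE, then ask for examples that would
# flip the model's decision for one rejected applicant.
data = dice_ml.Data(
    dataframe=df,
    continuous_features=["income", "credit_score", "loan_amount"],
    outcome_name="approved",
)
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model, method="random")

rejected = X[clf.predict(X) == 0].head(1)  # one rejected applicant
cfs = explainer.generate_counterfactuals(
    rejected, total_CFs=3, desired_class="opposite"
)
cfs.visualize_as_dataframe(show_only_changes=True)
```

Each returned row is a perturbed version of the rejected applicant that the model now approves, which is exactly the "what should I change to get a desired outcome" view described above. In practice one would also constrain the search (for example, holding immutable features fixed) so that the suggested changes are actionable.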
Speaker: Amit Sharma, Microsoft