
LEARN, CONNECT, BUILD

Microsoft Reactor

Join Microsoft Reactor and engage with developers live

Ready to get started with AI and the latest technologies? Microsoft Reactor provides events, training, and community resources to help developers, entrepreneurs, and startups build with AI technology and more. Join us.

Explainable AI (XAI) Course: Local Explanations - Concept and Methods

March 13, 2023 | 5:00 PM - 6:30 PM Coordinated Universal Time (UTC)

  • Format: Live Stream

Topic: Data Science & Machine Learning

Language: Hebrew

The XAI course provides a comprehensive overview of explainable AI, covering both theory and practice, and exploring various use cases for explainability. Participants will learn not only how to generate explanations, but also how to evaluate and effectively communicate these explanations to diverse stakeholders.

The XAI course is run on a voluntary basis by DataNights and Microsoft organizers and is free of charge for participants. It is designed for data scientists with at least two years of hands-on industry experience with machine learning and Python and a basic background in deep learning. Some sessions will be held in person at the Microsoft Reactor in Tel Aviv, while others will be conducted virtually.

Course Leaders:
Bitya Neuhof, DataNights
Yasmin Bokobza, Microsoft

What is this session about?
Machine learning models can be analyzed at a high level using global explanations, such as linear model coefficients. However, global explanations have several limitations. In this talk, I will review the use cases where local explanations are needed and introduce two popular methods for generating them: LIME and SHAP. We will focus on SHAP: its theory, its model-agnostic and model-specific versions, and how to use and interpret SHAP visualizations.
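
As a minimal illustrative sketch (not course material), the Python snippet below shows the general workflow the session describes: train a model, build a SHAP explainer, and render a local explanation for one prediction. The dataset and model choices here (scikit-learn's diabetes toy dataset and a random-forest regressor) are assumptions made only for the example.

# Minimal sketch: a local SHAP explanation for a single prediction.
# Dataset and model are illustrative assumptions, not part of the session.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Train any model on a small built-in tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Model-specific explainer for tree ensembles (fast, exact for trees).
# A model-agnostic alternative would be shap.Explainer(model.predict, X_train).
explainer = shap.TreeExplainer(model)
shap_values = explainer(X_test)

# Local explanation: how each feature pushes this one prediction away from
# the model's expected (base) output value.
shap.plots.waterfall(shap_values[0])

The waterfall plot reads bottom-up: it starts at the base value (the average model output over the background data) and adds one bar per feature until it reaches the model's prediction for that single instance.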

Speakers
