
LEARN, CONNECT, BUILD

Microsoft Reactor

Join Microsoft Reactor and engage live with developers

Ready to get started with AI and the latest technologies? Microsoft Reactor offers events, training, and community resources to help developers, entrepreneurs, and startups build on AI technology and more. Come take a look.



Explainable AI (XAI) Course: Local Explanations - Concept and Methods

March 13, 2023 | 5:00 PM - 6:30 PM (UTC) Coordinated Universal Time

  • Format: Livestream

Topic: Data Science & Machine Learning

Language: Hebrew

The XAI course provides a comprehensive overview of explainable AI, covering both theory and practice, and exploring various use cases for explainability. Participants will learn not only how to generate explanations, but also how to evaluate and effectively communicate these explanations to diverse stakeholders.

The XAI course is run on a voluntary basis by DataNights and Microsoft organizers and is free of charge for participants. It is designed for data scientists with at least two years of hands-on industry experience in machine learning and Python, as well as a basic background in deep learning. Some of the sessions will be held in person at the Microsoft Reactor in Tel Aviv, while others will be conducted virtually.

Course Leaders:
Bitya Neuhof, DataNights
Yasmin Bokobza, Microsoft

What is this session about?
Machine learning models can be analyzed at a high level using global explanations, such as linear model coefficients. However, these global explanations have several limitations. In this talk, I will review the use cases where local explanations are needed and introduce two popular methods for generating them: LIME and SHAP. We will focus on SHAP: its theory, its model-agnostic and model-specific versions, and how to use and interpret SHAP visualizations.
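
For orientation, here is a minimal sketch (not course material) of producing local explanations with the shap Python package. The dataset, model, and sample sizes below are illustrative assumptions, not taken from the session.

# Minimal illustrative sketch: local explanations with SHAP for a tree-based model.
# Assumes the `shap` and `scikit-learn` packages are installed; the dataset and
# model are arbitrary choices for demonstration only.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an illustrative model on a small built-in dataset
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Model-specific explainer for tree ensembles (fast, exact for trees);
# a model-agnostic alternative would be shap.KernelExplainer
explainer = shap.TreeExplainer(model)
shap_values = explainer(X.iloc[:200])  # returns a shap.Explanation object

# Local explanation: per-feature contributions for a single prediction
shap.plots.waterfall(shap_values[0])

# Aggregating many local explanations gives a global view of feature importance
shap.plots.beeswarm(shap_values)

The model-specific TreeExplainer exploits the tree structure to compute SHAP values efficiently, while model-agnostic explainers only query the model's predictions; which one applies depends on the model being explained.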

Speakers
