DISCOVER, CONNECT, GROW

Microsoft Reactor

Join Microsoft Reactor and engage with startups and developers live

Ready to get started with AI? Microsoft Reactor provides events, training, and community resources to help startups, entrepreneurs, and developers build their next business on AI technology. Join us!


Explainable AI (XAI) Course: Explainable AI in practice

March 28, 2023 | 4:00 PM - 6:00 PM Coordinated Universal Time (UTC)

  • Format: Livestream

Topic: Data Science and Machine Learning

Language: Hebrew

The XAI course provides a comprehensive overview of explainable AI, covering both theory and practice, and exploring various use cases for explainability. Participants will learn not only how to generate explanations, but also how to evaluate and effectively communicate these explanations to diverse stakeholders.

The XAI course is run on a voluntary basis by DataNights and Microsoft organizers and is free of charge for participants. It is designed for data scientists with at least two years of hands-on industry experience with machine learning and Python and a basic background in deep learning. Some of the sessions will be held in person at the Microsoft Reactor in Tel Aviv, while others will be conducted virtually.

Course Leaders:
Bitya Neuhof, DataNights
Yasmin Bokobza, Microsoft

What is this session about?
How do you properly incorporate explanations into machine learning projects, and what aspects should you keep in mind?

Over the past few years, the need to explain the output of machine learning models has received growing attention. Explanations not only reveal the reasons behind a model's predictions and increase users' trust in the model, but they can also serve other purposes. To make full use of explanations and incorporate them into machine learning projects, the following aspects should be taken into consideration: the goal of the explanation, the explanation method, and the explanation's quality. In this talk, we will discuss how to select the appropriate explanation method based on the intended purpose of the explanation. Then, we will present two approaches for evaluating explanations, including practical examples of evaluation metrics, while highlighting the importance of assessing explanation quality. Next, we will examine the various purposes explanations can serve, along with the stage of the machine learning pipeline at which they should be incorporated. Finally, we will present a real Microsoft use case of classifying scripts as malware-related and show how high-dimensional explanations can be of benefit in this context.
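
As a concrete illustration of the generate-then-evaluate workflow described above, the snippet below is a minimal Python sketch, assuming scikit-learn and the shap library; the dataset, the random-forest model, and the deletion-style faithfulness check are illustrative assumptions, not the course's material or the specific metrics the speakers will present.

# A minimal sketch (illustrative assumptions, not the course's own material):
# generate local explanations for a classifier, then sanity-check their quality
# with a simple deletion-style faithfulness test.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# 1) Generate explanations: SHAP values attribute each prediction to features.
explainer = shap.TreeExplainer(model)
raw = explainer.shap_values(X_test)

# Depending on the SHAP version, tree classifiers return either a list of
# per-class arrays or one (samples, features, classes) array; keep the
# positive-class attributions either way.
if isinstance(raw, list):
    attributions = np.asarray(raw[1])
elif raw.ndim == 3:
    attributions = raw[:, :, 1]
else:
    attributions = raw

# 2) Evaluate explanation quality with a deletion-style check: replacing the
# top-attributed features of a sample with the training mean should lower the
# predicted probability more than replacing randomly chosen features.
def prob_positive(x):
    return model.predict_proba(x.reshape(1, -1))[0, 1]

baseline = X_train.mean(axis=0)
rng = np.random.default_rng(0)
k = 5
top_drops, random_drops = [], []

for i in range(len(X_test)):
    original = prob_positive(X_test[i])

    # Drop the k features the explanation ranks as most important.
    top_idx = np.argsort(-np.abs(attributions[i]))[:k]
    perturbed = X_test[i].copy()
    perturbed[top_idx] = baseline[top_idx]
    top_drops.append(original - prob_positive(perturbed))

    # Compare against dropping k random features.
    rand_idx = rng.choice(X_test.shape[1], size=k, replace=False)
    perturbed = X_test[i].copy()
    perturbed[rand_idx] = baseline[rand_idx]
    random_drops.append(original - prob_positive(perturbed))

print(f"mean drop, top-{k} SHAP features: {np.mean(top_drops):.4f}")
print(f"mean drop, {k} random features:   {np.mean(random_drops):.4f}")

If the explanation-ranked features do not produce a clearly larger probability drop than random ones, that is a signal to revisit the explanation method before communicating its output to stakeholders.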

Speakers

For questions, please contact us at reactor@microsoft.com