
Discover, Connect, Grow

Microsoft Reactor

Join Microsoft Reactor and engage with startups and developers live

Ready to get started with AI? Microsoft Reactor provides events, training, and community resources to help startups, entrepreneurs, and developers build their next business on AI technology. Join us!

Explainable AI (XAI) Course: Explainable AI in practice

28 March 2023 | 4:00 PM - 6:00 PM, Coordinated Universal Time (UTC)

  • Format: Livestream

Topic: Data Science and Machine Learning

Language: Hebrew

The XAI course provides a comprehensive overview of explainable AI, covering both theory and practice, and exploring various use cases for explainability. Participants will learn not only how to generate explanations, but also how to evaluate and effectively communicate these explanations to diverse stakeholders.

The XAI course is run on a voluntary basis by DataNights and Microsoft organizers and is free of charge for participants. It is designed for data scientists with at least two years of hands-on industry experience in machine learning and Python, and a basic background in deep learning. Some of the sessions will be held in person at the Microsoft Reactor in Tel Aviv, while others will be conducted virtually.

Course Leaders:
Bitya Neuhof, DataNights
Yasmin Bokobza, Microsoft

What is this session about?
How do you properly incorporate explanations into machine learning projects, and
which aspects should you keep in mind?
Over the past few years, the need to explain the output of machine learning models
has received growing attention. Explanations not only reveal the reasons behind a
model's predictions and increase users' trust in the model; they can also serve
a variety of other purposes. To fully utilize explanations and incorporate them
into machine learning projects, the following aspects should be taken into
consideration: the explanation's goal, the explanation method, and the
explanation's quality. In this talk, we will discuss how to select the appropriate
explanation method based on the intended purpose of the explanation. Then, we will
present two approaches for evaluating explanations, including practical examples of
evaluation metrics, while highlighting the importance of assessing explanation quality.
Next, we will examine the various purposes explanations can serve, along with the
stage of the machine learning pipeline at which each explanation should be incorporated.
Finally, we will present a real Microsoft use case of classifying scripts as
malware-related, and show how high-dimensional explanations can help in this context.
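To make the two central ideas of the session concrete — generating an explanation and then measuring its quality — here is a minimal, self-contained sketch. It is not code from the course: the fixed linear "model", the occlusion-style attribution, and the deletion-based faithfulness metric are all illustrative stand-ins for the methods and evaluation metrics the talk will cover.

```python
import numpy as np

# Toy black-box "model": a fixed linear scorer with hypothetical weights,
# standing in for any trained predictor.
WEIGHTS = np.array([3.0, 0.0, -2.0, 0.5])

def predict(x: np.ndarray) -> float:
    return float(WEIGHTS @ x)

def occlusion_attributions(x: np.ndarray, baseline: float = 0.0) -> np.ndarray:
    """Perturbation-based explanation: the attribution of feature i is the
    change in the prediction when feature i is replaced by a baseline value."""
    base_pred = predict(x)
    attrs = np.empty_like(x)
    for i in range(len(x)):
        x_occluded = x.copy()
        x_occluded[i] = baseline
        attrs[i] = base_pred - predict(x_occluded)
    return attrs

def deletion_faithfulness(x: np.ndarray, attrs: np.ndarray, k: int) -> float:
    """Simple faithfulness metric: zero out the k features with the largest
    absolute attribution and report the change in the prediction. A faithful
    explanation should point at features whose removal moves the output."""
    top_k = np.argsort(-np.abs(attrs))[:k]
    x_deleted = x.copy()
    x_deleted[top_k] = 0.0
    return predict(x) - predict(x_deleted)

x = np.array([1.0, 1.0, 1.0, 1.0])
attrs = occlusion_attributions(x)   # for a linear model this recovers weight * value
drop = deletion_faithfulness(x, attrs, k=2)
```

For the linear toy model the occlusion attributions equal `WEIGHTS` exactly, so the evaluation metric can be checked by hand; with a real model, evaluation metrics like this are how one compares competing explanation methods, as discussed in the session.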

Speakers

If you have any questions, please contact reactor@microsoft.com.