Discover, Connect, Grow
Microsoft Reactor
Join Microsoft Reactor and engage with startups and developers live
Ready to get started with AI? Microsoft Reactor provides events, training, and community resources to help startups, entrepreneurs, and developers build new businesses with AI technology. Join us!
Assessing security risk in AI models using Microsoft Counterfit
12 June, 2024 | 11:30 AM - 12:30 PM Coordinated Universal Time (UTC)
Topic: Responsible AI
Language: English
This session covers the Responsible AI aspects of developing an AI model. As part of Responsible AI, developers should account for the security risks a model faces, because preventing an attack requires knowing which attacks are possible. Today this remains one of the less-explored areas of AI development.
We will discuss the categories of attack possible against an AI model, such as black-box and white-box attacks, and introduce Counterfit, an AI security risk assessment tool developed by Microsoft.
Finally, we will conclude the session with a live demo of Microsoft Counterfit against a neural network classification system.
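The white-box category can be made concrete with a small example. The sketch below is not part of the session material: it assumes PyTorch and uses a toy, untrained classifier with dummy data, and it shows the classic Fast Gradient Sign Method (FGSM), a white-box evasion attack in which the attacker uses the model's gradients to craft a perturbed input.

```python
# Minimal sketch of a *white-box* evasion attack (FGSM): the attacker has
# full access to the model and its gradients.
# The tiny classifier and random input below are placeholders for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in neural network classifier (e.g., 28x28 grayscale images, 10 classes).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, y, epsilon=0.1):
    """Fast Gradient Sign Method: perturb the input in the direction that
    increases the loss, bounded by epsilon per pixel."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Dummy input and label, purely to make the sketch runnable.
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])

x_adv = fgsm_attack(x, y)
print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```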
This session will focus on:
A brief introduction to Responsible AI and why securing AI models is a crucial part of it.
A brief overview of the attacks possible against AI models.
A live demo of Microsoft Counterfit against a neural network classifier.
What you will learn from this session:
The attacks possible against AI models (a simple black-box example is sketched after this list).
How to use Microsoft Counterfit to assess the security risk of an AI model.
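For contrast with the white-box sketch above, a black-box attacker can only query the model and observe its outputs, with no access to gradients or internals. The following sketch is not session material; it assumes scikit-learn and a synthetic dataset, and shows the simplest possible black-box evasion attack, a random search for a bounded perturbation that flips the prediction. Tools like Counterfit automate far more sophisticated versions of this idea.

```python
# Minimal sketch of a *black-box* evasion attack: the attacker can only query
# the model's predictions and searches for a small perturbation that changes
# the predicted label. Classifier and data are placeholders for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)

def predict(x):
    """The only access the black-box attacker has: a label for each query."""
    return int(model.predict(x.reshape(1, -1))[0])

def random_search_attack(x, epsilon=0.5, max_queries=1000):
    """Naive black-box attack: try random bounded perturbations until the label flips."""
    original_label = predict(x)
    for _ in range(max_queries):
        x_adv = x + rng.uniform(-epsilon, epsilon, size=x.shape)
        if predict(x_adv) != original_label:
            return x_adv, original_label, predict(x_adv)
    return None, original_label, original_label

x_adv, before, after = random_search_attack(X[0])
print("label before:", before, "| label after:", after, "| attack succeeded:", x_adv is not None)
```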
Further Learning: https://aka.ms/Fundamentals-Responsible-GenAI
Speaker Bio: Dhruvkumar Patel
Dhruv is a senior software engineer at Marvell India. He holds an M.Tech. in Computer Science from NIT Surat and has been actively engaged in AI, ML, generative AI, and adversarial machine learning, with a recent foray into data science over the past few months.
Social Handle: LinkedIn: https://www.linkedin.com/in/dhruvpatel0463/
Prerequisites
https://aka.ms/GitHub/Azure-Counterfit , https://aka.ms/AIsecurity-riskassessmentusing-counterfit