Learn, connect, build
Microsoft Reactor
Join Microsoft Reactor and engage with developers live
Ready to get started with AI and the latest technologies? Microsoft Reactor provides events, training, and community resources to help developers, entrepreneurs, and startups build with AI technology and more. Join us!
Assessing security risk in AI model using Microsoft Counterfit
12 June 2024 | 11:30 AM - 12:30 PM (UTC) Coordinated Universal Time
Topic: Responsible AI
Language: English
This session covers the Responsible AI aspects of developing an AI model. As part of Responsible AI, developers should account for the security risks a model faces, because preventing an attack requires knowing which attacks are possible. This remains one of the less-explored areas of AI development today.
We will discuss the main categories of attacks on AI models, such as black-box and white-box attacks, and introduce Counterfit, an AI risk assessment tool developed by Microsoft.
Finally, we will conclude the session with a live demo of Microsoft Counterfit against a neural network classification system.
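To make the black-box/white-box distinction concrete, here is a minimal NumPy sketch (not Counterfit itself) of both attack styles against a toy linear classifier. The model, weights, and epsilon are hypothetical, chosen purely for illustration: a white-box attacker uses the known gradient (FGSM-style), while a black-box attacker can only query predictions.

```python
import numpy as np

# Toy "model": a linear classifier score = w.x + b, label = 1 if score > 0.
# Weights are hypothetical, for illustration only.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Hard label the attacker observes."""
    return int(w @ x + b > 0)

x = np.array([2.0, 0.5, 1.0])   # clean input, classified as 1

# White-box attack: the attacker knows w, so the gradient of the score
# w.r.t. x is w itself. Step against the gradient sign (FGSM-style).
eps = 1.5
x_whitebox = x - eps * np.sign(w)

# Black-box attack: no access to w; only predict() can be queried.
# Randomly perturb until the predicted label flips.
rng = np.random.default_rng(0)
x_blackbox = x.copy()
for _ in range(1000):
    candidate = x + rng.normal(scale=2.0, size=3)
    if predict(candidate) != predict(x):
        x_blackbox = candidate
        break

print(predict(x), predict(x_whitebox), predict(x_blackbox))
```

The white-box step flips the label with a single gradient-informed move; the black-box search needs many queries, which is why query efficiency is a key metric for black-box attacks.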
This session will focus on:
A brief introduction to Responsible AI and why securing AI models is a crucial part of it.
A brief overview of the attacks possible against AI models.
A live demo of Microsoft Counterfit against a neural network classifier.
What you will learn from this session:
The attacks possible against AI models.
How to use Microsoft Counterfit to assess the security risk of an AI model.
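As a preview of the demo workflow, a Counterfit session typically follows a pick-target, pick-attack, run loop driven from its interactive terminal. The transcript below is an illustrative sketch based on the tool's published README; the target name (`creditfraud`) and attack name (`HopSkipJump`) are examples, and exact commands may differ across Counterfit versions.

```
$ counterfit
counterfit> list targets
counterfit> interact creditfraud
creditfraud> list attacks
creditfraud> use HopSkipJump
creditfraud> run
```

After `run`, Counterfit reports whether the attack produced a successful adversarial example against the selected target.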
Further Learning: https://aka.ms/Fundamentals-Responsible-GenAI
Speaker Bio: Dhruvkumar Patel
Dhruv is a senior software engineer at Marvell India. He holds an M.Tech. in Computer Science from NIT Surat. He has been actively engaged in AI, ML, generative AI, and adversarial machine learning, with a recent foray into data science.
Social Handle: LinkedIn: https://www.linkedin.com/in/dhruvpatel0463/
Prerequisites
https://aka.ms/GitHub/Azure-Counterfit , https://aka.ms/AIsecurity-riskassessmentusing-counterfit