

AI Apps & Agents Dev Days

Join Microsoft Reactor and connect with startups and developers live

Ready to get started with AI? Microsoft Reactor offers events, training, and community resources to help startups, entrepreneurs, and developers build their future businesses on AI technology. Join us!



AI Apps & Agents Dev Days

  • Format: Livestream

Topic: AI Infrastructure

Language: English

  • Events in this series: 8

Brought to you by Microsoft and NVIDIA

Step into a builder’s playground where ideas turn into working AI experiences. AI Apps & Agents Dev Days isn’t about slides; it’s about building. Each month, we tackle real-world challenges, share patterns that work, and experiment with what’s next in AI-driven apps and agent design. Bring your curiosity, your code, and your questions. Leave with something you can show off, and a reason to come back for more.

Upcoming events

Click an event below to learn more and register for individual events.

All times in Coordinated Universal Time (UTC)

Tuesday, December 16, 2025

Scale and Orchestrate Multi-Agent Systems Effortlessly (APAC)

4:00 PM - 5:00 PM (UTC)

Explore how to leverage multi-agent systems in your applications to optimize operations, automate recommendations, and enhance customer experience. The solution uses the Microsoft Agent Framework, OpenAI ChatKit, and the NVIDIA Nemotron model on Azure AI Foundry to connect with store databases, integrate human oversight, and deploy scalable chat agents. This approach enables real-time analytics, predictive insights, and personalized interactions, resulting in better decision-making, greater operational efficiency, and a superior experience for both store managers and customers. A minimal orchestration sketch follows the event details below.

  • Format: Livestream

Topic: AI Infrastructure

Language: English

Details
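The orchestration pattern this session describes (specialist agents behind a single coordinator, with a human approval step) can be sketched without the exact stack named above. The snippet below is a minimal, hypothetical Python illustration against any OpenAI-compatible chat endpoint; it does not use the Microsoft Agent Framework, OpenAI ChatKit, or a real store database, and the endpoint, API key, and model name are placeholder environment variables.

# Minimal multi-agent orchestration sketch (illustrative only, not the session's code).
# Works against any OpenAI-compatible chat endpoint, e.g. a model deployed on Azure AI Foundry.
# CHAT_ENDPOINT, CHAT_API_KEY, and CHAT_MODEL are placeholders you must supply.
import os
from openai import OpenAI

client = OpenAI(base_url=os.environ["CHAT_ENDPOINT"], api_key=os.environ["CHAT_API_KEY"])
MODEL = os.environ.get("CHAT_MODEL", "my-deployment")  # assumed deployment name

AGENTS = {
    "inventory": "You are an inventory analyst. Answer using store stock data.",
    "recommendations": "You are a merchandising agent. Suggest product recommendations.",
}

def ask(system_prompt: str, question: str) -> str:
    """One chat-completion call playing the role of a single agent."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

def route(question: str) -> str:
    """Coordinator: pick a specialist agent, then gate the draft answer on a human."""
    choice = ask(
        "You are a router. Reply with exactly one word: 'inventory' or 'recommendations'.",
        question,
    ).strip().lower()
    answer = ask(AGENTS.get(choice, AGENTS["inventory"]), question)
    # Human-in-the-loop oversight: a person approves before the answer is used.
    print(f"[{choice}] draft answer:\n{answer}")
    if input("Approve? [y/N] ").lower() == "y":
        return answer
    return "Escalated to a human store manager."

if __name__ == "__main__":
    print(route("Which low-stock items should we promote this week?"))

In a real deployment, the keyword router would typically be replaced by tool calling or an agent framework's built-in handoff mechanism, and the approval step would live in a review UI rather than stdin.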

Tuesday, December 16, 2025

Scale and Orchestrate Multi-Agent Systems Effortlessly (AMER)

11:00 PM - 12:00 AM (UTC)

Explore how to leverage multi-agent systems in your applications to optimize operations, automate recommendations, and enhance customer experience. The solution uses the Microsoft Agent Framework, OpenAI ChatKit, and the NVIDIA Nemotron model on Azure AI Foundry to connect with store databases, integrate human oversight, and deploy scalable chat agents. This approach enables real-time analytics, predictive insights, and personalized interactions, resulting in better decision-making, greater operational efficiency, and a superior experience for both store managers and customers.

  • Format: Livestream

Topic: AI Infrastructure

Language: English

Details

Speakers

Past events in this series

All times in Coordinated Universal Time (UTC)

Monday, October 20, 2025

Run open models on Serverless GPUs (EMEA)

4:00 PM - 5:00 PM (UTC)

Brought to you by Microsoft and NVIDIA. Join this live session to learn how to deploy OpenAI’s GPT-OSS models on serverless GPUs using NVIDIA NIM. We’ll show how to spin up gpt-oss on demand and demonstrate scenarios that use its reasoning and tool-calling abilities. Learn how NVIDIA NIM simplifies the journey from experimentation to deploying enterprise AI applications on Azure Container Apps, with pre-optimized models and industry-standard APIs. Discover how to scale open models efficiently while unlocking new agentic workflows. A minimal client-side sketch follows the event details below.

  • Format: Livestream

Topic: AI Infrastructure

Language: English

Watch on demand
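Because NVIDIA NIM serves models through an OpenAI-compatible API, a deployed gpt-oss endpoint can be called with the standard openai client. The sketch below is a hypothetical client-side example only; the base URL, API key, and model identifier are placeholders for whatever your own NIM deployment (for example, on Azure Container Apps serverless GPUs) exposes.

# Hypothetical client call to a gpt-oss model served behind NVIDIA NIM's
# OpenAI-compatible API. NIM_URL, NIM_API_KEY, and NIM_MODEL are placeholders
# for your own deployment; the model identifier is an assumption.
import os
from openai import OpenAI

NIM_URL = os.environ["NIM_URL"]          # e.g. "https://<your-app>.azurecontainerapps.io/v1"
MODEL_ID = os.environ.get("NIM_MODEL", "gpt-oss-20b")  # assumed model identifier

client = OpenAI(base_url=NIM_URL, api_key=os.environ.get("NIM_API_KEY", "none"))

# A simple reasoning-style prompt; tool calling would add a tools=[...] schema.
resp = client.chat.completions.create(
    model=MODEL_ID,
    messages=[{"role": "user", "content": "Plan the steps to migrate a cron job to a queue-based worker."}],
    temperature=0.2,
)
print(resp.choices[0].message.content)

The deployment side (container image, GPU workload profile, scale-to-zero rules) is what the session itself covers; this sketch only shows that, once deployed, the endpoint behaves like any other OpenAI-compatible service.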

Tuesday, October 21, 2025

Run open models on Serverless GPUs (AMER)

8:00 PM - 9:00 PM (UTC)

Brought to you by Microsoft and NVIDIA. Join this live session to learn how to deploy OpenAI’s GPT-OSS models on serverless GPUs using NVIDIA NIM. We’ll show how to spin up gpt-oss on demand and demonstrate scenarios that use its reasoning and tool-calling abilities. Learn how NVIDIA NIM simplifies the journey from experimentation to deploying enterprise AI applications on Azure Container Apps, with pre-optimized models and industry-standard APIs. Discover how to scale open models efficiently while unlocking new agentic workflows.

  • Format: Livestream

Topic: AI Infrastructure

Language: English

Watch on demand

Wednesday, October 22, 2025

Run open models on Serverless GPUs (APAC)

4:00 AM - 5:00 AM (UTC)

Brought to you by Microsoft and NVIDIA. Join this live session to learn how to deploy OpenAI’s GPT-OSS models on serverless GPUs using NVIDIA NIM. We’ll show how to spin up gpt-oss on demand and demonstrate scenarios that use its reasoning and tool-calling abilities. Learn how NVIDIA NIM simplifies the journey from experimentation to deploying enterprise AI applications on Azure Container Apps, with pre-optimized models and industry-standard APIs. Discover how to scale open models efficiently while unlocking new agentic workflows.

  • Format: Livestream

Topic: AI Infrastructure

Language: English

Watch on demand

Tuesday, November 11, 2025

Accelerating AI model performance (EMEA)

6:00 PM - 7:00 PM (UTC)

Join this live session to explore what drives performance in modern AI systems with experts from Microsoft and NVIDIA. We’ll break down how latency and throughput shape responsiveness in large language models and share techniques you can use to improve performance. Learn how hardware, batching, model size, and inference optimizations affect system efficiency, and see benchmarking in action across different configurations in Azure. Discover how to unlock new levels of model performance through advanced infrastructure from Azure and NVIDIA. A small benchmarking sketch follows the event details below.

  • Format: Livestream

Topic: AI Infrastructure

Language: English

Watch on demand
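As a rough companion to the latency and throughput topics above, the sketch below measures per-request latency and sequential generation throughput against any OpenAI-compatible endpoint. It is an illustrative probe, not the benchmarking methodology from the session; the endpoint, key, and deployment name are placeholders.

# Rough latency/throughput probe for an OpenAI-compatible endpoint (illustrative only).
# CHAT_ENDPOINT, CHAT_API_KEY, and CHAT_MODEL are placeholders for your own deployment.
import os
import time
from openai import OpenAI

client = OpenAI(base_url=os.environ["CHAT_ENDPOINT"], api_key=os.environ["CHAT_API_KEY"])
MODEL = os.environ.get("CHAT_MODEL", "my-deployment")  # placeholder deployment name

N_REQUESTS = 8
PROMPT = "Summarize the benefits of request batching in one paragraph."

latencies, total_tokens = [], 0
start = time.perf_counter()
for _ in range(N_REQUESTS):
    t0 = time.perf_counter()
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": PROMPT}],
        max_tokens=128,
    )
    latencies.append(time.perf_counter() - t0)
    total_tokens += resp.usage.completion_tokens  # count generated tokens only
elapsed = time.perf_counter() - start

print(f"mean latency : {sum(latencies) / len(latencies):.2f} s")
print(f"max latency  : {max(latencies):.2f} s")
print(f"throughput   : {total_tokens / elapsed:.1f} generated tokens/s (sequential)")

A production benchmark would issue concurrent requests (to exercise server-side batching), track time-to-first-token separately from total latency, and sweep prompt and output lengths across configurations.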

Thursday, November 13, 2025

Accelerating AI model performance (AMER)

12:00 AM - 1:00 AM (UTC)

Join this live session to explore what drives performance in modern AI systems with experts from Microsoft and NVIDIA. We’ll break down how latency and throughput shape responsiveness in large language models and share techniques you can use to improve performance. Learn how hardware, batching, model size, and inference optimizations affect system efficiency, and see benchmarking in action across different configurations in Azure. Discover how to unlock new levels of model performance through advanced infrastructure from Azure and NVIDIA.

  • Format: Livestream

Topic: AI Infrastructure

Language: English

Watch on demand

Thursday, November 13, 2025

Accelerating AI Model Performance (APAC)

4:00 AM - 5:00 AM (UTC)

Join this live session to explore what drives performance in modern AI systems with experts from Microsoft and NVIDIA. We’ll break down how latency and throughput shape responsiveness in large language models and share techniques you can use to improve performance. Learn how hardware, batching, model size, and inference optimizations affect system efficiency, and see benchmarking in action across different configurations in Azure. Discover how to unlock new levels of model performance through advanced infrastructure from Azure and NVIDIA.

  • Format: Livestream

Topic: AI Infrastructure

Language: English

Watch on demand
