
Discover, Connect, Grow

Microsoft Reactor

Join Microsoft Reactor and engage with startups and developers live

Ready to get started with AI? Microsoft Reactor offers events, training, and community resources to help startups, entrepreneurs, and developers build new businesses on AI technology. Join us!



Improving Large Language Model by Systematically Improving its Data

26 February, 2024 | 12:00 PM - 1:00 PM (UTC) Coordinated Universal Time

  • Format: Livestream

Topic: Data Science and Machine Learning

Language: English

Labeled data powers AI/ML in the enterprise, but real-world datasets have been found to contain between 7% and 50% annotation errors. Imperfectly labeled text data hampers the training (and evaluation) of ML models across tasks like intent recognition, entity recognition, and sequence generation. Although pretrained LLMs are equipped with a lot of world knowledge, their performance is adversely affected by noisy training data (as noted by OpenAI).

In this talk, we illustrate data-centric techniques to mitigate the effect of label noise without changing any code related to model architecture, hyperparameters, or training. These data quality improvement techniques should thus remain applicable even for future advanced LLMs like GPT-10.
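To make the idea concrete, here is a minimal sketch of one common data-centric technique for flagging likely label errors: a confident-learning-style rule that flags an example when the model's predicted probability for its given label falls below that class's average self-confidence. This is an illustrative assumption about the general approach, not necessarily the exact method presented in the talk; the `find_label_issues` helper and the toy data are hypothetical.

```python
import numpy as np

def find_label_issues(labels, pred_probs):
    """Flag likely label errors (confident-learning-style sketch).

    An example is suspect if the model's probability for its given
    label is below the average self-confidence of that class.
    """
    labels = np.asarray(labels)
    pred_probs = np.asarray(pred_probs)
    n_classes = pred_probs.shape[1]
    # Per-class threshold: mean predicted probability for class c,
    # averaged over the examples whose given label is c.
    thresholds = np.array([
        pred_probs[labels == c, c].mean() if np.any(labels == c) else 1.0
        for c in range(n_classes)
    ])
    # Model's confidence in each example's given label.
    self_conf = pred_probs[np.arange(len(labels)), labels]
    return np.flatnonzero(self_conf < thresholds[labels])

# Toy example: 4 examples, 2 classes; example 2 is mislabeled.
labels = [0, 0, 0, 1]
pred_probs = [[0.9, 0.1],
              [0.8, 0.2],
              [0.1, 0.9],   # given label 0, but model is confident it's class 1
              [0.2, 0.8]]
issues = find_label_issues(labels, pred_probs)
print(issues)  # → [2]
```

Because the check operates only on labels and out-of-sample predicted probabilities, it requires no changes to model architecture, hyperparameters, or training code, which is what makes such techniques model-agnostic.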

Speakers

For questions, please contact us at reactor@microsoft.com