Speaker: Oleksandr Yaremchuk, Principal Engineer, LLMs and Open-Source Initiatives, Protect AI
In summer 2023, recognizing the urgent need to secure Large Language Model (LLM) applications transitioning from proof of concept to production, we introduced LLM Guard. This leading open-source toolkit is built to protect LLM applications, featuring an advanced suite of 14 input and 20 output scanners. Additionally, our prompt injection detection model surpassed 2.5 million downloads within its first month, and our work was further recognized with a Google Patch Reward. In our talk, we'll share the journey of creating LLM Guard, the challenges we faced, the solutions we discovered, and how we've helped organizations implement the toolkit in real-world scenarios. We'll also touch on the lessons we've learned and the future opportunities we see for enhancing LLM security. This session is essential for anyone looking to deploy LLM applications to production with confidence.
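To illustrate the input/output scanner pattern the abstract describes, here is a minimal conceptual sketch in plain Python. The scanner names, classes, and logic below are hypothetical illustrations of the pattern, not LLM Guard's actual API or implementation:

```python
# Conceptual sketch of the input/output scanner pattern used by toolkits
# like LLM Guard. All names and logic here are illustrative assumptions,
# not the library's real implementation.
import re


class BannedSubstrings:
    """Hypothetical input scanner: flag prompts containing disallowed phrases."""

    def __init__(self, banned):
        self.banned = [b.lower() for b in banned]

    def scan(self, text):
        # Return the (unmodified) text and whether it passed the check.
        valid = not any(b in text.lower() for b in self.banned)
        return text, valid


class RedactEmails:
    """Hypothetical output scanner: redact email addresses from responses."""

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def scan(self, text):
        sanitized = self.EMAIL.sub("[REDACTED]", text)
        return sanitized, True


def run_scanners(scanners, text):
    """Apply each scanner in order; stop early if one rejects the text."""
    for scanner in scanners:
        text, valid = scanner.scan(text)
        if not valid:
            return text, False
    return text, True


# Input side: check the user prompt before it reaches the model.
prompt, prompt_ok = run_scanners(
    [BannedSubstrings(["ignore previous instructions"])],
    "Summarize this article, please.",
)

# Output side: sanitize the model response before returning it.
response, _ = run_scanners(
    [RedactEmails()],
    "Contact alice@example.com for details.",
)
```

The key design idea is that input scanners guard what reaches the model while output scanners guard what leaves it, and both share a common interface so pipelines can be composed freely.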