Building Production-Ready RAG Applications: Jerry Liu

338,779 views

AI Engineer

A day ago

Comments
@joxa6119 11 months ago
So far the most complete and clear LLM RAG walkthrough video on KZbin.
@MatBat__ 8 months ago
100%
@ReflectionOcean 9 months ago
00:00:49 Fix the model by creating a data pipeline to add context into the prompt.
00:01:33 Understand the paradigms of retrieval augmentation and fine-tuning for language models.
00:02:00 Learn about building a QA system using data ingestion and querying components.
00:02:07 Explore lower-level components to understand data ingestion and querying processes.
00:03:01 Address challenges with naive RAG applications, such as poor response quality.
00:04:02 Improve retrieval performance by optimizing data storage and the pipeline.
00:04:14 Enhance the embedding representation for better performance.
00:04:45 Implement advanced retrieval methods like reranking and recursive retrieval.
00:05:18 Incorporate metadata filtering to add structured context to text chunks.
00:06:27 Experiment with small-to-big retrieval for more precise retrieval results.
00:07:14 Consider embedding references to parent chunks for improved retrieval.
00:09:31 Explore the use of agents for reasoning and more advanced analysis.
00:12:12 Fine-tune the RAG system to optimize specific components for better performance.
00:17:01 Generate a synthetic query dataset from raw text chunks using LLMs to fine-tune an embedding model.
00:17:12 Fine-tune the base model itself, or fine-tune an adapter on top of the model, to improve performance.
00:17:16 Consider fine-tuning an adapter on top of the model: it does not require the base model's weights, and it avoids reindexing the entire document corpus when fine-tuning the query side.
00:18:00 Explore generating a synthetic dataset with a bigger model like GPT-4 and distilling it into a weaker LLM like 3.5 Turbo to improve chain of thought, response quality, and structured outputs.
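The "small-to-big" idea at 00:06:27 can be sketched in a few lines of Python. This is a toy illustration, not LlamaIndex's actual implementation: the word-overlap `score` function stands in for a real embedding model, and all names and data below are invented for the example.

```python
# Small-to-big retrieval: score small child chunks for precision,
# then hand the larger parent chunk to the LLM for context.

def score(query: str, text: str) -> int:
    """Toy relevance score: shared lowercase words (stand-in for embeddings)."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def small_to_big_retrieve(query, parents, top_k=1):
    """Split each parent chunk into sentence-level children, score the
    children, and return the parents of the best-scoring children."""
    scored = []
    for pid, parent in enumerate(parents):
        for child in parent.split(". "):
            scored.append((score(query, child), pid))
    scored.sort(reverse=True)
    picked, seen = [], set()
    for s, pid in scored:
        if pid not in seen:
            seen.add(pid)
            picked.append(parents[pid])
        if len(picked) == top_k:
            break
    return picked

parents = [
    "RAG adds context to prompts. Retrieval quality drives answer quality.",
    "Fine-tuning bakes knowledge into weights. It needs training data.",
]
print(small_to_big_retrieve("how does retrieval affect answer quality", parents))
```

In a real pipeline the child chunks carry a reference to their parent in a vector store, and the parent text is fetched at synthesis time.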
@kashishmukheja7024 11 months ago
🎯 Key Takeaways for quick navigation:
01:44 🧩 The current RAG stack for building a QA system consists of two main components: data ingestion and data querying (retrieval and synthesis).
03:08 🚧 Challenges with naive RAG include issues with response quality, bad retrieval, low precision, hallucination, fluff in returned responses, low recall, and outdated information.
04:31 🔄 Strategies to improve RAG performance involve optimizing various aspects, including the data, the retrieval algorithm, and synthesis. Techniques include storing additional information, optimizing the data pipeline, adjusting chunk sizes, and optimizing the embedding representation.
06:50 📊 Evaluation of RAG systems involves assessing both retrieval and synthesis. Retrieval evaluation includes ensuring returned content is relevant to the query, while synthesis evaluation examines the quality of the final response.
08:30 🛠️ To optimize RAG systems, start with "table stakes" techniques like tuning chunk sizes, better pruning, and metadata filters integrated with vector databases.
12:29 🧐 Advanced retrieval methods, such as small-to-big retrieval and embedding a reference to the parent chunk, can enhance precision by retrieving more granular information.
14:42 🧠 More advanced concepts, like multi-document agents, allow reasoning beyond synthesis, modeling documents as sets of tools for tasks such as summarization and QA.
16:23 🎯 Fine-tuning in RAG systems is crucial to optimize specific components, such as embeddings, for better performance. It involves generating synthetic query datasets and fine-tuning either the base model or an adapter on top of it.
18:15 📚 Documentation on production RAG and fine-tuning, including distilling knowledge from larger models into weaker ones, is available for further exploration.
Made with HARPA AI
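The metadata-filter technique in the 08:30 takeaway can be sketched as a pre-filter applied before similarity ranking. This is a toy stand-in: the word-overlap `similarity` replaces real vector search, and the chunk data is invented for the example.

```python
# Metadata-filtered retrieval: attach structured metadata to each chunk
# and filter candidates before scoring them against the query.

def similarity(query, text):
    """Toy similarity: shared lowercase words (stand-in for embeddings)."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve(query, chunks, filters=None, top_k=2):
    """chunks: list of {"text": ..., "metadata": {...}} dicts."""
    candidates = [
        c for c in chunks
        if not filters or all(c["metadata"].get(k) == v for k, v in filters.items())
    ]
    candidates.sort(key=lambda c: similarity(query, c["text"]), reverse=True)
    return [c["text"] for c in candidates[:top_k]]

chunks = [
    {"text": "Revenue grew 20 percent in 2023.", "metadata": {"year": 2023}},
    {"text": "Revenue grew 5 percent in 2021.", "metadata": {"year": 2021}},
]
print(retrieve("revenue growth", chunks, filters={"year": 2023}, top_k=1))
```

Vector databases expose the same idea natively, applying the metadata filter before (or during) the nearest-neighbor search.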
@2200venkat 7 months ago
So far this is the best presentation on RAG I have come across in the last couple of months.
@postnetworkacademy a month ago
This is a great overview of the transformative impact of Large Language Models and the exciting developments around Retrieval Augmented Generation (RAG). Jerry Liu's talk seems like a must-watch for anyone interested in building and optimizing LLM-powered applications on private data. It's inspiring to see experts like Jerry, with his impressive background in AI research and engineering, sharing insights on how to tackle the challenges of productionizing RAG systems. Looking forward to exploring more at the AI Engineer World's Fair 2024!
@streetchronicles5693 a year ago
Thank you not just for putting this together, but for making sense of it all! In 18 min!? Amazing!
@MatBat__ 8 months ago
Thank you very much for this. In this age of LLMs it is getting more and more important to be able to measure their accuracy and efficacy. I've been working with problems like this since the beginning of 2024 and it's been such an interesting topic to learn about. Cheers and thanks for the upload!
@Bball1129 a year ago
Your distilled video has almost no knowledge loss over hours of coursework. Great work!
@minwang2182 11 months ago
Very deep talk! Really appreciated it and learned a lot.
@gopikrishna8063 a year ago
I thoroughly enjoyed your presentation, Jerry Liu. Thanks for the deep methods that can be applied to traditional RAG.
@UncleDao a year ago
I was thoroughly impressed by the depth of your insights and the clarity of your delivery. Jerry Liu's ability to distill complex concepts into understandable terms was remarkable, and I particularly enjoyed how you illustrated the practical applications of RAG in various fields. Would it be possible for you to share the slides from Jerry Liu's presentation?
@CsabaTothMr a year ago
There wasn't any filler. Down to the point from beginning to end. He gave a similar talk at Silicon Valley DevFest AI Edition; I was impressed.
@carlomartinotti3649 4 months ago
This is exactly what I needed, when I needed it. Big props!
@justy1337 10 months ago
I love Jerry's approach to identifying intuition and solution
@Ke_Mis 8 months ago
Really nice presentation skills, Jerry!
@believeM668 3 months ago
Amazing video. Helped a lot!
@jasonzhang6534 9 months ago
Short and sweet presentation. Very clear.
@bhaskartripathi 7 months ago
Very nice presentation and very practical tips for enterprise RAGs
@Breaking_Bold 9 months ago
Excellent presentation on RAG
@RealUniquee 9 months ago
Thanks for your hard work. Really learned a lot.
@anne-marieroy8812 7 months ago
Thank you for this excellent presentation, very much appreciated
@laup4321 2 months ago
12:56 interesting expanding on smaller chunks
@SeanTechStories 6 months ago
This is an awesome video 🎉
@hiiamlawliet480 a year ago
Can anyone share the presentation link mentioned at 5:35?
@ashwinmanghat4416 a year ago
Any luck on this?
@hiiamlawliet480 a year ago
Nope
@deeghalbhaumik3779 a year ago
docs.google.com/presentation/d/1GWjchMiY0LQ8Bc8e7NAkutOzpaTsfn487XHwbGIqKvo/mobilepresent?slide=id.p
@funkymunky8787 10 months ago
kzbin.info/www/bejne/q5KcZIqKn66BbdUsi=Kp0VrpPkDuVJ_HGC It's concerning none of you could find this.
@shopbc5553 11 months ago
Are there any take-aways here that can help an average user generate better results using a standard UI?
@oceanlifecafe 4 months ago
He speaks like the guys in "The Californians". I keep expecting him to say "turn right on Ocean, left on Pico all the way to ...."
@chendeheng8196 a year ago
🎯 Key Takeaways for quick navigation:
00:01 🎤 Intro: Jerry introduces his company and today's topic, building production-ready RAG applications.
00:23 📚 LLM use cases: recent AI applications including knowledge search, QA, conversational agents, workflow automation, and document processing.
01:03 🔍 Two main approaches to giving LLMs data understanding: retrieval augmentation, which adds context from data sources to the model's input prompt, and fine-tuning, which bakes knowledge into the model by training its weights.
01:44 📊 Building RAG: the architecture consists of data ingestion and data querying (retrieval and synthesis). Jerry recommends learning both to understand how the components work.
03:08 🚧 Challenges of RAG: performance issues including response quality, retrieval problems, stale data, and LLM issues; retrieval can suffer from low precision, hallucination, and low recall.
05:27 🧪 Evaluating RAG systems: retrieval evaluation and synthesis evaluation, and the importance of defining benchmarks to measure performance.
08:30 🧩 Optimizing RAG systems: methods from basic to advanced, including chunk-size tuning, metadata filtering, advanced retrieval, and agents.
16:23 🔄 Fine-tuning and outlook: the potential benefits of fine-tuning LLMs, and generating synthetic datasets to improve the performance of weaker LLMs.
Made with HARPA AI
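The retrieval-evaluation step in the 05:27 takeaway is commonly measured with hit rate and mean reciprocal rank (MRR) over a labeled (query, expected chunk) set. A minimal sketch, with invented chunk ids:

```python
# Retrieval evaluation: given ranked chunk ids per query and the
# ground-truth chunk id, compute hit rate @ k and MRR.

def hit_rate(results, expected, k=3):
    """Fraction of queries whose expected chunk appears in the top k."""
    hits = sum(1 for r, e in zip(results, expected) if e in r[:k])
    return hits / len(expected)

def mrr(results, expected):
    """Mean reciprocal rank of the expected chunk (0 if not retrieved)."""
    total = 0.0
    for r, e in zip(results, expected):
        if e in r:
            total += 1.0 / (r.index(e) + 1)
    return total / len(expected)

# Ranked chunk ids returned for two queries, and the ground-truth chunk id.
results = [["c1", "c7", "c3"], ["c9", "c2", "c4"]]
expected = ["c1", "c2"]
print(hit_rate(results, expected), mrr(results, expected))  # 1.0 0.75
```

The labeled set can itself be synthetic: generate questions from each chunk with an LLM, which is the same trick the talk describes for fine-tuning embeddings.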
@arpitbansal4911 3 months ago
Was anyone able to open the Colab link mentioned in one of the slides? If yes, could you please share it?
@huonglarne 8 months ago
Wow, thanks for the presentation!
@420_gunna a year ago
Awesome rundown!
@antoniopassarelli a year ago
The V stands for cmd/ctrl V
@shivamverma-wm3vv a year ago
I am using the "GPT-2" model. Its responses are correct, but the response time is about 10 seconds on my local PC and 35 seconds on the EC2 server. Can you tell me how to reduce the response time? You could share a server configuration, or suggest a good GPT-2 variant or a smaller model.
@Kevin.Kawchak 8 months ago
Thank you
@holonaut 10 months ago
I use the hyper-naive approach: provide the LLM with all the knowledge keys in my MySQL DB and let it tell me which ones are most likely to be helpful for answering the current prompt. Then just load the entries based on the keys the LLM told me and inject them into the second prompt, which the LLM is then supposed to answer. (Yes, vector search would be way more fitting for this, but I'm a peasant and don't even have the slightest clue of how to implement it.)
@AmeeliaK 2 months ago
It's five lines of code in the LlamaIndex docs. Works well out of the box for simple data.
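The two-prompt flow @holonaut describes (ask the model which keys look relevant, then answer with the loaded entries injected) can be sketched like this. The `llm` function is a stub that fakes both calls with string matching; in practice it would be a real chat-completion API, and the keys and policy texts are invented for the example.

```python
# Two-step "key selection" retrieval: prompt 1 picks relevant keys,
# prompt 2 answers using only the entries loaded for those keys.

KNOWLEDGE = {
    "shipping_policy": "Orders ship within 2 business days.",
    "refund_policy": "Refunds are issued within 14 days.",
}

def llm(prompt: str) -> str:
    """Stub for a chat-completion call -- replace with a real LLM API."""
    if prompt.startswith("SELECT:"):
        # Fake the key-selection step with keyword matching on the question.
        q = prompt.split("Q:")[-1].lower()
        return ",".join(k for k in KNOWLEDGE if k.split("_")[0] in q)
    # Fake the answering step by echoing the injected context.
    return prompt.split("Context:")[-1].strip()

def answer(question: str) -> str:
    keys = llm("SELECT: pick the relevant keys. Q: " + question)
    context = " ".join(KNOWLEDGE[k] for k in keys.split(",") if k in KNOWLEDGE)
    return llm(f"Answer from the context only. Q: {question} Context: {context}")

print(answer("How fast is shipping?"))  # Orders ship within 2 business days.
```

Swapping the key-selection prompt for vector search over embedded entries is the usual next step; the injection stage stays the same.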
@kishanprajapati6170 7 months ago
Can I get the presentation?
@amethyst1044 a year ago
Can I get these slides somewhere? Compact info, thank you!
@RyanStuart85 11 months ago
RAG is an interesting idea. If the predictions are right and these models are only going to get better, wouldn’t it make sense to give them direct access to the embedding DB and let the model decide how best to handle retrieval rather than having the humans do it?
@Pmahya 7 months ago
No, but that's the whole point of human feedback and RLHF. It would be great to give the LLM full access to the DB, but then its coherent biases would eventually lead to overfitting.
@mso2802 5 months ago
What music is that, by the way?
@tecnopadre a year ago
Great one!
@swetharangaraj4521 11 months ago
What is the process if I want to query and chat over cloud MongoDB using an LLM and RAG?
@fintech1378 a year ago
impressed
@robinmountford5322 a year ago
I still haven't managed to find an argument for RAG over LoRA. RAG's biggest Achilles heel is context size. It almost seems to me to be a band-aid, especially when a year from now context size may not even be an issue. We can spend months perfecting our RAG pipeline and end up throwing it all away a month later due to it being redundant.
@namankapasi6463 a year ago
Pretty sure RAG avoids hallucination much better than LoRA does. Fine-tuning is good for changing the language style but doesn't necessarily work best when you're looking for specific info, from the way I understand it. Also, RAG allows you to plug in different data without having to go back and re-fine-tune your model with every update.
@robinmountford5322 a year ago
@@namankapasi6463 I have noticed with LORA you don't get back the specifics of the trained data, but rather an interpreted version of it (which in my experiments has been jaw-dropping). If RAG functions more like a search engine then I can see how these could both be useful. So my guess, after reading your reply, is LORA would be suited to emulating specific writing styles and RAG would be good for technical data retrieval or for extracting paragraphs from text with references? Makes sense then, since you would probably only need to train in a specific writing style once. Even so, when context size increases dramatically will we still use RAG and not just add the content into the main prompt as is? Or does the vector process make the entire process more efficient, regardless?
@namankapasi6463 a year ago
I mean, you can think of RAG as restricting your output to the data you're giving it: the user makes a request to the model, the model looks at the vector database and responds from the database first. Not saying I'm an expert, but I'm 99% sure. Also, in regards to efficiency, higher context windows are expensive and repetitive, so I'd avoid them; even though OpenAI caching is pretty good, that's not the case for a lot of open-source models.
@robinmountford5322 a year ago
@@namankapasi6463 Ok great. Thanks for shedding some extra light here.
@marcvayn a year ago
You need to do both for optimal performance. Everything you put in RAG should be data that may need to change in real time, e.g. price lists, spec sheets, the latest instruction manuals, product updates, etc. Most everything else you can fine-tune; however, if you plan on running sizable projects your fine-tuning could take weeks, or even days. If you have to constantly adjust your fine-tuning, this is not very practical, so you may wish to move part of your data into RAG. Additionally, you need to play with chunks in order to better organize your training data. Of course, much depends on your project.
@aaronlang9533 a year ago
this is pretty deep
@dantesbytes 2 months ago
More like this
@alitomix 6 months ago
All the documentation became obsolete in a couple of months. Since I can't find useful examples with the current stuff, I'm moving to LangChain.
@foju9365 11 months ago
All these videos today start with a cyberpunk theme music
@AmeeliaK 2 months ago
When somebody who looks like 19 says that Information Retrieval is already one or two decades old, I feel so old 😂 Come on, Lucene is already more than 20 years old 😅
@TheKnowledgeAlchemist a year ago
I just want an LLM to read my google docs and let me ask questions about stuff, then use it to write and add into my drive
@deeghalbhaumik3779 a year ago
Seems straightforward to me. Just encode your docs into vector embeddings, then search whatever you need, and you can use the information to write stuff by creating appropriate prompt templates depending on what you want it to write. Search using any LLM; you can use OpenAI or the ones on Hugging Face.
@TheKnowledgeAlchemist a year ago
@@deeghalbhaumik3779 found lm studio and embedding models. This is working now
@fanebone8732 3 months ago
Google’s NotebookLM does this exactly
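The flow @deeghalbhaumik3779 sketches (embed each doc, embed the query, return the closest doc) looks roughly like this. The bag-of-words `embed` is a toy stand-in for a real embedding model (an API or sentence-transformers), and the documents are invented for the example.

```python
# Embedding search over docs: vectorize everything, rank by cosine similarity.
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words counts (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs):
    """Return the doc closest to the query in embedding space."""
    qv = embed(query)
    return max(docs, key=lambda d: cosine(qv, embed(d)))

docs = [
    "Meeting notes: budget review scheduled for March.",
    "Recipe draft: tomato soup with basil.",
]
print(search("when is the budget review", docs))
```

The retrieved doc would then be injected into a prompt template for the LLM to answer from, or to draft new text with.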
@foju9365 10 months ago
I wish they didn't use the term QA for question answering and used "Q&A" instead. It leads to a lot of confusion for those of us developing production-grade systems that require quality assurance :)
@bababear1745 8 months ago
Are you working on an AI-based quality assurance / quality audit system? Would love to connect and work together.
@mosesdaudu 11 months ago
Nice intro music 😂
@sanjaybhatikar 6 months ago
Are the comments AI-generated? They seem like variants of the same glowing, effusive prompt.
@AtomicPixels 11 months ago
Why does every tech bro speak as if every comment is cooler when in the tone of a question?
@sanjaybhatikar 6 months ago
Llama Index has poor documentation despite claims to the contrary and causes dependency conflicts off the bat.
@jerseyboy66 9 months ago
Feels like MSFT Copilot is the RAG killer…
@paraga123456789 a year ago
What are you trying to say?
@ankitait2 a year ago
AI basically consumes data like your body consumes a large cube of paneer: breaking it into smaller pieces and digesting it with stomach juices to know it is paneer. I think he's sharing tricks so the AI calls paneer "paneer" and doesn't call it aloo.
@user-he8qc4mr4i 8 months ago
Million Things to do = initiative time before going to prod :-/
@vishnurajbhar007 6 months ago
Don't wear a hat next time; you didn't come to a fashion show. These are serious, world-changing talks. I didn't get anything because of the hat 🙄
@2AoDqqLTU5v 5 months ago
You have a very low IQ if a hat can throw you off this much.
@_iampavan a year ago
Short and Precise
@gu5on16 4 months ago
thank you