Mastering LangChain RAG: Integrating Chat History (Part 2)

1,279 views

Eric Vaillancourt

1 day ago

Comments: 16
@GopalNoutiyal 28 days ago
Please continue making more videos. I appreciate your effort and contribution. Thank you.
@srirammoorthy9337 3 months ago
@Eric, thank you for posting this video. Your videos are clear and easy to understand. I have been reading through the LangChain docs and did not find them easy to follow. Please continue making more videos. I appreciate your effort and contribution. Thank you. Regards, Sriram Moorthy
@andrew.derevo A month ago
Thanks a lot for the video. It's interesting how exactly the chain passes the messages to the LLM. I'm not 100% sure, but I think they need extra formatting.
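For reference, here is a small sketch of how a ChatPromptTemplate with a MessagesPlaceholder turns the chat history into the message list that is sent to the chat model (an illustration under standard LangChain assumptions, not necessarily the exact code from the video; the prompt text and sample messages are placeholders).

```python
# Sketch: the prompt template does the formatting -- it produces a list of
# typed system/human/AI messages, so the caller needs no extra formatting.
# (Illustrative only; prompt text and sample messages are assumptions.)
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using the context:\n\n{context}"),
    MessagesPlaceholder("chat_history"),
    ("human", "{input}"),
])

# Invoking the prompt returns a ChatPromptValue; to_messages() shows the
# exact message list the chat model would receive.
messages = prompt.invoke({
    "context": "LangChain is a framework for LLM applications.",
    "chat_history": [
        HumanMessage(content="What is LangChain?"),
        AIMessage(content="A framework for building LLM applications."),
    ],
    "input": "Who maintains it?",
}).to_messages()

for m in messages:
    print(type(m).__name__, ":", m.content)
```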
@sangram7153 A month ago
Really great videos!! Thank you so much.
@BertrandGoetzmann A month ago
Hello Eric, thank you very much for this video. It's interesting to understand how the question can be contextualized using the history to query the vector store; however, is there really any benefit to using the history in the qa_prompt? Isn't it enough to just have the context and the question? What do you think?
@eric_vaillancourt A month ago
The history is important when you ask a follow-up question.
@BertrandGoetzmann A month ago
@eric_vaillancourt I agree. I assumed the input injected into the qa_prompt came from the reformulated question, but I suppose that's not the case.
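For readers following this thread, here is a minimal sketch of the history-aware retriever pattern being discussed (assuming the standard LangChain create_history_aware_retriever / create_retrieval_chain helpers; the vector store, model name, and prompt wording are placeholders, and this is not necessarily the exact code from the video). In this standard pattern, the reformulated question is used only to query the vector store, while the qa_prompt still receives the original question together with the chat history, which is why the history still matters in the answer step.

```python
# Sketch of the standard history-aware RAG pattern (assumptions: an
# existing `vectorstore`, OPENAI_API_KEY set, illustrative prompt text).
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
retriever = vectorstore.as_retriever()  # placeholder: an existing vector store

# Step 1: rewrite a follow-up question into a standalone question so the
# vector store can be queried without the rest of the conversation.
contextualize_q_prompt = ChatPromptTemplate.from_messages([
    ("system", "Given the chat history and the latest user question, "
               "rewrite the question so it can be understood on its own."),
    MessagesPlaceholder("chat_history"),
    ("human", "{input}"),
])
history_aware_retriever = create_history_aware_retriever(
    llm, retriever, contextualize_q_prompt
)

# Step 2: the answer prompt receives the *original* question plus the
# chat history; only the retrieval step saw the reformulated one.
qa_prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer the question using the retrieved context:\n\n{context}"),
    MessagesPlaceholder("chat_history"),
    ("human", "{input}"),
])
question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)
rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)
```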
@sangram7153 A month ago
Which approach is more reliable to use? This one, or the LangChain conversation memories like ConversationBufferMemory, ConversationBufferWindowMemory, ConversationSummaryMemory, and ConversationSummaryBufferMemory?
@eric_vaillancourt A month ago
It all depends on your needs. In my experience, summarizing is a good technique.
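As one illustration of the summarizing technique mentioned in this reply, here is a minimal sketch using LangChain's ConversationSummaryMemory from the list above (an assumption for illustration; the video builds its own history handling, and the model name and sample exchange are placeholders).

```python
# Sketch: keep a running summary of the conversation instead of the raw
# turns, so the history stays short no matter how long the chat gets.
# (Assumes OPENAI_API_KEY is set; sample exchange is illustrative.)
from langchain.memory import ConversationSummaryMemory
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

memory = ConversationSummaryMemory(
    llm=llm, memory_key="chat_history", return_messages=True
)
memory.save_context(
    {"input": "What is RAG?"},
    {"output": "Retrieval-Augmented Generation combines retrieval with an LLM."},
)
# The stored "chat_history" is now a condensed summary, not the full turns.
print(memory.load_memory_variables({})["chat_history"])
```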
@sangram7153 A month ago
I have some queries. Let's say we ask 100+ questions; will that hit the LLM context limit? Are you passing the chat history in the prompt to the LLM?
@eric_vaillancourt A month ago
At some point, you will have to do some memory management, like summarizing the conversation.
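One possible way to do that memory management, besides summarizing, is to trim the stored history to a fixed window before each call. The sketch below assumes a recent langchain_core that exports trim_messages; it is not the method shown in the video, and the sample messages and budget are placeholders.

```python
# Sketch: trim the history so only the most recent messages are sent,
# keeping the prompt under the model's context limit.
from langchain_core.messages import AIMessage, HumanMessage, trim_messages

chat_history = [
    HumanMessage(content="What is task decomposition?"),
    AIMessage(content="Breaking a large task into smaller steps..."),
    # ... potentially hundreds of earlier turns ...
]

trimmed_history = trim_messages(
    chat_history,
    max_tokens=10,       # with token_counter=len this means "keep the last 10 messages"
    strategy="last",     # keep the most recent messages
    token_counter=len,   # pass a chat model here instead to count real tokens
    start_on="human",    # keep human/AI pairs aligned after trimming
)
```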
@lesptitsoiseaux A month ago
Excellent!
@sangram7153 A month ago
How many conversation turns are passed in the prompt every time? Can we change that number? How?
@eric_vaillancourt A month ago
You decide. It also depends on the context limit of the LLM.
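A minimal sketch of "you decide": keep only the last k question/answer pairs when invoking the chain. It assumes the rag_chain from the earlier sketch, and k is an arbitrary choice to tune against the model's context limit, not a value from the video.

```python
# Sketch: cap the history at the k most recent question/answer pairs.
# (`rag_chain` comes from the earlier sketch; k is an arbitrary choice.)
from langchain_core.messages import AIMessage, HumanMessage

k = 5               # number of past question/answer pairs to send each time
chat_history = []   # full list of HumanMessage / AIMessage objects

def ask(question: str) -> str:
    # Only the most recent k pairs (2 * k messages) go into the prompt.
    recent_history = chat_history[-2 * k:]
    result = rag_chain.invoke({"input": question, "chat_history": recent_history})
    chat_history.append(HumanMessage(content=question))
    chat_history.append(AIMessage(content=result["answer"]))
    return result["answer"]
```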