@srirammoorthy9337 3 months ago
@Eric, thank you for posting this video. Your videos are clear and easy to understand. I have been reading through the LangChain docs and did not find them easy to follow. Please continue making more videos. I appreciate your effort and contribution. Thank you. Regards, Sriram Moorthy
@andrew.derevo a month ago
Thanks a lot for the video. It's interesting how exactly the chain passes the messages to the LLM. I'm not 100% sure, but I think they need extra formatting.
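In LangChain that formatting is typically handled by the prompt template itself. A minimal sketch (the history contents are made up for illustration) of how a MessagesPlaceholder splices prior turns into the final list of typed chat messages sent to the LLM:

```python
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder("chat_history"),  # prior turns are inserted here
    ("human", "{input}"),
])

# The placeholder splices the history in as typed chat messages, so no
# extra string formatting is needed before calling the LLM.
messages = prompt.format_messages(
    chat_history=[HumanMessage("Hi!"), AIMessage("Hello! How can I help?")],
    input="What did I just say?",
)
for m in messages:
    print(type(m).__name__, ":", m.content)
```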
@sangram7153 a month ago
Really great videos!! Thank you so much.
@BertrandGoetzmann a month ago
Hello Eric, thank you very much for this video. It's interesting to understand how the question can be contextualized using the history to query the vectorstore; however, is there really any benefit to using the history in the qa_prompt? Isn't it enough to just have the context and the question? What do you think?
@eric_vaillancourt a month ago
The history is important when you ask a follow-up question.
@BertrandGoetzmann a month ago
@@eric_vaillancourt I agree. I assumed the input injected into the qa_prompt came from the reformulated question, but I suppose that's not the case.
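For anyone following this thread: in LangChain's create_retrieval_chain setup, the reformulated question is used only to query the vectorstore; the answering prompt still receives the original input plus the chat history. A minimal sketch, assuming an llm and retriever already exist:

```python
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# Prompt used ONLY to rewrite the question before querying the vectorstore.
contextualize_prompt = ChatPromptTemplate.from_messages([
    ("system", "Given the chat history and the latest question, rephrase the "
               "question so it is self-contained."),
    MessagesPlaceholder("chat_history"),
    ("human", "{input}"),
])

# Prompt that answers; it receives the ORIGINAL input and the history,
# not the reformulated question.
qa_prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using the retrieved context:\n\n{context}"),
    MessagesPlaceholder("chat_history"),
    ("human", "{input}"),
])

history_aware_retriever = create_history_aware_retriever(llm, retriever, contextualize_prompt)
rag_chain = create_retrieval_chain(
    history_aware_retriever,
    create_stuff_documents_chain(llm, qa_prompt),
)
# rag_chain.invoke({"input": "...", "chat_history": [...]})
```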
@sangram7153 a month ago
Which approach is more reliable: this one, or LangChain's conversation memories like ConversationBufferMemory, ConversationBufferWindowMemory, ConversationSummaryMemory, and ConversationSummaryBufferMemory?
@eric_vaillancourt a month ago
It all depends on your needs. In my experience, summarizing is a good technique.
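A minimal sketch of the summarizing approach with LangChain's ConversationSummaryMemory (the model name here is just an example):

```python
from langchain.memory import ConversationSummaryMemory
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # any chat model works
memory = ConversationSummaryMemory(llm=llm)

# Each turn is folded into a running LLM-written summary instead of being
# stored verbatim, so the prompt stays small as the conversation grows.
memory.save_context({"input": "Hi, I'm building a RAG app."},
                    {"output": "Great! What stack are you using?"})
print(memory.load_memory_variables({}))  # {'history': '<summary so far>'}
```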
@sangram7153 a month ago
Have some queries. Let's say we ask 100+ questions; will that hit the LLM's context limit? Are you passing the chat history in the prompt to the LLM?
@eric_vaillancourt a month ago
At some point, you will have to do some memory management, like summarizing the conversation.
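One way to do that trimming, sketched under the assumption of an OpenAI chat model and a made-up history, is langchain_core's trim_messages, which keeps only the most recent turns that fit a token budget:

```python
from langchain_core.messages import (AIMessage, HumanMessage, SystemMessage,
                                     trim_messages)
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

history = [
    SystemMessage("You answer questions about the uploaded documents."),
    HumanMessage("What is a vectorstore?"),
    AIMessage("A database of embeddings used for similarity search."),
    # ...imagine 100+ more turns here...
]

# Keep the most recent messages within the token budget, preserving the
# system message and ensuring the trimmed history starts on a human turn.
trimmed = trim_messages(
    history,
    max_tokens=1000,
    strategy="last",
    token_counter=llm,
    include_system=True,
    start_on="human",
)
```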
@lesptitsoiseaux a month ago
Terrific!
@sangram7153 a month ago
How many conversation turns are passed in the prompt each time? Can we change that number? How?
@eric_vaillancourt a month ago
You decide. It also depends on the context limit of the LLM.
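If the history is managed with LangChain's memory classes, ConversationBufferWindowMemory exposes that number directly as the k parameter; a minimal sketch:

```python
from langchain.memory import ConversationBufferWindowMemory

# Keep only the last k human/AI exchanges in the prompt; older turns drop off.
memory = ConversationBufferWindowMemory(k=5)

memory.save_context({"input": "First question"}, {"output": "First answer"})
print(memory.load_memory_variables({}))  # at most the 5 most recent exchanges
```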