LangChain - Advanced RAG Techniques for better Retrieval Performance

Views: 34,905

Coding Crash Courses


Comments: 58
@codingcrashcourses8533 11 months ago
Many requested a follow-up video with an example - Two-Stage Retrieval with Cross-Encoders: kzbin.info/www/bejne/aajCdWSCZatgq6c
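For readers who just want the gist of the linked follow-up: a minimal two-stage retrieval sketch, assuming `retriever` is any LangChain retriever built earlier and that the sentence-transformers package is installed (the cross-encoder model name is only an example).

    from sentence_transformers import CrossEncoder

    query = "What is self-querying retrieval?"

    # Stage 1: pull a broad candidate set from the vector store retriever
    candidates = retriever.get_relevant_documents(query)

    # Stage 2: re-rank the candidates with a cross-encoder that scores (query, passage) pairs
    cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    scores = cross_encoder.predict([(query, doc.page_content) for doc in candidates])
    reranked = [doc for _, doc in sorted(zip(scores, candidates), key=lambda pair: pair[0], reverse=True)]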
@santasalo86 6 months ago
Nice work! A few new LangChain methods I was not aware of :)
@ultrainstinct6715 4 months ago
Very informative content. Thank you so much for sharing.
@say.xy_ 1 year ago
Already love your content ❤ Would love to see you make a Production-Ready Chatbot Pt. 2 along with a deployment part. Thank you for producing quality content for free.
@codingcrashcourses8533 1 year ago
Thank you! I am currently working on a Udemy course which explains how to deploy a production-grade chatbot on Microsoft Azure. It's not free, but it only costs a few bucks 🙂. I will release it in January. But of course I will continue to make videos on YouTube which are completely free.
@Peter-cd9rp 11 months ago
@@codingcrashcourses8533 Very cool. Where is it? :D
@levius_24 10 months ago
Fantastic video! :D Quick question: Do you know how it's possible to create a local vector database that's queried via code, so the database doesn't get initialised each time the script is run? Would really appreciate your help!
@codingcrashcourses8533 10 months ago
You just have to use the correct constructor for that database class. Methods like from_documents are just helper functions to make that easier. Not sure if I understood your question correctly, though.
@levius_24 10 months ago
Yeah, that pretty much answered my question, thanks a lot! Do you know which function I can use to create a local database that can also be passed to the SelfQueryRetriever.from_llm() constructor? @@codingcrashcourses8533
@RajaReivan 2 months ago
Have you figured out a solution?
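For anyone landing on this thread: a minimal sketch of the constructor approach described above, assuming `docs` and `metadata_field_info` were created earlier and that the chromadb and lark packages are installed (the directory name is only an example).

    from langchain.chat_models import ChatOpenAI
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.retrievers.self_query.base import SelfQueryRetriever
    from langchain.vectorstores import Chroma

    embeddings = OpenAIEmbeddings()

    # First run: build the index once and persist it to disk
    db = Chroma.from_documents(docs, embeddings, persist_directory="./chroma_db")
    db.persist()  # newer Chroma versions persist automatically

    # Later runs: load the persisted index instead of re-embedding everything
    db = Chroma(persist_directory="./chroma_db", embedding_function=embeddings)

    # The loaded store can be passed to SelfQueryRetriever like any other vector store
    retriever = SelfQueryRetriever.from_llm(
        llm=ChatOpenAI(temperature=0),
        vectorstore=db,
        document_contents="Short description of what the documents contain",
        metadata_field_info=metadata_field_info,
    )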
@StyrmirSaevarsson 11 months ago
Thank you so much for this tutorial! It is exactly the stuff I was looking for!
@codingcrashcourses8533 11 months ago
Great to hear that. Thanks for your comment
@wylhias 8 months ago
Great useful content, with clear explanation. 👍
@moonly3781 10 months ago
Thank you for the amazing tutorial! I was wondering: instead of using ChatOpenAI, how can I use a Llama 2 model locally? Specifically, I couldn't find any implementation, for example, for contextual compression, where you pass compressor = LLMChainExtractor.from_llm(llm) with the ChatOpenAI llm. How can I achieve this locally with Llama 2? My use case involves private documents, so I'm looking for solutions using open-source LLMs.
@codingcrashcourses8533 10 months ago
Sorry, I only use the OpenAI models due to my old computer. Can't really help you with that.
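For readers with the same question, one possible (untested) route: LLMChainExtractor accepts any LangChain-compatible LLM, so a locally loaded model can replace ChatOpenAI. The sketch below uses llama-cpp-python with an illustrative model path and assumes `retriever` is any retriever built earlier.

    from langchain.llms import LlamaCpp
    from langchain.retrievers import ContextualCompressionRetriever
    from langchain.retrievers.document_compressors import LLMChainExtractor

    # Local Llama 2 loaded through llama-cpp-python (model path is illustrative)
    llm = LlamaCpp(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=4096, temperature=0)

    compressor = LLMChainExtractor.from_llm(llm)
    compression_retriever = ContextualCompressionRetriever(
        base_compressor=compressor,
        base_retriever=retriever,  # any existing retriever over the private documents
    )
    compressed_docs = compression_retriever.get_relevant_documents("your question about the private documents")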
@gangs0846 1 year ago
Absolutely fantastic
@codingcrashcourses8533 1 year ago
Thank you so much :)
@saurabhjain507 1 year ago
Nice video. Can you please create a video on evaluation of RAG? I think a lot of people would be interested in this.
@codingcrashcourses8533 1 year ago
Thank you! That kind of video is currently not planned, since it's actually quite expensive to evaluate RAG output, and designing that experiment is probably something not many people would watch on YouTube. In addition, I am not really an expert on that topic. In my company our data scientists are currently working on this^^
@prateek_alive 11 months ago
@@codingcrashcourses8533 What would be the right technique for evaluating a RAG pipeline? Could you share your thoughts in the chat?
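One common option (a generic sketch, not the creator's recommendation) is LLM-as-judge grading with LangChain's QAEvalChain; libraries such as RAGAS take this further with faithfulness and answer-relevancy metrics. The example data below is hand-written for illustration.

    from langchain.chat_models import ChatOpenAI
    from langchain.evaluation.qa import QAEvalChain

    examples = [{"query": "What does the warranty cover?", "answer": "Parts and labour for two years."}]
    predictions = [{"result": "The warranty covers parts and labour for a period of two years."}]

    eval_chain = QAEvalChain.from_llm(ChatOpenAI(model="gpt-4", temperature=0))
    graded = eval_chain.evaluate(
        examples, predictions,
        question_key="query", answer_key="answer", prediction_key="result",
    )
    print(graded)  # one CORRECT/INCORRECT style grade per example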
@newcooldiscoveries5711 11 months ago
Excellent information!! Thank you. Liked and Subscribed.
@codingcrashcourses8533 11 months ago
Nice! I will release a follow-up video with a practical example on Monday ;-)
@arslanabid2245 3 months ago
Hi Sir! I want to know which LangChain classes we should use (in production) to extract both structured and unstructured data from a PDF.
@codingcrashcourses8533 3 months ago
I wouldn't use any of them, to be honest. For structured data I use LLM-based (semantic) splitters, and for PDFs the unstructured package or a service from Microsoft Azure :)
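A small sketch of the unstructured route mentioned above (the file name is an example; hi-res table extraction may need extra system dependencies such as poppler):

    from unstructured.partition.pdf import partition_pdf

    # Split the PDF into typed elements (Title, NarrativeText, Table, ...)
    elements = partition_pdf(filename="report.pdf", infer_table_structure=True)

    tables = [el for el in elements if el.category == "Table"]
    body_text = "\n".join(el.text for el in elements if el.category == "NarrativeText")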
@arslanabid2245 3 months ago
@@codingcrashcourses8533 Interesting! Thanks
@theindianrover2007 11 months ago
Thanks for the video. What are the x and y dimensions in the scatter plot (5:19)?
@codingcrashcourses8533 11 months ago
They are just the two axes of the plot.
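For context, such scatter plots are usually a 2D projection of the high-dimensional embedding vectors; a generic sketch (not necessarily the exact method used in the video), with random data standing in for real embeddings:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.decomposition import PCA

    embeddings = np.random.rand(50, 1536)  # stand-in for real document embedding vectors

    coords = PCA(n_components=2).fit_transform(embeddings)
    plt.scatter(coords[:, 0], coords[:, 1])
    plt.xlabel("component 1")
    plt.ylabel("component 2")
    plt.show()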
@danielbusquets3282 7 months ago
Liked and subscribed. Spot on!
@Davi-do8iz 3 months ago
Awesome! Very useful.
@ghazouaniahmed766 9 months ago
Thank you. Can you handle the problem of retrieval when we ask a question outside the RAG context, or a greeting, for example?
@codingcrashcourses8533 9 months ago
You may check NVIDIA's NeMo Guardrails.
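Besides guardrails frameworks, a much simpler (illustrative, untested) pattern is to route the question through a small classification prompt before running retrieval at all; the prompt wording and scope description below are assumptions.

    from langchain.chat_models import ChatOpenAI

    router_llm = ChatOpenAI(temperature=0)

    def is_in_scope(question: str) -> bool:
        # Ask the model whether the question is about the indexed documents at all
        prompt = (
            "Answer only YES or NO. Is the following question about the product documentation "
            f"(as opposed to a greeting or small talk)?\nQuestion: {question}"
        )
        return router_llm.predict(prompt).strip().upper().startswith("YES")

    if not is_in_scope("Hey, how are you doing?"):
        print("Hi! I can only answer questions about the documents.")  # skip retrieval entirely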
@sivajanumm 1 year ago
Thanks for the great video on this topic. Can you also post some videos related to LoRA with any LLM of your choice?
@karthikb.s.k.4486 1 year ago
Nice tutorial. May I know the theme used for Visual Studio Code, please?
@codingcrashcourses8533 1 year ago
Material Theme dark :)
@karthikb.s.k.4486 1 year ago
@@codingcrashcourses8533 Link for the theme please, as I see a lot of Material themes in the marketplace extensions.
@rafaykhattak4470 5 months ago
Can we combine all of them?
@codingcrashcourses8533 5 months ago
Yes, but you probably should not, since latency is also a key part of an app.
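If someone does want to combine retrievers despite the latency cost, LangChain's EnsembleRetriever is one way to do it; a sketch assuming `vector_retriever` and `bm25_retriever` were built earlier.

    from langchain.retrievers import EnsembleRetriever

    # Results from both retrievers are fused via weighted reciprocal rank fusion;
    # every extra retrieval stage adds latency, so weigh that trade-off
    ensemble = EnsembleRetriever(
        retrievers=[vector_retriever, bm25_retriever],
        weights=[0.6, 0.4],
    )
    docs = ensemble.get_relevant_documents("your question")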
@Chevignay 1 year ago
Thank you so much this is really good stuff
@codingcrashcourses8533 1 year ago
Thanks for your comment :)
@Chevignay 1 year ago
You're welcome, I just bought your course actually 🙂 @@codingcrashcourses8533
@syedhaideralizaidi1828 1 year ago
Thank you so much for making this video! You create valuable content. I just have one question. I'm currently utilizing the Azure Search Service, and I'm curious if it's feasible to integrate all the retrievers. I've attempted to use LangChain with it, but my options seem limited to searching with specific parameters and filters. Unfortunately, there's not a lot of information available on how to effectively use these retrievers in conjunction with the Azure Search Service.
@codingcrashcourses8533 1 year ago
I tried ACS before and also was not too happy with it. My biggest con is that ACS does not support the indexing API. I prefer Postgres/PgVector :)
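For anyone curious about the Postgres/PGVector route mentioned here, a minimal sketch; the connection string and collection name are illustrative, `docs` is assumed to hold your split documents, and the pgvector extension must be enabled in the database.

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores.pgvector import PGVector

    CONNECTION_STRING = "postgresql+psycopg2://user:password@localhost:5432/vectordb"

    db = PGVector.from_documents(
        documents=docs,
        embedding=OpenAIEmbeddings(),
        collection_name="rag_docs",
        connection_string=CONNECTION_STRING,
    )
    retriever = db.as_retriever()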
@whitedeviljr9351 11 months ago
PDFInfoNotInstalledError: Unable to get page count. Is poppler installed and in PATH?
@codingcrashcourses8533 11 months ago
Well, is it installed?
@yazanrisheh5127 1 year ago
I'm a beginner here and I've been using LangChain from your videos. For advanced RAG, is it just a matter of changing the search type in code like mine below, using the types you showed in the video instead of "similarity", while everything else stays the same (ConversationalRetrievalChain, prompt, memory, etc.)? retriever = knowledge_base.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.8}) Also, which approach would you recommend for retrieving from large documents? I need to do RAG over 80 PDF documents and have been struggling with accuracy. Lastly, in your OpenAI embeddings, why are you using chunk_size=1 when the default is chunk_size=1000? Can you explain this part as well? Thank you in advance.
@codingcrashcourses8533 1 year ago
The advanced techniques also work with memory etc., but with the high-level chains I showed it may become a little bit difficult and "hacky". In general I don't set any scores, but just retrieve the best documents; I also don't have an answer for setting a good threshold. In general I recommend using the get_relevant_documents method on the retriever interface for getting documents. I set the chunk_size to 1 due to rate-limit errors I often experienced; with higher chunk sizes it just makes too many requests at once, it seems.
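To make that reply concrete, a small sketch of both points, assuming `knowledge_base` is the vector store from the question above:

    from langchain.embeddings import OpenAIEmbeddings

    # chunk_size here is the embedding request batch size, not a text-splitting setting;
    # lowering it can help avoid rate-limit errors at the cost of more (smaller) requests
    embeddings = OpenAIEmbeddings(chunk_size=1)

    # Simply retrieve the top-k documents instead of tuning a similarity score threshold
    retriever = knowledge_base.as_retriever(search_kwargs={"k": 4})
    docs = retriever.get_relevant_documents("your question")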
@akshaykumarmishra2129 1 year ago
Hi, in RetrievalQA from LangChain, we have a retriever that retrieves docs from a vector DB and provides context to the LLM. Let's say I'm using GPT-3.5, whose max tokens is 4,096... how do I handle a huge context to be sent to it? Any suggestions would be appreciated.
@codingcrashcourses8533 1 year ago
GPT-3.5 Turbo allows 16k tokens I believe, and GPT-4 Turbo 128k. If you really need that large a context window, my go-to approach at the end of 2023 would be to use models with larger context windows. There are also map-reduce methods to reduce the context, but these make many requests before sending a final one.
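A sketch of the map-reduce option mentioned above (each retrieved chunk is processed separately, then the intermediate answers are combined); `retriever` is assumed to already exist.

    from langchain.chains import RetrievalQA
    from langchain.chat_models import ChatOpenAI

    qa = RetrievalQA.from_chain_type(
        llm=ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0),
        chain_type="map_reduce",  # trades one big prompt for several smaller requests
        retriever=retriever,
    )
    answer = qa.run("your question over the large document set")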
@vicvicking1990 6 months ago
Wait, what? I thought FAISS didn't support metadata filters? Weird that the TimeWeighted retriever works with it, no?
@codingcrashcourses8533 6 months ago
I am not too familiar with every change; FAISS is also a work in progress, maybe they added it in some version :)
@vicvicking1990 6 months ago
@@codingcrashcourses8533 In any case, your video is amazing and it is greatly helping me with my internship project. Many thanks, keep up the great work 💪👍
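For reference, recent LangChain versions do accept a metadata filter on FAISS searches (applied in memory over the stored metadata); a small sketch, though behaviour may differ between versions:

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.schema import Document
    from langchain.vectorstores import FAISS

    docs = [
        Document(page_content="Quarterly report", metadata={"year": 2023}),
        Document(page_content="Annual report", metadata={"year": 2022}),
    ]
    db = FAISS.from_documents(docs, OpenAIEmbeddings())

    # The filter is matched against each document's metadata after the vector search
    results = db.similarity_search("report", k=1, filter={"year": 2023})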
@micbab-vg2mu 1 year ago
Thank you for the video :). In your opinion, which retrieval method will give me the most accurate output (cost is not as important in my case)? I work in the pharma industry, where tolerance for LLM mistakes is very low.
@codingcrashcourses8533 1 year ago
I cannot give you a blueprint for that. Just try it out and experiment. You know your data, and there are so many different ways to improve performance. If cost does not matter, the easiest way is to use GPT-4 instead of GPT-3.5. Also try chain-of-thought prompting, and then use one of the techniques I showed in the notebooks. There are so many ways to improve performance :)
@lefetznove3185 5 months ago
Hmm... you forgot to remove your OpenAI API key from the source code!
@codingcrashcourses8533 5 months ago
I always delete these^^
@alex.5801 4 months ago
What is your email for business?
@codingcrashcourses8533 4 months ago
@@alex.5801 datamastery87@gmail.com
LangChain vs. LlamaIndex - What Framework to use for RAG?
16:51
Coding Crash Courses
Views: 19K
LangChain Advanced RAG - Two-Stage Retrieval with Cross Encoder (BERT)
14:21
Coding Crash Courses
Views: 12K
Building long context RAG with RAPTOR from scratch
21:30
LangChain
Views: 35K
LangGraph - SQL Agent - Let an LLM interact with your SQL Database
20:22
Coding Crash Courses
Views: 4.7K
Agentic Framework LangGraph explained in 8 minutes | Beginners Guide
8:04
W.W. AI Adventures
Views: 2.7K
Stanford CS25: V3 I Retrieval Augmented Language Models
1:19:27
Stanford Online
Views: 175K
Advanced RAG 01 - Self Querying Retrieval
12:02
Sam Witteveen
Views: 48K
RAPTOR - Advanced RAG with LangChain
15:43
Coding Crash Courses
Views: 11K
Building Production-Ready RAG Applications: Jerry Liu
18:35
AI Engineer
Views: 338K
The 5 Levels Of Text Splitting For Retrieval
1:09:00
Greg Kamradt
Views: 85K