Many requested a follow-up video with an example - Two-Stage Retrieval with Cross-Encoders: kzbin.info/www/bejne/aajCdWSCZatgq6c
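For anyone curious before watching the linked video: the two-stage idea can be sketched in a few lines. The scoring functions below are toy stand-ins for a real first-stage retriever (e.g. a vector store) and a real cross-encoder (e.g. sentence-transformers' CrossEncoder), just to show the control flow.

```python
# Two-stage retrieval: a cheap first stage returns many candidates,
# then a more expensive cross-encoder re-scores only those candidates.
# Both scorers here are keyword-overlap stand-ins so the sketch runs
# without any models installed.

def first_stage_retrieve(query, corpus, k=20):
    # Stand-in for a bi-encoder / vector-store lookup.
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def cross_encoder_score(query, doc):
    # Stand-in for scoring a (query, document) pair jointly.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def two_stage_retrieve(query, corpus, k_candidates=20, k_final=3):
    candidates = first_stage_retrieve(query, corpus, k=k_candidates)
    reranked = sorted(candidates,
                      key=lambda d: cross_encoder_score(query, d),
                      reverse=True)
    return reranked[:k_final]

corpus = [
    "LangChain supports many retrievers",
    "Cross encoders rerank query document pairs",
    "Bananas are yellow",
]
print(two_stage_retrieve("how do cross encoders rerank documents", corpus, k_final=2))
```

The point of the split: the expensive pairwise scorer only ever sees the k_candidates survivors, not the whole corpus.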
@ultrainstinct6715 · 1 month ago
Very informative content. Thank you so much for sharing.
@say.xy_ · 9 months ago
Already love your content ❤ Would love to see you make Production-Ready Chatbot Pt 2 along with the deployment part. Thank you for producing quality content for free.
@codingcrashcourses8533 · 9 months ago
Thank you! I currently work on a Udemy course, which explains how to deploy a production-grade chatbot on Microsoft Azure. It's not free, but only costs a few bucks 🙂. Will release it in January. But of course I will continue to do videos on YT which are completely free.
@Peter-cd9rp · 8 months ago
@@codingcrashcourses8533 Very cool. Where is it? :D
@santasalo86 · 3 months ago
Nice work! A few new LangChain methods I was not aware of :)
@wylhias · 5 months ago
Great useful content, with clear explanation. 👍
@StyrmirSaevarsson · 8 months ago
Thank you so much for this tutorial! It is exactly the stuff I was looking for!
@codingcrashcourses8533 · 8 months ago
Great to hear that. Thanks for your comment!
@Davi-do8iz · 22 days ago
Awesome! Very useful.
@newcooldiscoveries5711 · 7 months ago
Excellent information!! Thank you. Liked and Subscribed.
@codingcrashcourses8533 · 7 months ago
Nice! Will release a follow-up video with a practical example on Monday ;-)
@gangs0846 · 9 months ago
Absolutely fantastic
@codingcrashcourses8533 · 9 months ago
Thank you so much :)
@quengelbeard · 7 months ago
Fantastic video! :D Quick question: Do you know how it's possible to create a local vector database that's queried via code, so the database doesn't get initialised each time the script is run? Would really appreciate your help!
@codingcrashcourses8533 · 7 months ago
You just have to use the correct constructor for that database class. Methods like from_documents are just helper functions to make that easier. Not sure if I understood your question correctly, though.
@quengelbeard · 7 months ago
@@codingcrashcourses8533 Yeah, that answered my question pretty much, thanks a lot! Do you know which function I can use to create a local database that can also be passed to the SelfQueryRetriever.from_llm() constructor?
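For readers with the same question: the pattern is "build the index once, persist it to disk, reload on later runs". The sketch below uses a plain pickle file as a stand-in store so it runs anywhere; with LangChain's Chroma, the persist_directory argument plays the same role (an assumption based on common usage, check the current docs), and the plain constructor then reloads it without re-embedding.

```python
import os
import pickle

# Persisted local "vector store" in miniature: embed documents on the
# first run, save to disk, and on later runs load instead of rebuilding.
# fake_embed() stands in for a real embedding model.

def fake_embed(text):
    return [float(len(w)) for w in text.split()]

def load_or_build_index(docs, path="index.pkl"):
    if os.path.exists(path):  # later runs: just load, no re-embedding
        with open(path, "rb") as f:
            return pickle.load(f)
    index = {doc: fake_embed(doc) for doc in docs}  # first run: build
    with open(path, "wb") as f:
        pickle.dump(index, f)
    return index

index = load_or_build_index(
    ["hello world", "persisted vector store"], path="demo_index.pkl"
)
print(sorted(index))
```

Whether a reloaded store can feed SelfQueryRetriever.from_llm() should then only depend on the store class, not on how it was constructed.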
@danielbusquets3282 · 4 months ago
Liked and subscribed. Spot on!
@sivajanumm · 9 months ago
Thanks for the great video on this topic. Can you also post some videos on LoRA with any LLM of your choice?
@saurabhjain507 · 9 months ago
Nice video. Can you please create a video on evaluation of RAG? I think a lot of people would be interested in this.
@codingcrashcourses8533 · 9 months ago
Thank you! That kind of video is currently not planned, since it's actually quite expensive to evaluate RAG output, and designing that experiment is probably something not many people would watch on YouTube. In addition, I am not really an expert on that topic. In my company our data scientists currently work on this^^
@prateek_alive · 8 months ago
@@codingcrashcourses8533 What would be the right technique for evaluating a RAG pipeline? Could you share your thoughts in chat?
@moonly3781 · 6 months ago
Thank you for the amazing tutorial! I was wondering: instead of using ChatOpenAI, how can I use a Llama 2 model locally? Specifically, I couldn't find any implementation, for example, for contextual compression, where you pass compressor = LLMChainExtractor.from_llm(llm) with the ChatOpenAI llm. How can I achieve this locally with Llama 2? My use case involves private documents, so I'm looking for solutions using open-source LLMs.
@codingcrashcourses8533 · 6 months ago
Sorry, I only use the OpenAI models due to my old computer. Can't really help you with that.
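For anyone else stuck on this: LLMChainExtractor is conceptually just "ask an LLM to keep only the query-relevant parts of each retrieved document", so any LangChain-wrapped local model should slot in where the ChatOpenAI llm goes (an assumption, check the current docs for local model wrappers). A runnable sketch of the idea, with a simple keyword filter standing in for the LLM call:

```python
# Contextual compression, conceptually: keep only the parts of a retrieved
# document that are relevant to the query. A keyword filter stands in for
# the LLM call here so the sketch is self-contained; in LangChain the LLM
# (local or hosted) would make this relevance decision instead.

def compress_document(query, document):
    query_terms = set(query.lower().split())
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    # Keep a sentence if it shares at least one term with the query.
    kept = [s for s in sentences if query_terms & set(s.lower().split())]
    return ". ".join(kept)

doc = ("The warranty lasts two years. Our office is in Berlin. "
       "Warranty claims need a receipt.")
print(compress_document("warranty period", doc))
```

The swap to a local model then only changes who makes the "keep or drop" decision, not the surrounding retrieval flow.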
@syedhaideralizaidi1828 · 9 months ago
Thank you so much for making this video! You create valuable content. I just have one question. I'm currently utilizing the Azure Search Service, and I'm curious if it's feasible to integrate all the retrievers. I've attempted to use LangChain with it, but my options seem limited to searching with specific parameters and filters. Unfortunately, there's not a lot of information available on how to effectively use these retrievers in conjunction with the Azure Search Service.
@codingcrashcourses8533 · 9 months ago
I tried ACS before and also was not too happy with it. My biggest con is that ACS does not support the indexing API. I prefer Postgres/pgvector :)
@Chevignay · 8 months ago
Thank you so much this is really good stuff
@codingcrashcourses8533 · 8 months ago
Thanks for your comment :)
@Chevignay · 8 months ago
@@codingcrashcourses8533 You're welcome, I just bought your course actually 🙂
@yazanrisheh5127 · 9 months ago
I'm a beginner here and I've been using LangChain from your videos. Is advanced RAG just doing something like my code below, where instead of using search type "similarity" I use the types you showed in the video, while everything else stays the same (ConversationalRetrievalChain, prompt, memory, etc.)? retriever = knowledge_base.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.8}) Also, which approach would you recommend for large documents? I need to do RAG over 80 PDF documents and have been struggling with accuracy. Lastly, in your OpenAI embeddings, why are you using chunk_size=1 when the default is chunk_size=1000? Can you explain this part as well? Thank you in advance.
@codingcrashcourses8533 · 9 months ago
The advanced techniques also work with memory etc., but with the high-level chains I showed it may become a little difficult and "hacky". In general I don't set any score thresholds, but just retrieve the best documents; I also don't have an answer for what a good threshold would be. In general I recommend using the get_relevant_documents method of the retriever interface for getting documents. I set the chunk_size to 1 due to rate-limit errors I often experienced; with higher chunk sizes it just makes too many requests at once, it seems.
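To make the score-threshold discussion concrete, this is roughly what search_type="similarity_score_threshold" does under the hood: score every chunk against the query and keep only those above the threshold, instead of always returning the k best. The scores below are hard-coded stand-ins for real cosine similarities.

```python
# Sketch of threshold-based retrieval: filter scored chunks by a minimum
# similarity, then return them best-first. In a real vector store the
# (doc, score) pairs would come from an embedding similarity search.

def retrieve_with_threshold(scored_docs, score_threshold=0.8):
    # scored_docs: list of (document, similarity_score) pairs
    kept = [(doc, s) for doc, s in scored_docs if s >= score_threshold]
    return [doc for doc, _ in sorted(kept, key=lambda p: p[1], reverse=True)]

scored = [
    ("chunk about pricing", 0.91),
    ("unrelated chunk", 0.42),
    ("chunk about refunds", 0.83),
]
print(retrieve_with_threshold(scored, 0.8))
# -> ['chunk about pricing', 'chunk about refunds']
```

This also shows why picking the threshold is hard: 0.8 silently drops everything on a query whose best match scores 0.79.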
@ghazouaniahmed766 · 5 months ago
Thank you! Can you handle the problem of retrieval when we ask a question outside the context of the RAG data, or a greeting, for example?
@codingcrashcourses8533 · 5 months ago
You may check NVIDIA's NeMo Guardrails.
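Short of a full guardrails framework, one common lightweight approach (an illustration, not something from the video): if the best retrieval score is below a threshold, return a canned answer instead of calling the LLM at all. The retriever below is a stub, and the threshold would need tuning on real data.

```python
# Out-of-scope / greeting fallback: only answer from the documents when
# retrieval is confident enough; otherwise short-circuit with a canned reply.

def answer(query, retrieve_scored, min_score=0.75):
    results = retrieve_scored(query)  # [(doc, score), ...]
    if not results or max(s for _, s in results) < min_score:
        return "Sorry, I can only answer questions about the indexed documents."
    best = max(results, key=lambda p: p[1])[0]
    return f"Based on the docs: {best}"

# Stub retriever: high score only for on-topic queries.
stub = lambda q: ([("refund policy text", 0.9)] if "refund" in q
                  else [("refund policy text", 0.1)])
print(answer("what is the refund policy?", stub))
print(answer("hi there!", stub))
```

Greetings then never reach the LLM, which also saves a request.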
@micbab-vg2mu · 9 months ago
Thank you for the video :). In your opinion, which retrieval method will give me the most accurate output (cost is not as important in my case)? I work in the pharma industry, where tolerance for LLM mistakes is very low.
@codingcrashcourses8533 · 9 months ago
I can't give you a blueprint for that. Just try it out and experiment: you know your data, and there are so many different ways to improve performance. If cost does not matter, the easiest way is to use GPT-4 instead of GPT-3.5. Also try chain-of-thought prompting, and then use one of the techniques I showed in the notebooks. There are so many ways to improve performance :)
@vicvicking1990 · 2 months ago
Wait, what? I thought FAISS didn't support metadata filters. Weird that the time-weighted retriever works with it, no?
@codingcrashcourses8533 · 2 months ago
I am not too familiar with each change; FAISS support is also a work in progress, maybe they added it in some version :)
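For context on the question above: the time-weighted retriever largely works on top of the vector store rather than through its metadata filters. It combines the semantic score with a recency bonus, so FAISS only has to supply similarities. A self-contained sketch of that scoring formula (as documented for LangChain's TimeWeightedVectorStoreRetriever; inputs are stubbed):

```python
import datetime

# Time-weighted retrieval score:
#   combined = semantic_similarity + (1 - decay_rate) ** hours_since_access
# so recently accessed documents get a boost at equal similarity.

def time_weighted_score(similarity, last_accessed, now, decay_rate=0.01):
    hours = (now - last_accessed).total_seconds() / 3600
    return similarity + (1.0 - decay_rate) ** hours

now = datetime.datetime(2024, 1, 2, 12, 0)
fresh = time_weighted_score(0.5, datetime.datetime(2024, 1, 2, 11, 0), now)
stale = time_weighted_score(0.5, datetime.datetime(2023, 12, 1, 12, 0), now)
print(fresh > stale)  # recent access wins at equal similarity
```

Since the decay term is computed in Python, no metadata filtering inside FAISS is needed for this to work.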
@vicvicking1990 · 2 months ago
@@codingcrashcourses8533 In any case, your video is amazing and you are greatly helping me with my internship project. Many thanks, keep up the great work 💪👍
@theindianrover2007 · 8 months ago
Thanks for the video! What are the x and y dimensions in the scatter plot (5:19)?
@codingcrashcourses8533 · 8 months ago
The axes of the plot :)
@karthikb.s.k.4486 · 9 months ago
Nice tutorial. May I know the theme used for Visual Studio Code, please?
@codingcrashcourses8533 · 9 months ago
Material Theme Dark :)
@karthikb.s.k.4486 · 9 months ago
@@codingcrashcourses8533 Link for the theme, please, as I see a lot of Material themes in the marketplace extensions.
@rafaykhattak4470 · 2 months ago
Can we combine all of them?
@codingcrashcourses8533 · 2 months ago
Yes, but you probably should not, since latency is also a key part of an app.
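If you do combine retrievers, the usual approach is rank fusion over their results rather than chaining extra calls; LangChain's EnsembleRetriever, for instance, uses Reciprocal Rank Fusion. A minimal sketch with the individual retriever outputs stubbed:

```python
# Reciprocal Rank Fusion: merge several ranked result lists by giving each
# document a score of sum(1 / (k + rank)) across the lists it appears in.
# Documents ranked well by multiple retrievers float to the top.

def reciprocal_rank_fusion(ranked_lists, k=60):
    scores = {}
    for ranked in ranked_lists:
        for rank, doc in enumerate(ranked):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Stand-ins for e.g. a BM25 retriever and a vector retriever.
bm25_results = ["doc_a", "doc_b", "doc_c"]
vector_results = ["doc_b", "doc_d", "doc_a"]
print(reciprocal_rank_fusion([bm25_results, vector_results]))
# -> ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

The latency point stands either way: the component retrievers still all run, so fusing many of them multiplies retrieval time.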
@whitedeviljr9351 · 8 months ago
PDFInfoNotInstalledError: Unable to get page count. Is poppler installed and in PATH?
@codingcrashcourses8533 · 8 months ago
Well, is it installed?
@akshaykumarmishra2129 · 8 months ago
Hi, in RetrievalQA from LangChain, we have a retriever that retrieves docs from a vector DB and provides context to the LLM. Let's say I'm using GPT-3.5, whose max context is 4096 tokens: how do I handle a huge context that needs to be sent to it? Any suggestions will be appreciated.
@codingcrashcourses8533 · 8 months ago
GPT-3.5 Turbo allows 16k tokens I guess, GPT-4 Turbo 128k. If you really need that large a context window, my go-to approach at the end of 2023 would be to use models with larger context windows. There are also map-reduce methods to reduce the context, but these also make many requests before sending a final one.
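The map-reduce idea mentioned above, in miniature: run the LLM over each chunk separately (map), then combine the partial answers in one final call (reduce), so no single prompt exceeds the context window. The summarize function below is a toy stand-in for an LLM call that just truncates.

```python
# Map-reduce over chunks that together exceed the context window.
# summarize() stands in for an LLM call with a hard input limit.

def summarize(text, limit=40):
    return text[:limit]  # toy "LLM": keep at most `limit` characters

def map_reduce(chunks, limit=40):
    # Map: condense each chunk independently, within the limit.
    partials = [summarize(c, limit) for c in chunks]
    # Reduce: one final call over the concatenated partial results.
    return summarize(" ".join(partials), limit * 2)

chunks = ["first long section " * 5, "second long section " * 5]
result = map_reduce(chunks)
print(len(result))
```

The trade-off the reply mentions is visible here: N chunks cost N map requests plus one reduce request.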
@lefetznove3185 · 2 months ago
Hmm... you forgot to remove your OpenAI API key from the source code!