Many requested a follow-up video with an example - Two-Stage Retrieval with Cross-Encoders: kzbin.info/www/bejne/aajCdWSCZatgq6c
@santasalo86 · 6 months ago
Nice work! A few LangChain methods I was not aware of :)
@ultrainstinct6715 · 4 months ago
Very informative content. Thank you so much for sharing.
@say.xy_ · a year ago
Already love your content ❤ Would love to see you make a Production-Ready Chatbot Pt. 2 along with the deployment part. Thank you for producing quality content for free.
@codingcrashcourses8533 · a year ago
Thank you! I'm currently working on a Udemy course which explains how to deploy a production-grade chatbot on Microsoft Azure. It's not free, but only costs a few bucks 🙂. Will release it in January. But of course I will continue to make videos on YouTube which are completely free.
@Peter-cd9rp · 11 months ago
@codingcrashcourses8533 Very cool. Where is it? :D
@levius_24 · 10 months ago
Fantastic video! :D Quick question: Do you know how it's possible to create a local vector database that's queried via code, so the database doesn't get initialised each time the script is run? Would really appreciate your help!
@codingcrashcourses8533 · 10 months ago
You just have to use the correct constructor for that database class. Methods like from_documents are just helper functions to make that easier. Not sure if I understood your question correctly, though.
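To illustrate the reply above, here is an editor's toy sketch in plain standard-library Python (no LangChain; fake_embed, build_index, and load_index are made-up stand-ins): a from_documents-style helper embeds *and* indexes, while a plain constructor merely reopens what is already persisted, so nothing is re-initialised on later runs.

```python
import json
import os
import tempfile

def fake_embed(text: str) -> list:
    # Deterministic toy embedding: character-code averages over 3-char windows.
    return [sum(ord(c) for c in text[i:i + 3]) / 3 for i in range(0, 9, 3)]

def build_index(docs: list, path: str) -> None:
    """Analog of from_documents(): embed every document and persist the index."""
    index = [{"text": d, "vector": fake_embed(d)} for d in docs]
    with open(path, "w") as f:
        json.dump(index, f)

def load_index(path: str) -> list:
    """Analog of the plain constructor: reopen the index without re-embedding."""
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "index.json")
build_index(["hello world", "advanced retrieval"], path)  # first run only
index = load_index(path)                                  # every later run
```

The same split applies to real vector stores: run the expensive embed-and-index step once, then reopen with the lightweight constructor on every subsequent script run.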
@levius_24 · 10 months ago
@codingcrashcourses8533 Yeah, that answered my question pretty much, thanks a lot! Do you know which function I can use to create a local database that can also be passed to the SelfQueryRetriever.from_llm() constructor?
@RajaReivan · 2 months ago
Have you figured out the solution?
@StyrmirSaevarsson · 11 months ago
Thank you so much for this tutorial! It is exactly the stuff I was looking for!
@codingcrashcourses8533 · 11 months ago
Great to hear that. Thanks for your comment!
@wylhias · 8 months ago
Great useful content, with clear explanation. 👍
@moonly3781 · 10 months ago
Thank you for the amazing tutorial! I was wondering, instead of using ChatOpenAI, how can I utilize a Llama 2 model locally? Specifically, I couldn't find any implementation, for example, for contextual compression, where you pass compressor = LLMChainExtractor.from_llm(llm) with the ChatOpenAI llm. How can I achieve this locally with Llama 2? My use case involves private documents, so I'm looking for solutions using open-source LLMs.
@codingcrashcourses8533 · 10 months ago
Sorry, I only use the OpenAI models due to my old computer. Can't really help you with that.
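For readers with the same question: a toy, stdlib-only editor's sketch of what contextual compression does conceptually, with no LLM at all. A real LLMChainExtractor asks an LLM to extract the relevant parts of each retrieved document; from_llm() should accept any LangChain-compatible LLM object (including local model wrappers), but that is not something shown in the video, so the compress function below is purely illustrative.

```python
def compress(docs: list, query: str) -> list:
    """Keep only the sentences of each retrieved doc that share a word
    with the query. A crude stand-in for an LLM-based extractor."""
    terms = set(query.lower().split())
    compressed = []
    for doc in docs:
        kept = [s.strip() for s in doc.split(".")
                if terms & set(s.lower().split())]
        if kept:
            compressed.append(". ".join(kept))
    return compressed

docs = ["Paris is in France. Bananas are yellow.",
        "The Eiffel Tower is in Paris. It rains often."]
compressed = compress(docs, "Where is Paris")
```

The shape is the important part: retrieval returns whole chunks, and a compression step then shrinks each chunk to the query-relevant portion before it reaches the prompt.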
@gangs0846 · a year ago
Absolutely fantastic
@codingcrashcourses8533 · a year ago
Thank you so much :)
@saurabhjain507 · a year ago
Nice video. Can you please create a video on evaluation of RAG? I think a lot of people would be interested in this.
@codingcrashcourses8533 · a year ago
Thank you! That kind of video is currently not planned, since it's actually quite expensive to evaluate RAG output, and designing that experiment is probably something not many people would watch on YouTube. In addition, I am not really an expert on that topic. In my company our data scientists currently work on this ^^
@prateek_alive · 11 months ago
@codingcrashcourses8533 What would be the right technique for evaluating a RAG pipeline? Could you share your thoughts in the chat?
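A common starting point, offered here as an editor's note rather than an answer from the video: before evaluating generated answers (which needs an LLM judge or human raters), evaluate retrieval alone against hand-labelled question-to-document pairs using hit rate and mean reciprocal rank. A minimal sketch:

```python
def hit_rate_and_mrr(results: list, relevant: list) -> tuple:
    """results[i] = ranked doc ids retrieved for question i;
    relevant[i] = the single gold doc id for question i.
    Returns (hit rate, mean reciprocal rank)."""
    hits, rr = 0, 0.0
    for ranked, gold in zip(results, relevant):
        if gold in ranked:
            hits += 1
            rr += 1.0 / (ranked.index(gold) + 1)  # rank is 1-based
    n = len(relevant)
    return hits / n, rr / n

# Two questions: gold doc found at rank 2 for the first, missed for the second.
hr, mrr = hit_rate_and_mrr([["a", "b"], ["c", "d"]], ["b", "x"])
```

These two numbers make it cheap to compare retriever variants (MMR, compression, ensembles) on the same labelled set before spending money on end-to-end answer evaluation.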
@newcooldiscoveries5711 · 11 months ago
Excellent information!! Thank you. Liked and Subscribed.
@codingcrashcourses8533 · 11 months ago
Nice! Will release a follow-up video with a practical example on Monday ;-)
@arslanabid2245 · 3 months ago
Hi Sir! I want to know which LangChain classes we should use (in production) to extract both structured and unstructured data from a PDF.
@codingcrashcourses8533 · 3 months ago
I wouldn't use any of them, to be honest. For structured data I use LLM-based (semantic) splitters, and for PDFs the unstructured package or a service from Microsoft Azure :)
@arslanabid2245 · 3 months ago
@codingcrashcourses8533 Interesting! Thanks
@theindianrover2007 · 11 months ago
Thanks for the video. What are the x and y dimensions in the scatter plot (5:19)?
@codingcrashcourses8533 · 11 months ago
They are just the axes of the plot.
@danielbusquets3282 · 7 months ago
Liked and subscribed. Spot on!
@Davi-do8iz · 3 months ago
Awesome! Very useful
@ghazouaniahmed766 · 9 months ago
Thank you. Can you handle the problem of retrieval when we ask a question outside the RAG context, or a greeting, for example?
@codingcrashcourses8533 · 9 months ago
You may check NVIDIA's NeMo Guardrails.
@sivajanumm · a year ago
Thanks for the great video on this topic. Can you also post some videos related to LoRA with any LLM of your choice?
@karthikb.s.k.4486 · a year ago
Nice tutorial. May I know the theme used for Visual Studio Code, please?
@codingcrashcourses8533 · a year ago
Material Theme Dark :)
@karthikb.s.k.4486 · a year ago
@codingcrashcourses8533 A link for the theme, please, as I see a lot of Material themes among the marketplace extensions.
@rafaykhattak4470 · 5 months ago
Can we combine all of them?
@codingcrashcourses8533 · 5 months ago
Yes, but you probably should not, since latency is also a key part of an app.
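An editor's sketch of what "combining them" typically means in practice: fusing the ranked lists of several retrievers, for example with Reciprocal Rank Fusion, which is the approach behind LangChain's EnsembleRetriever. Every extra retriever adds a round trip, hence the latency caveat above.

```python
def reciprocal_rank_fusion(rankings: list, k: int = 60) -> list:
    """Fuse several ranked lists of doc ids: each doc scores
    1 / (k + rank) per list it appears in, summed across lists."""
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# "b" ranks highly in both lists, so it wins the fused ranking.
fused = reciprocal_rank_fusion([["a", "b", "c"], ["b", "c", "a"]])
```

The constant k (60 is the value from the original RRF paper) dampens the influence of top ranks so that broad agreement across retrievers beats a single first-place vote.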
@Chevignay · a year ago
Thank you so much, this is really good stuff
@codingcrashcourses8533 · a year ago
Thanks for your comment :)
@Chevignay · a year ago
@codingcrashcourses8533 You're welcome, I just bought your course actually 🙂
@syedhaideralizaidi1828 · a year ago
Thank you so much for making this video! You create valuable content. I just have one question. I'm currently utilizing the Azure Search Service, and I'm curious if it's feasible to integrate all the retrievers. I've attempted to use LangChain with it, but my options seem limited to searching with specific parameters and filters. Unfortunately, there's not a lot of information available on how to effectively use these retrievers in conjunction with the Azure Search Service.
@codingcrashcourses8533 · a year ago
I tried ACS before and also was not too happy with it. My biggest con is that ACS does not support the indexing API. I prefer Postgres/pgvector :)
@whitedeviljr9351 · 11 months ago
PDFInfoNotInstalledError: Unable to get page count. Is poppler installed and in PATH?
@codingcrashcourses8533 · 11 months ago
Well, is it?
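Editor's note for anyone hitting the same error: PDFInfoNotInstalledError comes from pdf2image, which shells out to poppler's pdfinfo binary. A typical fix (package names assumed for Debian/Ubuntu and Homebrew):

```shell
# Debian/Ubuntu:
sudo apt-get install -y poppler-utils
# macOS (Homebrew):
brew install poppler
# Verify the binary is on PATH afterwards:
pdfinfo -v
```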
@yazanrisheh5127 · a year ago
I'm a beginner here and I've been using LangChain from your videos. Is advanced RAG just something like my code below, where instead of using the search type "similarity" I use the types you showed in the video, while everything else stays the same (ConversationalRetrievalChain, prompt, memory, etc.)?

retriever = knowledge_base.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.8})

Also, which retriever would you recommend for large documents? I need to do RAG over 80 PDF documents and have been struggling with accuracy. Lastly, in your OpenAI embeddings, why are you using chunk_size=1 when by default it's chunk_size=1000? Can you explain this part as well? Thank you in advance.
@codingcrashcourses8533 Жыл бұрын
The advanced techniques also work with memory etc., but with the High Level chains I showed I may become a little bit difficult and "hacky". In general I don´t set any scores, but just retrieve the best documents. I also don´t have an answer for setting a good threshold. In general I recommend using the get_documents method with the retriever interface for getting documents. I set the chunk_size to 1 due to rate limit errors I often experienced. With higher chunk sizes it just makes too many requests at once it seems.
@akshaykumarmishra2129 · a year ago
Hi, in RetrievalQA from LangChain we have a retriever that retrieves docs from a vector DB and provides context to the LLM. Let's say I'm using GPT-3.5, whose max tokens is 4096. How do I handle a huge context to be sent to it? Any suggestions will be appreciated.
@codingcrashcourses8533 Жыл бұрын
Gpt-3.5 Turbo allows 32 tokens I guess, gpt-4-turbo 128k. If you really need that large context window, my go-to apporach would be to use models with larger context windows at the end of 2023. There are also map-reduce methode to reduce the context, but these also do many requests before sending a final one.
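The map-reduce idea mentioned in the reply, as an editor's stdlib sketch: "map" a summarization call over each chunk of the oversized context, then "reduce" the partial summaries until the result fits the model's window. Here summarize() is a placeholder stand-in for an actual LLM call, which is why this pattern costs many requests.

```python
def summarize(text: str, limit: int = 40) -> str:
    # Placeholder for an LLM summarization call; just truncates here.
    return text[:limit]

def map_reduce(chunks: list, window: int = 100) -> str:
    """Shrink a list of context chunks down to one string <= window chars."""
    partial = [summarize(c) for c in chunks]   # map step: one call per chunk
    combined = " ".join(partial)
    while len(combined) > window:              # reduce until it fits
        combined = summarize(combined, window)
    return combined

context = map_reduce(["a" * 200, "b" * 200])
```

With a real LLM, each map step would be a proper summary rather than a truncation, but the request pattern (N map calls plus reduce calls plus one final answer call) is exactly the cost the reply warns about.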
@vicvicking1990 · 6 months ago
Wait, what? I thought FAISS didn't support metadata filters. Weird that the TimeWeighted retriever works with it, no?
@codingcrashcourses8533 · 6 months ago
I am not too familiar with every change; FAISS is also a work in progress, maybe they added it in some version :)
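Editor's note on the exchange above: LangChain's TimeWeightedVectorStoreRetriever tracks timestamps in the retriever itself rather than via the vector store's metadata filters, which would explain why it works on top of FAISS. Its documented scoring combines semantic similarity with an exponential recency decay; a minimal sketch of that formula (the example values are invented):

```python
def time_weighted_score(similarity: float, hours_passed: float,
                        decay_rate: float = 0.01) -> float:
    """Score = semantic similarity + (1 - decay_rate) ** hours_passed,
    so recently seen documents get a recency bonus that decays over time."""
    return similarity + (1.0 - decay_rate) ** hours_passed

fresh = time_weighted_score(0.5, hours_passed=1)    # recent, less similar
stale = time_weighted_score(0.9, hours_passed=400)  # older, more similar
```

With the default decay rate, the recency bonus lets the fresh document outrank the older one despite its lower raw similarity; a decay_rate near 1 makes recency dominate, near 0 makes it almost irrelevant.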
@vicvicking1990 · 6 months ago
@codingcrashcourses8533 In any case, your video is amazing and you are greatly helping me with my internship project. Many thanks, keep up the great work 💪👍
@micbab-vg2mu · a year ago
Thank you for the video :). In your opinion, which method of retrieval will give me the most accurate output (cost is not as important in my case)? I work in the pharma industry, and tolerance for LLM mistakes is very low.
@codingcrashcourses8533 · a year ago
I cannot give you a blueprint for that. Just try it out and experiment: you know your data, and there are so many different ways to improve performance. If cost does not matter, the easiest way is to use GPT-4 instead of GPT-3.5. Also try chain-of-thought prompting, and then use one of the techniques I showed in the notebooks :)
@lefetznove3185 · 5 months ago
Hmm... you forgot to remove your OpenAI API key from the source code!