If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag
@awakenwithoutcoffee · 6 months ago
Hi there! Personally, I find the price too steep for only 2 hours of content, but maybe you can convince us with a preview! Cheers
@poloceccati · 10 months ago
Very nice idea with this 'code display window' in your video: now the code is much easier to read, and much easier to follow step by step. Thanks.
@TomanswerAi · 10 months ago
Excellent video I’ve been needing this. Very slick way to combine the responses from semantic and keyword search.
@paulmiller591 · 10 months ago
Fantastic video and very timely. Thanks for the advice; I have made massive progress because of it.
@engineerprompt · 10 months ago
Glad it was helpful and thank you for your support 🙏
@lakshay510 · 10 months ago
Hey, these videos are really helpful. What do you think about scalability? When the number of documents grows from a few to thousands, the performance of semantic search decreases. Also, have you tried Qdrant? It worked better than Chroma for me.
@engineerprompt · 10 months ago
Scalability is potentially an issue; I will be making some content around it. In theory, retrieval speed will decrease as the number of documents increases by orders of magnitude, but in that case approximate nearest-neighbor search will work. I haven't looked at Qdrant yet, but it's on my list. Thanks for sharing.
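A note for readers on the approximate-neighbor point: Chroma already builds an HNSW index (an approximate nearest-neighbor structure), and its build/search trade-offs can be tuned through collection metadata. A minimal sketch, assuming the LangChain `Chroma` wrapper used in the video; the `hnsw:*` keys are Chroma settings, but the values here are illustrative, not recommendations:

```python
# Illustrative HNSW tuning knobs for a Chroma collection. Higher ef/M values
# trade indexing speed and memory for better recall at large document counts.
hnsw_settings = {
    "hnsw:space": "cosine",       # distance metric used by the index
    "hnsw:construction_ef": 200,  # build-time accuracy vs. speed
    "hnsw:search_ef": 100,        # query-time accuracy vs. speed
    "hnsw:M": 32,                 # graph connectivity (memory vs. recall)
}

# Hypothetical usage with the LangChain wrapper:
# vectorstore = Chroma.from_documents(chunks, embeddings,
#                                     collection_metadata=hnsw_settings)
```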
@SRV900 · 5 months ago
Hello! First of all, thank you very much for the video! Secondly, at minute 10:20 you mention that you are going to create a new video about obtaining chunk metadata. Have you made that video? Again, thank you very much for the material.
@saqqara6361 · 10 months ago
Great! While you can persist the Chroma DB, is there a way to persist the BM25Retriever, or do you always have to re-chunk when starting the application?
@vikaskyatannawar8417 · 6 months ago
You can fetch the documents from the DB and feed them to it.
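To make that concrete: one way to "feed it" is to dump the chunks already persisted in Chroma and rebuild the BM25 retriever from them at start-up, instead of re-chunking the PDFs. A sketch, assuming the LangChain classes used in the video; `pairs_from_chroma_dump` is a helper name invented here:

```python
# Chroma.get() returns a dict with "documents" (texts) and "metadatas";
# this helper turns that dump into (text, metadata) pairs for BM25.
def pairs_from_chroma_dump(data):
    metadatas = data.get("metadatas") or [None] * len(data["documents"])
    return [(text, meta or {}) for text, meta in zip(data["documents"], metadatas)]

# Hypothetical usage with LangChain (untested sketch):
# vectorstore = Chroma(persist_directory="db", embedding_function=embeddings)
# texts, metas = zip(*pairs_from_chroma_dump(vectorstore.get()))
# bm25_retriever = BM25Retriever.from_texts(list(texts), metadatas=list(metas))
```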
@MikewasG · 10 months ago
This video is really helpful to me! Thanks a lot!
@engineerprompt · 10 months ago
Thanks 😊
@mrchongnoi · 10 months ago
How do you handle multiple unrelated documents when finding the answer for the user?
@parikshitrathode4578 · 10 months ago
I have the same question: how do we handle multiple documents of similar types, say office policies for different companies? The similarity search will return all similar chunks (k=5) as context to the LLM, which may contain different answers depending on each company's policy, so there is a lot of ambiguity. Also, how do we handle tables in PDFs? When asked questions about them, they don't return correct answers. Can anyone help me out here?
@texasfossilguy · 10 months ago
One way would be to have an agent select a specific database based on the query, or to keep a variable for the user stating which company they work for. You would then have multiple databases, one for each company involved, which also keeps each database smaller. Handling it that way would speed up both the search and the response.
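The per-company routing idea reduces to a small dispatch step before retrieval. A conceptual stdlib sketch (names are illustrative; in practice the dict values would be retriever or vector-store objects, one per company):

```python
# Pick the knowledge base for the requesting user's company, so similarity
# search never mixes chunks from different companies' policies.
def route_retriever(retrievers_by_company, company):
    try:
        return retrievers_by_company[company]
    except KeyError:
        raise ValueError(f"No knowledge base configured for {company!r}") from None
```

An agent-based variant would instead ask an LLM to classify the query and return the company key.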
@attilavass6935 · 10 months ago
It's great that the example code uses free LLM inference like Hugging Face (or OpenRouter)!
@morespinach9832 · 10 months ago
But can we host them locally? I work in an industry that can't use public SaaS offerings.
@rafaf6838 · 10 months ago
Thank you for sharing the guide. One question: how do I make the response longer? I have tried changing the max_length parameter, as you suggested in the video, but the response is always only ~300 characters long.
@linuxmanju · 10 months ago
It depends on the model too. Maybe your LLM doesn't support more than 300? Which model are you using, by the way?
@engineerprompt · 10 months ago
Which model are you trying? How long is your context?
@sarcastic.affirmations · 10 months ago
@@engineerprompt I've experienced a similar issue; I'm using the zephyr-7b-beta model. Also, I don't want the AI to pull answers from the internet; it should only respond if the context is available in the provided database. I tried prompting for that, but it didn't help. Any tips?
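One common mitigation for this (not from the video, and small models such as zephyr-7b-beta may still ignore it) is to instruct the model explicitly to answer only from the retrieved context. The wording below is illustrative:

```python
# A grounding prompt: the model is told to refuse when the retrieved
# context does not contain the answer, instead of falling back on its
# pretraining knowledge.
GROUNDED_PROMPT = """Answer the question using ONLY the context below.
If the answer is not contained in the context, reply exactly:
"I don't know based on the provided documents."

Context:
{context}

Question: {question}
Answer:"""

def build_prompt(context, question):
    return GROUNDED_PROMPT.format(context=context, question=question)
```

Pairing this with a low temperature tends to make the refusal behavior more reliable.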
@PallaviChauhan91 · 10 months ago
@@sarcastic.affirmations Did you get what you were trying to find?
@andaldana · 10 months ago
Great stuff! Thanks!
@micbab-vg2mu · 10 months ago
Thank you for the video:)
@zYokiS · 10 months ago
Amazing video! How can you use this in a conversational chat engine? I have built conversational pipelines that use RAG, but how would I do that here with different retrievers?
@engineerprompt · 10 months ago
This should work out of the box; you will just need to replace your current retriever with the ensemble one.
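A sketch of what "replace your current retriever with the ensemble one" could look like, assuming the LangChain `EnsembleRetriever` and `ConversationalRetrievalChain` APIs; the weights are illustrative:

```python
# Normalize retriever weights so they always sum to 1.0, whatever raw
# importance values you start from.
def ensemble_weights(*raw):
    total = sum(raw)
    return [w / total for w in raw]

# Hypothetical wiring (untested sketch):
# ensemble = EnsembleRetriever(
#     retrievers=[bm25_retriever, vector_retriever],
#     weights=ensemble_weights(1, 1),
# )
# chain = ConversationalRetrievalChain.from_llm(
#     llm=llm, retriever=ensemble, memory=memory)
```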
@JanghyunBaek · 9 months ago
@engineerprompt Could you convert the notebook to LlamaIndex, if you don't mind?
@KOTAGIRISIVAKUMAR · 10 months ago
Great effort and good content! 😇😇
@12351624 · 10 months ago
Amazing video, thanks!
@engineerprompt · 10 months ago
🙏
@deixis6979 · 10 months ago
Hello! Thanks for the video. I was wondering if we can use it on CSV files instead of PDFs? How would that affect the architecture?
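For CSVs the architecture barely changes; only the loading step differs, and each row typically becomes one document (often with no further chunking needed). LangChain ships a `CSVLoader` for this; the stdlib helper below shows the same row-to-text idea:

```python
import csv
import io

# Turn each CSV row into a "column: value" text blob, the same shape a
# row-per-document loader would index.
def rows_to_texts(csv_text):
    reader = csv.DictReader(io.StringIO(csv_text))
    return ["\n".join(f"{k}: {v}" for k, v in row.items()) for row in reader]

# LangChain equivalent (untested sketch; the path is hypothetical):
# from langchain_community.document_loaders import CSVLoader
# docs = CSVLoader(file_path="data.csv").load()
```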
@Tofipie · 9 months ago
Thanks! I have 500k documents. I want to compute the keyword retriever once and load it the same way I load the external index for the dense vector DB. Is there a way?
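One option, since `BM25Retriever` has no `persist()` like Chroma: it is a plain Python object, so it can be pickled once after the 500k-document build and reloaded at start-up. A sketch under that assumption (pickle files are only safe to load if you wrote them yourself):

```python
import pickle

# Serialize an already-built retriever to disk so it is computed only once.
def save_retriever(retriever, path):
    with open(path, "wb") as f:
        pickle.dump(retriever, f)

# Reload it at application start-up instead of re-indexing the corpus.
def load_retriever(path):
    with open(path, "rb") as f:
        return pickle.load(f)
```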
@kenchang3456 · 9 months ago
Excellent video, it's helping me with my proof of concept. Thank you.
@engineerprompt · 9 months ago
Glad to hear that!
@kenchang3456 · 7 months ago
@@engineerprompt I finally got my POC up and running to search for parts and materials using hybrid search, and it works really well. Thanks for doing this video.
@engineerprompt · 7 months ago
@@kenchang3456 This is great news!
@hassentangier3891 · 8 months ago
Great! Do you have videos on using .docx files?
@engineerprompt · 8 months ago
Thanks! The same approach will work, but you will need a separate loader for it. Look into unstructured.io.
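A sketch of the loader swap, assuming LangChain's document-loader classes; `pick_loader` is a helper invented here to show a simple dispatch by file extension:

```python
from pathlib import Path

# Map file extensions to the name of a suitable LangChain loader class.
def pick_loader(path):
    table = {
        ".pdf": "PyPDFLoader",
        ".docx": "Docx2txtLoader",
        ".doc": "UnstructuredWordDocumentLoader",  # unstructured.io-backed
        ".csv": "CSVLoader",
    }
    suffix = Path(path).suffix.lower()
    if suffix not in table:
        raise ValueError(f"No loader configured for {suffix!r}")
    return table[suffix]

# Hypothetical usage (untested sketch; the filename is made up):
# from langchain_community.document_loaders import Docx2txtLoader
# docs = Docx2txtLoader("policies.docx").load()
```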
@PallaviChauhan91 · 10 months ago
Hi, I have a question; I hope you reply. If we give it a PDF with a bunch of video transcripts and ask it to formulate a creative article based on the given info, can it actually do tasks like that? Or is it only useful for finding relevant information in the source files?
@engineerprompt · 10 months ago
RAG is good for finding relevant information. For the use case you are describing, you will need to put everything in the LLM's context window so it can look at the whole file. Hope this helps.
@PallaviChauhan91 · 10 months ago
@@engineerprompt Can you point me to a good video or channel that focuses on accomplishing such things using local LLMs, or even GPT-4?
@clinton2312 · 10 months ago
I get KeyError: 0 when I run this: # Vector store with the selected embedding model / vectorstore = Chroma.from_documents(chunks, embeddings). What am I doing wrong? I added my HF token with read permissions the first time, and then with write as well... I would appreciate the help. Thanks for the video, though; it's amazing.
@goel323 · 9 months ago
I am getting the same error.
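A hedged guess at the cause, since the traceback isn't shown: with the Hugging Face Inference API, KeyError: 0 often means the endpoint returned an error object (invalid token, model still loading) instead of a list of vectors, which then fails when indexed. Checking a single embedding up front can surface the real error before `Chroma.from_documents` runs; `check_embedding` is a helper invented here:

```python
# Fail with a readable message if the embedding backend did not return a
# non-empty numeric vector (e.g. it returned an API error dict instead).
def check_embedding(vector):
    if not isinstance(vector, (list, tuple)) or not vector:
        raise RuntimeError(
            f"Embedding backend returned {type(vector).__name__}: {vector!r}")
    return vector

# Hypothetical usage (untested sketch):
# check_embedding(embeddings.embed_query("smoke test"))
# vectorstore = Chroma.from_documents(chunks, embeddings)
```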
@aneerpa8384 · 10 months ago
Really helpful, thank you ❤
@karanv293 · 9 months ago
I don't know which RAG approach to implement. Are there benchmarks out there for the best solution? My use case will be hundreds of LONG documents, even textbooks.
@TheZEN2011 · 10 months ago
I'll have to try this one. Great video!
@engineerprompt · 10 months ago
Glad it was helpful
@chrismathew638 · 10 months ago
I'm using RAG for a coding model. Can anyone suggest a good retriever for this task? Thanks in advance!
@denb1568 · 10 months ago
Can you add this functionality to localGPT?
@googleyoutubechannel8554 · 10 months ago
Wait, this doesn't seem like RAG at all? If I'm following, the LLM is not using embedding vectors at all in the actual inference step. It seems you're using a complex text -> embedding -> search pipeline to build a text search engine that just injects regular text into the context, but the embeddings are never added directly to the model. Couldn't you generate that extra 'ad-hoc' search text you're plopping into the context window in any number of ways, with embeddings -> DB -> text being only one of them? And this method has none of the advantages of actually 'grafting' embeddings onto the model, since it uses up the context window.
@s11-informationatyourservi44 · 10 months ago
The whole point is to fix the broken part of RAG: the typical RAG implementation doesn't do well with anything larger than a few docs.
@abhinandansharma3983 · 8 months ago
Where can I find the PDF data?
@engineerprompt · 8 months ago
You will need to provide your own PDF files.
@vamshi3676 · 10 months ago
The background is a little distracting; it would be better to avoid the flashy one, as I couldn't concentrate on your lecture. Thank you.