This is exactly what I've been trying to find for the last couple of days: simple instructions on how to do this with pure Python and a local LLM. Thank you!
@antonioalvarez3246 • 6 months ago
x2! Thanks, @Prompt Engineering!
@nmstoker • 6 months ago
Brilliantly explained with clarity and insight, thank you! Also really pleased you point out that RAG emerged from IR ideas and wasn't brand new: when I saw it I was like, haven't people seen Facebook's DrQA from 2017?! And even that wasn't out of the blue; there's a long-established history with IR 👍
@engineerprompt • 5 months ago
Thank you. I agree, in most cases we are reinventing the wheel and giving old approaches new names. Interestingly enough, a simple keyword-based search (BM25) will still outperform dense embeddings in most cases!
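For anyone curious, a minimal, purely illustrative BM25 scorer in plain Python looks roughly like this (not from the video; k1 and b are the usual defaults):

import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(d) for d in tokenized) / len(tokenized)
    df = Counter(t for d in tokenized for t in set(d))   # document frequency per term
    N = len(tokenized)
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            dl_norm = k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[term] * (k1 + 1) / (tf[term] + dl_norm)
        scores.append(score)
    return scores

docs = ["keyword search with BM25", "dense embeddings for retrieval"]
print(bm25_scores("keyword search", docs))   # the first doc should score higher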
@madbike71 • 6 months ago
Excellent and concise description. Thank you.
@CreativeEngineering_ • 5 months ago
I just got done implementing an almost identical setup, using SQLite and fastBart, all in C#. It's amazing.
@mokisable • 3 months ago
Nice, I've been wanting to get started with RAG in C#... Any tips or guidance for a newbie? I was using KoboldCPP's web UI for LLM generation... but have NO idea where to go. None of these videos even hint at anything with C#... let alone Kobold.
@nshettys • 6 months ago
Brilliant! Thanks for this one
@RūtenisRaila • 3 months ago
Great work! Very well explained.
@vitalis • 6 months ago
The problem with RAG solutions is that they don't hold up with larger amounts of unstructured data. I wish there were a solution that includes long-term memory for chat agents, so that they get smarter about your context as you chat with them.
@engineerprompt • 6 months ago
Google released context caching for their long-context models. That could be a solution.
@Kishorekkube • 6 months ago
@@engineerprompt Is there a way to save and load the vector store that you made here, sir?
@tollington9414 • 5 months ago
A graph RAG solution may work better for large amounts of unstructured data.
@vaishnokmr • 6 months ago
Yes! I did the same a year ago during my research, and it works.
@LEANSCH96 • 6 months ago
Can this also be implemented with a local model through Ollama?
@nguyentran7068 • 3 months ago
Of course, there is no restriction.
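As an illustration (not the notebook's code), one way is to point the same OpenAI client at Ollama's OpenAI-compatible endpoint, assuming Ollama is running locally and a model such as llama3 has been pulled:

from openai import OpenAI

# Ollama exposes an OpenAI-compatible API at localhost:11434; the api_key value is ignored.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama3",   # assumes `ollama pull llama3` has been run
    messages=[{"role": "user", "content": "Answer using only the provided context: ..."}],
)
print(response.choices[0].message.content)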
@Connor51440 • 6 months ago
Great video, nice style and easy to listen to, subscribed 👍🏼
@ujjwalsrivastava6248 • 5 months ago
Hello sir! I want to build a question-answering chatbot in Python that answers from a provided knowledge base in PDF or text format. I've been working on this for the last 10 days but haven't managed it so far. Can you please guide me through this project, sir?
@MoFields • 6 months ago
What are the best ways of importing documents into the RAG system from corporate systems such as Google Docs, Confluence, or Notion, without asking your IT? I have done a few things manually, but they are very labour-intensive, for example using scraping tools and Chrome extensions. Is there something a bit more streamlined?
@MoFields • 6 months ago
Also, how do you add indexing, link-backs, and more nuanced chunking mechanisms (aware of the context and type of information)?
@engineerprompt • 6 months ago
You are looking for data connectors in this case. Each of these services will have its own API, or you can use document loaders from LangChain (python.langchain.com/v0.2/docs/integrations/document_loaders/). This is one aspect where I would recommend using a framework.
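A hedged sketch of what that can look like with one of LangChain's community loaders (ConfluenceLoader here; the URL and credentials are placeholders, and the exact constructor arguments vary between versions):

from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://yourcompany.atlassian.net/wiki",   # placeholder
    username="you@example.com",                     # placeholder
    api_key="YOUR_ATLASSIAN_API_TOKEN",             # placeholder
    space_key="DOCS",
    limit=50,
)
docs = loader.load()
texts = [d.page_content for d in docs]   # feed these into the chunking/embedding steps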
@nekososu • 6 months ago
Can you also show how to produce structured output?
@ignaciopincheira23 • 5 months ago
Hi, could you convert complex PDF documents (with graphics and tables) into an easily readable text format, such as Markdown? The input file would be a PDF and the output file would be a text file (.txt).
@engineerprompt • 5 months ago
Yes, check out this video: kzbin.info/www/bejne/o5Wvc6VvfrKgnas
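For reference, a minimal illustrative conversion (not the approach from that video) using PyMuPDF; graphics and tables only come through as whatever text the extractor can recover:

import fitz   # PyMuPDF: pip install pymupdf

doc = fitz.open("input.pdf")                          # placeholder input path
text = "\n\n".join(page.get_text() for page in doc)  # plain-text layer, page by page
with open("output.txt", "w", encoding="utf-8") as f:
    f.write(text)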
@MrJekyllDrHyde1 • 5 months ago
Great job. I'd try to make this work with free/open-source AI models. I also want to see if this will work with a bigger corpus.
@engineerprompt • 5 months ago
It should work with open models. For a bigger corpus, you will need to think about retrieval latency. You might want to look into quantized embeddings in that case.
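One common flavor of that is binary quantization, sketched here with NumPy purely for illustration (roughly 32x smaller vectors with Hamming-distance search; keep the float embeddings around if you want to rescore the top hits):

import numpy as np

def binarize(embs):
    return np.packbits(embs > 0, axis=-1)   # 1 bit per embedding dimension

def hamming_top_k(query_bits, corpus_bits, k=5):
    # XOR then popcount gives the Hamming distance to every stored vector
    dists = np.unpackbits(np.bitwise_xor(corpus_bits, query_bits), axis=-1).sum(axis=1)
    return np.argsort(dists)[:k]

corpus = np.random.randn(10_000, 768).astype(np.float32)   # placeholder dense embeddings
corpus_bits = binarize(corpus)
query_bits = binarize(np.random.randn(1, 768).astype(np.float32))
print(hamming_top_k(query_bits, corpus_bits))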
@bastabey2652 • 6 months ago
I never liked RAG frameworks... thanks for the useful content.
@aryandhakal3158 • 6 months ago
Could you please make a video on a chatbot that can interact with PDF files and answer questions using recent tech? I'm having the most difficulty with outdated tutorials. It would be a great help!
@TresMinecraft • a month ago
Thank you!
@prathameshmandavkar7591 • 6 months ago
Great work 👍🏻 Thanks
@gkhan753 • 4 months ago
As a newbie, I'm hooked on this channel. I'm about to take your RAG course. The issue I have is that every time I try to use LangChain, I get crazy errors about upgrades and incompatibilities with Python versions. How do you address this issue? It's frustrating to resolve, if it can be resolved at all.
@engineerprompt • 4 months ago
My recommendation is to stick to one version of LangChain rather than always using the latest; you can pin it in requirements.txt, and you don't need the latest version in most cases. For Python, use 3.10. Hope this helps.
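For example, a pinned requirements.txt might look like the snippet below; the exact version numbers are placeholders, so pin whatever versions the course notebooks were tested against:

# requirements.txt (placeholder pins)
langchain==0.2.16
langchain-community==0.2.16
openai==1.40.0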
@antonioalvarez3246 • 6 months ago
Great work! Thanks!
@Francotujk • 6 months ago
Hello! I have a question: is the similarity search a way to reduce the number of tokens sent to the OpenAI API? So basically, when you make a query to the LLM, you are not sending the entire text of the Wikipedia page? I'm asking because of token cost, to know exactly what OpenAI will charge us. Your content is probably the best on YouTube! Really appreciate all your videos.
@luizemanoel2588 • 6 months ago
Probably. He used a wiki page, but you may have a 1000-page PDF that would cost a lot to process, and maybe most of it is irrelevant to what you want. When you break up the text and then retrieve the 'n' most relevant chunks, you get what you want faster and more cheaply.
@luizemanoel2588 • 6 months ago
And if you run the AI locally, the more info you send, the slower it will be. So this can let a not-so-powerful PC do the job too.
@engineerprompt • 6 months ago
Yes, there are two parts, as mentioned by @luizemanoel. First, the document can contain a lot of irrelevant info; you only want to provide the LLM with what is relevant to the query. This will improve the responses. The added benefit is fewer tokens, which means lower cost as well.
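In other words, only a handful of chunks ever reach the model. A minimal sketch of that selection step (illustrative, not the exact notebook code):

import numpy as np

def top_k_chunks(query_emb, chunk_embs, chunks, k=3):
    # cosine similarity between the query and every chunk embedding
    sims = chunk_embs @ query_emb / (
        np.linalg.norm(chunk_embs, axis=1) * np.linalg.norm(query_emb)
    )
    best = np.argsort(sims)[::-1][:k]
    return [chunks[i] for i in best]

# context = "\n\n".join(top_k_chunks(q_emb, embs, chunks))
# prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"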
@Francotujk • 6 months ago
@@engineerprompt @luizemanoel2588 OK, thanks to both!
@TheCopernicus1 • 6 months ago
Legend!
@leomeza9396 • 5 months ago
Thank you so much!
@AlphaScraperOne • 6 months ago
500 likes, keep it up!
@Salionca • 6 months ago
Great! Thanks!
@rabeemohammed5351 • 5 months ago
Is the Arabic language supported or not?
@drp111 • 6 months ago
Thanks for the video! However, RAG never convinced me. I'm looking for fine-tuning in 10 lines of code.
@sufiaji • a month ago
Coooolll
@uwegenosdude • 6 months ago
Thanks for this great video. I tried to run your Jupyter notebook. When calling the line "from google.colab import userdata" I get the error ModuleNotFoundError: No module named 'google', and somewhere I see that pkg_resources is deprecated as an API. Is Python 3.12.3 too new? OK, I replaced the Google part; there are other ways to create an OpenAI client! Now it works!
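For anyone hitting the same error: google.colab only exists inside Colab. A hedged sketch of the usual replacement when running locally:

import os
from openai import OpenAI

# Set OPENAI_API_KEY in your shell (e.g. `export OPENAI_API_KEY=sk-...`) before running.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])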
@themax2go • 5 months ago
...yes, you can do it that way, but you lose some accuracy in the relevance between topics.
@geekyprogrammer4831 • 4 months ago
Your course is too expensive
@MeinDeutschkurs • 6 months ago
No frameworks, but please install RAGatouille? WTF!
@Yocoda24 • 6 months ago
Are you also mad he used NumPy? Hahahahah, wtf. Framework: a collection of libraries to build applications. Library: a tool to leverage functionality.
@MeinDeutschkurs • 6 months ago
@@Yocoda24 Well, if the claim is pure Python with no frameworks, then yes. WTF.
@Yocoda24 • 6 months ago
@@MeinDeutschkurs Not sure where you're pulling "pure Python" from? Can you give me a timestamp for when it's said in the video?
@MeinDeutschkurs • 6 months ago
@@Yocoda24 Read the video title: “RAG from Scratch in 10 lines Python - No Frameworks Needed!”
@Yocoda24 • 6 months ago
@@MeinDeutschkurs Oh okay, so it doesn't say pure Python, and he doesn't use any frameworks. Glad we could come to an understanding.