Building a RAG application using open-source models (Asking questions from a PDF using Llama2)

91,743 views

Underfitted

1 day ago

Comments: 149
@GaurangDave 9 months ago
Oh please don't stop creating these videos, this is really helpful. Very detailed and well explained! Thank you so much for this!
@berkbatuhangurhan708 10 months ago
Came from X, this is an amazing and very detailed walk through. Thanks for explaining even the tiniest bits of everything. Highly recommend this.
@TooyAshy-100 10 months ago
Santiago, your videos on LLMs have been incredibly helpful! Thank you so much for sharing your expertise. I'm eager to see more of your content in the future.
@Meetlimbani27 3 months ago
Seriously, this is the best tutorial I have seen; it explains everything by building it from scratch. I had been searching for a good from-scratch tutorial for around 15 days. My search ends here.
@vadud3 3 months ago
I went through tons of YouTube videos and no one breaks down the process like this one. You actually learn how the Python tools are used, step by step. This is the best video from my research. Thank you for making it available!
@anonymoustechnopath1138 10 months ago
Thanks a lot Santiago!! Really needed these videos for LLMs. Keep them coming!
@swatantrasohni5235 10 months ago
Thanks Santiago for the wonderful video. Running an LLM locally is very handy for a variety of tasks. Eventually everyone will have their own LLM running locally on their device; that's the future.
@liuyan8066 9 months ago
I like these fundamentals courses, especially this last RAG one. I followed other trainings to build AI products, some over 10 hours long; after I finished, I still didn't fully understand why I coded things that way. These courses make the connections step by step. Thank you.
@QuentinFennessy 9 months ago
This is an excellent walk through - easy to follow and very practical
@surygarcia6823 4 months ago
This is easily the best RAG video out there
@yasirgamieldien 9 months ago
This is an amazing video. Literally answered all the questions I had on building a RAG and it was really useful to see the comparison between GPT, Llama, and Mixtral
@sarash5061 9 months ago
This was just amazing, you are a star. Thanks for all the effort.
@lokeshsharma4177 8 months ago
This is the BEST video ever made comparing all the LLMs performing the same task. God bless you.
@geethikaisurusampath 9 months ago
This is really helpful, especially the explanations behind why to do things. Keep up the good work. Respect to you, man.
@MarkoKhomytsya 10 months ago
Thank you for the video! I found it particularly intriguing to consider the possibility of obtaining more accurate responses from the PDF using the Llama2 model. Given that local Language Models (LMs) tend to be highly sensitive to how queries are formatted, I believe it's crucial to refine your example further. Here are a couple of suggestions: 1) Instead of relying on a basic parser, it would be beneficial to prepare a set of predefined questions and answers. For instance, a question like "How much does the course cost?" could have a straightforward answer like "$400." 2) It's also important to determine the optimal format for prompts, specifically tailored for models like Mistral. By addressing these points, you could develop a truly functional product that delivers accurate responses. As it stands, most examples seem to demonstrate that local models struggle with practical applications and aren't quite ready for real-world deployment.
@underfitted 10 months ago
Great suggestions!
@mehmetbakideniz 10 months ago
Hi. Prompt engineering would definitely solve the problem of verbose answers, but do you think it would also correct the hallucinations seen in the video?
@MarkoKhomytsya 10 months ago
Good question, @mehmetbakideniz! I would like to know the answer too!
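The prompt-format suggestion in the thread above can be made concrete with a small sketch. It wraps a RAG context/question pair in the `[INST]` tags that Mistral-style instruct models expect; the instruction wording and function name are illustrative assumptions, not code from the video:

```python
# Sketch: wrap a context/question pair in the [INST] instruction format
# used by Mistral-style instruct models. The instruction text here is an
# illustrative assumption, not the prompt from the video.

def build_mistral_prompt(context: str, question: str) -> str:
    """Return a prompt string in the Mistral instruct format."""
    instruction = (
        "Answer the question based on the context below. "
        'If you can\'t answer, reply "I don\'t know."\n\n'
        f"Context: {context}\n\n"
        f"Question: {question}"
    )
    return f"<s>[INST] {instruction} [/INST]"

prompt = build_mistral_prompt(
    context="The course costs $400.",
    question="How much does the course cost?",
)
print(prompt)
```

Pairing such a template with a handful of predefined question/answer checks, as suggested above, gives a quick way to compare how sensitive each local model is to formatting.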
@junaidali1853 10 months ago
Lovely. Super useful video. I’ll be building a RAG system with a Vector Database and langchain for my freelance client for around $2,000 or more. Thanks Santiago for helping make my life better.
@mune8937 2 months ago
Wow! As a student, I work in this job. Where did you get this job?
@junaidali1853 2 months ago
@mune8937 That's on Upwork.
@sumitrana8114 10 months ago
Thank you for leaving your job and starting your channel.
@ankandas3413 2 months ago
Teachers like you make the world better.
@underfitted 2 months ago
Thanks
@asifm3520 8 months ago
That was a really clear explanation. Even novices will have no trouble following along.
@sushanths.l4865 10 months ago
This is a great video, Santiago. I really learned a lot.
@vasuchewprecha 9 months ago
You are by far the best teacher on YouTube regarding ML/AI. Please consider launching a course on generative AI.
@underfitted 9 months ago
Thanks!
@bhusanchettri8594 9 months ago
Great piece of work. Well explained!
@samcavalera9489 10 months ago
Thanks Santiago! I am a student of your ML School course and I have taken it in two different cohorts. Your ML School course is definitely the best of its kind in the market. Can you please design a new course on RAG that covers everything about this awesome technology, including evaluation techniques and deployment? That would be wonderful, and I cannot wait to enrol in your RAG (and any other AI) course!
@underfitted 10 months ago
Working on it!
@samcavalera9489 10 months ago
@underfitted Many thanks 🙏 🙏 🙏
@SuhasKM-tl1rg 10 months ago
I love your content. More of this in my feed please!
@ThamBui-ll7qc 9 months ago
Great video, I would love to see how to properly structure the prompt and make the bot remember context as conversation goes on...
@fintech1378 10 months ago
Is searching via embeddings always better than "traditional" search, i.e., a very long context window? When should we use one or the other? And what if we want to build a multimodal video recommendation system?
@koko9712 10 months ago
Nice video Santiago ! Keep up the good work
@Orenji902 6 months ago
Incredible video; I really like the longer code-along content.
@square007tube 8 months ago
Many thanks for this video. I walked through it and was able to install Ollama3 on my machine, but I have an NVIDIA MX250 GPU, which takes a long time to answer the questions: 7 minutes for two questions. I will watch your LLM playlist.
@toddroloff93 7 months ago
Nicely done. I always learn something from your videos. Looking forward to more content. Thanks for doing them.
@underfitted 7 months ago
Thanks for coming back!
@HelloIamLauraa 4 months ago
OMG, exactly what I need. Can't wait to watch!
@seanb9949 10 months ago
Another great video Santiago! I really look forward to seeing more of these. Heck, I'll watch the ads to make sure you get some $$$ 🙏
@underfitted 10 months ago
Ha ha! Thanks!
@mehmetbakideniz 10 months ago
This was super helpful. I noticed that on my M2 Pro laptop some cells took 16 seconds, while they took just 0.5 seconds on your computer; then you said you are using an M3 GPU. How can I make sure that I am using the GPU instead of the CPU when executing this code? Or does LangChain already utilize the GPU when needed?
@asnair 5 months ago
Excellent! What about the last step -- saving the vector embeddings in the pinecone database?
@RaviShamihoke 6 months ago
I just created the OpenAI API key, but I'm getting a rate limit error.
@adinathdesai6880 8 months ago
Amazing Video. You added great value to our knowledge. Thank you so much.
@noa2427 9 months ago
I am running into a vector store problem: an import error for docarray, which I did install. I tried many ways and many versions of docarray and DocArrayInMemorySearch. Any help would be appreciated, thanks.
@SarthakVashisth-y6z 1 month ago
If I would like to use the newer Llama models, like 3.1 or even 3.3 to some extent, would I need to make some changes in the code, or perhaps the libraries, to make it work?
@RameshBaburbabu 10 months ago
Wow, great video. I was able to walk through it with you and finish to the end. "Batch" is great. Thanks, please post more videos 🙏🙏
@dannysuarez6265 7 months ago
What a great presentation! Thank you so much, sir!
@ravindarmadishetty736 21 days ago
I have a question: when we deal with a huge document repository, how can it be scaled? ChromaDB and Pinecone are limited, I guess. Please help me understand how to handle this.
@GrantNaylor-b8l 3 months ago
More great content. Clear, easy to follow. I have a question: if you only wanted a simple RAG to answer questions from small snippets of text (a few pages, not hundreds), would a vector memory store really be a bad thing?
@researchpaper7440 10 months ago
I was looking for these videos. Next, I am looking for a model to train on SQL data.
@TheMunishk 10 months ago
Congrats and well done for producing this useful content. Exactly what I was looking for to kick-start my LangChain journey with these models. Let me practice this, but I was also looking at how to integrate all this into a front end. Do you have a video on which tools to use to build a front end for the prompt that will interact with the backend LLMs?
@tipiapagupo 7 months ago
Amazing video! Are you still planning to make the video on how to communicate with websites? I'm really curious about the technologies you consider most relevant.
@PoojaGori 3 days ago
Santiago, does the mlschool course also include the latest GenAI development?
@fatiga2426 8 months ago
Santiago, very good video! One question: why do you use a parser to get the model's output as a string? Why not get the content directly? Regards.
@mrskenz1068 9 months ago
Thanks for the video. How can we do this for scientific PDFs that contain a lot of mathematical and chemical formulas?
@RomanovDK 2 months ago
Basic question: which editor is that? The Terminal app on my Mac does not seem to be it.
@ravindarmadishetty736 6 months ago
It's such a fantastic video, Santiago. 🎉
@TempleTimes 6 months ago
At 17:25, when I try to run the code to invoke Llama 2, it gives me an error saying the module is not callable. I have installed Llama 2 on my laptop; kindly help.
@avinashnair5064 4 months ago
Hey, can you please create a video on how we can use this in a UI? I tried using Streamlit, but I am not able to get the same output that I get in the terminal.
@alextiger548 7 months ago
Man, thanks for what you are doing! Fantastic stuff.
@derekottman9622 9 months ago
This video is supposed to have a popup link to another "from scratch" video, but when that link pops up, I think it actually points back to this video in a circular loop, instead of to the other video it's supposed to point to. (This video has a link to itself, if I didn't get my wires crossed.)
@underfitted 9 months ago
I don’t think that’s possible? Anyway, you’ll find the other video here: kzbin.info/www/bejne/eKPWoJaAl5KZd9Esi=BVJfS_0Iq9lwRX0B
@GEORGE.M.M 5 months ago
Hi there! I'm new to AI and RAG systems and have spent the past few days diving into your tutorial to understand each step and debug along the way. I have to say, THIS IS A GREAT TUTORIAL! I do have a question about alternative local LLM approaches. Regarding the accuracy of asking questions and retrieving relevant information from PDF documents like research papers and books using local models, would you recommend this approach over using tools like Ollama UI and LM Studio? Are there specific advantages or disadvantages to consider when choosing between these methods?
@MD.IKRAMULHOQUE-c3o 3 days ago
Can anyone clarify how the locally hosted Ollama model is connected to the program? I can see we are using an OpenAI key, so I got confused.
@chanukyapekala 9 months ago
Excellent work! So clear and concise.
@UsmanTahir6 4 months ago
Very good tutorial, thanks. For some reason docarray is not working on my end, so I used FAISS instead (if anyone is interested to know). Thanks Santiago! 🙂
@_kissimusic 6 months ago
Can I embed this, such as with Laravel, and then serve it on a host online so I can access it anywhere?
@antonioskarvelas1325 7 months ago
I have a problem with the code. I run it in VS Code and get the error: ValueError: Ollama call failed with status code 403. Could you help me?
@gauravpratapsingh8840 7 months ago
Hey, can you make a video on a website-page Q&A chatbot using the LangChain framework, with some open LLMs or free public API keys?
@fredericv3497 7 months ago
Really good job and a clear tutorial! Thank you.
@surygarcia6823 4 months ago
Is there any tutorial for making our own vector database for production?
@AntoineToussaint 2 months ago
Great videos, by the way; I started with the "Build from scratch" one. Quick question though: why do the embeddings for the RAG need to match the model type? My understanding is that we only need to be consistent between the question embedding and the store embedding for similarities, but then the retriever pulls the actual documents and sends them to the model. Later in the video, you say you can use any retriever, like Google search or anything, so it seems to me the embeddings can be anything as long as the question and document embeddings are the same.
@underfitted 2 months ago
They don’t. That was my mistake. You only need to make sure the embedding you do on the query matches the embedding you used when processing the data. But you can use any embedding model regardless of the embedding used by the LLM.
@AntoineToussaint 2 months ago
@underfitted Thank you for the prompt (pun intended) reply! I watched two videos and now I am running Ollama! You explain so well and bring a lot of passion to these. I am trying to build a simple model so I can take a query and extract some custom filtering to send to OpenSearch. Great stuff. Thank you for making these videos!
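The point settled in this thread can be shown with a toy example: retrieval only requires that the query and the stored documents go through the same embedding function, and the LLM never sees those vectors, only the retrieved text. The keyword-presence "embedding" below is deliberately simplistic and purely illustrative, unrelated to any real embedding model:

```python
import math

# Toy "embedding": presence of a few keywords. Purely illustrative; a real
# system would use a trained embedding model, but the mechanics are the same.
VOCAB = ["course", "cost", "llama", "andes", "price"]

def embed(text: str) -> list[float]:
    lowered = text.lower()
    return [1.0 if word in lowered else 0.0 for word in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str]) -> str:
    # The only requirement: embed the query with the SAME function used to
    # embed the documents. No LLM is involved at this stage.
    return max(documents, key=lambda d: cosine(embed(query), embed(d)))

documents = ["The course costs $400.", "Llamas live in the Andes."]
best = retrieve("What is the price of the course?", documents)
print(best)  # the course-pricing document is the closest match
```

Whatever model answers the question afterwards (GPT, Llama 2, Mixtral) receives only `best` as context, which is why the two embedding choices are independent.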
@TomasTrenor 8 months ago
Amazing video Santiago! Many thanks. Just tried it with Llama 3 8B and it seems it is not as accurate as Llama 2 (which obviously does not make sense). I need to dig deeper into it.
@nguyenquocviet4287 10 months ago
Dear Santiago, I would like to ask about evaluation metrics. Do you know any metric for evaluating the generated answers against the true answers (e.g., the ROUGE metric)? Thank you so much!
@joeldartez829 10 months ago
Truly, you're the best. I've never met someone who explains things so well. I apologize if it is written somewhere and I missed it, but I wanted to ask: if I buy your course today, can I have access to the past content today? I don't want to wait until the live sessions in April (or I want to arrive prepared for them). Thank you very much.
@underfitted 10 months ago
Yes, you get immediate access to everything from day 1.
@azharsham 7 months ago
Brilliant video! One quick question: are you passing the same string to both the question and the context here? If yes, does it always work in the case of a document reader?
@jpagano569 7 months ago
Hmm, is there a way to run this in Colab or a GitHub Codespace? I suppose the point is to run locally, but I hate setting up dev environments (because I'm new to coding!).
@farukondertr 8 months ago
Dude, it's awesome! Please don't stop.
@chiragharish1020 7 months ago
Great video 👏👏 Really helpful. Can you tell me which Mac model you have? I have an M2 Mac with 8 GB; I doubt I will be able to run these powerful models.
@Jonathan-ru9zl 9 months ago
Great! Can this model and setup serve as an assistant to, let's say, a board design engineer who has thousands of component specs in PDF files, to find and analyze the components faster?
@energyexecs 2 months ago
I collect old vintage engineering and science books from old bookstores that are ready to be junked; hopefully they are not copyrighted. 😊 My goal is to somehow convert these old books to digital, embed them in a vector database, and then take them through a transformer and semantic layers into an LLM of some kind, searchable by natural language. I created an Independent Private Library Association on Facebook, so my thought is to use Meta Llama as my LLM; I probably need to ask the Meta Llama people about the costs to do this. It's just a hobby of mine. I have lots of old books, and now libraries are trashing beautiful old vintage books. I want to do all modalities, because these books have beautiful lithograph sketches and single-line drawings; I think you've seen such books.
@gilbertoparra5255 6 months ago
Thank you for the content, very helpful. I have one question: how do I know it is running locally? I mean, for every model we used the LangChain library as if the model were accessed through an API.
@underfitted 6 months ago
They are running locally because I’m using Ollama to host the models in my computer. We use Langchain exactly as we would use it to access online models. That’s good. It means we can switch models without changing the code.
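The "switch models without changing the code" point in this reply can be sketched with stand-in classes. These fakes are not LangChain code (a real setup would pass, e.g., a LangChain `Ollama` or `ChatOpenAI` object); they just illustrate why the calling code is identical whether the model runs locally or behind a remote API:

```python
class FakeLocalModel:
    """Stand-in for a model served locally (e.g. by Ollama)."""
    def invoke(self, prompt: str) -> str:
        return f"[local] {prompt}"

class FakeHostedModel:
    """Stand-in for a model behind a remote API (e.g. OpenAI)."""
    def invoke(self, prompt: str) -> str:
        return f"[hosted] {prompt}"

def answer(model, question: str) -> str:
    # The "chain" relies only on the .invoke() interface, so swapping a
    # local model for a hosted one requires no changes here.
    return model.invoke(question)

print(answer(FakeLocalModel(), "What is RAG?"))
print(answer(FakeHostedModel(), "What is RAG?"))
```

This duck-typed interface is exactly the property that lets the tutorial compare GPT, Llama 2, and Mixtral with the same chain code.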
@KumR 7 months ago
Can you also please add a UI using Streamlit?
@TexttoInvoice 9 months ago
This video is awesome, so, so great!!! Thank you so much for such a quality video. Question: what's the best way to make the results more accurate to the document? Can using structured data such as spreadsheets and CSV files give you a more accurate answer; does the model prefer interacting with them? Also, what if there were multiple instances of the data, say several different documents containing the same information that needs to be referenced? If anyone has found a good way to optimize getting correct answers from retrieval, please let me know! Thanks.
@alexstele5315 9 months ago
Thanks a bunch! 🎉 I've been looking for something like that.
@marschrr 9 months ago
Came here from X. Great overview on how to implement an LLM+RAG locally. Any multimodal ones incoming?
@underfitted 9 months ago
Yes
@Rituraj-s5b 10 months ago
Very informative video, Santiago!
@vivekatbitm 7 months ago
Great video. Just curious though: how come both the GPT and Llama models generated the same joke? Isn't that weird?
@ManuRaj-i4b 8 months ago
Sir, how can I do this project using Java or Spring Boot?
@anmolacharya7872 4 months ago
Why won't this work for Llama 3.1? It says "llama2 not found". Please help.
@anmolacharya7872 4 months ago
Edit: I made it work. I had to go to every instance of llama2 in the code and change it to the Ollama model I was using.
@HelloIamLauraa 2 months ago
Why don't we use chunks?
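For context on the question above: with larger documents, the text is usually split into overlapping chunks before embedding, so each vector covers a focused span of text; a small PDF can get away with coarser splits. A minimal fixed-size chunker, written from scratch purely for illustration (a real project would typically reach for something like LangChain's `RecursiveCharacterTextSplitter`):

```python
def split_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks that overlap, so content cut at a
    chunk boundary still appears whole in at least one chunk."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

# Tiny demo on the alphabet: chunks of 10 characters, overlapping by 2.
chunks = split_text("abcdefghijklmnopqrstuvwxyz", chunk_size=10, overlap=2)
print(chunks)  # neighbouring chunks share their 2 boundary characters
```

The overlap is what keeps a sentence that straddles a boundary retrievable as a whole from at least one chunk.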
@basantsingh6404 10 months ago
If you are using an OpenAI key, it means you are paying to use the OpenAI model. How is that open source?
@Hizar_127 8 months ago
I want to deploy it on the cloud. Is that possible?
@theDrewDag 10 months ago
Is it actually true that you need models to be aligned with their respective embeddings? I don't think so 🤔 Embeddings are used only for the vector search and lookup functionality. At the end of the day, all the model sees is your textual prompt. You can use OpenAI embeddings with any open-source model and vice versa.
@underfitted 10 months ago
You are right. In this example I only use the embeddings for the search, so what I said is irrelevant here.
@gonzaloplazag 8 months ago
Great video! incredibly helpful!!!
@TheHikmaQuest 9 months ago
Kindly create another video in which you use Pinecone, and also add a GUI, making it a complete standalone application.
@serhiua 8 months ago
Pinecone is already explained here: kzbin.info/www/bejne/eKPWoJaAl5KZd9E&ab_channel=Underfitted
@nikkypuvvada2666 8 months ago
Thanks
@loko5307 5 months ago
Great content! Helped me a lot. Thanks!
@mehershahzad-n5s 4 months ago
Your content is superb
@mehmetbakideniz 10 months ago
Thanks!
@underfitted 10 months ago
Thank you so much! Really appreciate you!
@kergee 8 months ago
The lesson was very good, thanks.
@ОлександрВасильєв-б5д 7 months ago
Can I use LLama 3 models with your tutorial?
@underfitted 7 months ago
Yes
@Sam-oi3hw 7 months ago
Does anyone know of similar videos on KZbin? I'd really appreciate it.
@sumittupe3925 9 months ago
Thanks for the video, well explained!
@Omar-p9r3c 10 months ago
Has anybody used OllamaEmbeddings and got it working?
@TheHikmaQuest 9 months ago
Kindly convert the same project into a GUI-based application.
@sam-uw3gf 10 months ago
Good video, and your tweets are even more informative ✌
@lindavid1975 7 months ago
Thank you Santiago - sorry about the code red.