What is RAG? (Retrieval Augmented Generation)

60,905 views

Don Woodlock

3 months ago

How do you create an LLM that uses your own internal content?
You can imagine a patient visiting your website and asking a chatbot: “How do I prepare for my knee surgery?”
And instead of getting a generic answer from plain ChatGPT, the patient receives an answer grounded in your own internal documents.
The way you can do this is with a Retrieval Augmented Generation (RAG) architecture.
It's not as complex as it sounds, and I'm breaking down how this very popular solution works in today's edition of #CodetoCare, my video series on AI & ML.
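To make the flow concrete, here is a minimal sketch in Python. The embed, search_top_k, and ask_llm helpers are hypothetical placeholders standing in for an embedding model, a vector database, and an LLM; they are not any specific product's API.

```python
# Minimal RAG sketch (illustrative only). The helpers below are hypothetical
# placeholders for a real embedding model, vector database, and LLM.

def embed(text: str) -> list[float]:
    """Placeholder: turn text into a vector with an embedding model."""
    raise NotImplementedError

def search_top_k(query_vector: list[float], k: int = 5) -> list[str]:
    """Placeholder: return the k most similar document chunks from a vector DB."""
    raise NotImplementedError

def ask_llm(prompt: str) -> str:
    """Placeholder: send the assembled prompt to a large language model."""
    raise NotImplementedError

def answer(question: str) -> str:
    # 1. Retrieval: find the internal content most similar to the question.
    chunks = search_top_k(embed(question), k=5)

    # 2. Augmentation: the "prompt before the prompt" - instructions plus context.
    prompt = (
        "Answer the patient's question using only the context below.\n\n"
        "Context:\n" + "\n---\n".join(chunks) + "\n\n"
        "Question: " + question
    )

    # 3. Generation: the LLM writes the answer grounded in that context.
    return ask_llm(prompt)

# answer("How do I prepare for my knee surgery?")
```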
My next video will be on a use case of AI in healthcare - what do you want to hear about from me?
#AI #artificialintelligence #LLM #genai
Check out my LinkedIn: / donwoodlock
---
ABOUT INTERSYSTEMS
Established in 1978, InterSystems Corporation is the leading provider of data technology for extremely critical data in healthcare, finance, and logistics. Its cloud-first data platforms solve interoperability, speed, and scalability problems for large organizations around the globe.
InterSystems Corporation is ranked by Gartner, KLAS, Forrester, and other industry analysts as the global leader in Data Access and Interoperability. InterSystems is the global market leader in Healthcare and Financial Services.
Website: www.intersystems.com/
YouTube: / @intersystemscorp
LinkedIn: / intersystems
Twitter: / intersystems

Comments: 127
@dwoodlock 18 days ago
Since this video turned out to be so successful and several people asked me to do a deep dive / demo, here it is! Looking forward to reading your comments and hope you enjoy this one too. kzbin.info/www/bejne/hmnXgJ2fjqp5p7c
@hussamcheema 2 months ago
One of the best explanations of RAG on YouTube. Thanks, Don.
@NicolaiDufva a month ago
I agree. Most other explanations are either way too detailed, with live coding that muddles the information, or way too high-level, talking about how the LLM retrieves the additional data (which it doesn't! It is given to it via the prompt!)
@CodeVeda 2 months ago
Finally someone is explaining with a real-world example. Everyone else uses examples about fruits (apples, oranges, etc.) or movie names.
@eahmedshendy 2 months ago
Not confusing at all, just a simple, to-the-point explanation, thank you.
@christopherhunt-walker6294 10 days ago
Wow, he has explained this really clearly. This is the missing link for me between LLMs and making them actually useful for my projects. Thank you!
@bhaskarmazumdar9478 17 days ago
This is an excellent explanation of the concept. Thank you, Don.
@MuthukumaranPanchalingapuramKo 24 days ago
Best content on RAG!! Thank you!
@BAZ82 2 months ago
I found your video to be the most accessible and informative introduction to RAG, especially for those new to this topic.
@longship44 28 days ago
This is one of the best explanations of large language models and the value of utilizing RAG I have seen. Don, you are an outstanding communicator. Thank you for taking the time to put this together.
@user-ts2sj2dg8t 2 months ago
Thank you. You are the first to explain RAG well. I have heard about it a lot without understanding what it means.
@latentspaced a month ago
Appreciate you and your content. I'm glad I found you again.
@MateoGarcia-rt7xt 15 hours ago
Thanks for this great explanation, Don!
@MrNewAmerican 2 months ago
This is probably the best tutorial I have watched. Period. What an amazing teacher!
@arjbaid2024 2 months ago
Wonderful explanation of this topic. Thank you!
@vinayakminde8317 3 months ago
By far the simplest explanation of RAG I have come across. Amazing. Looking forward to the next videos in the series.
@joeytribbiani735 a month ago
The best explanation of RAG that I've found, thank you a lot.
@easybachha 2 months ago
Excellent explanation. Exactly what I was looking for! Thank you, Don!
@rahulkunal a month ago
Thanks for such a simple explanation of the RAG architecture concepts.
@MichaelRuddock 2 months ago
Thank you for sharing your knowledge with us, great explanation.
@govindarajram8553 5 days ago
I just watched this from YouTube suggestions, and you gave me a good explanation of Retrieval Augmented Generation, close to my use case. Thank you.
@herculesgixxer 2 months ago
Loved your explanation, thank you.
@bryanbimantaka 2 months ago
WOW! The simplest yet the best explanation! It's easy to understand for a beginner like me. THANK YOU!
@MrFrubez 2 months ago
Such a great explanation of RAG. It really helped me grasp the power of it.
@rafa_lopes 2 months ago
It was one of the most didactic explanations about RAG. Thank you, @Don Woodlock.
@aryankushwaha9306 a month ago
One of the best explanations I've ever found. Now I finally understand what RAG is. Thank you so much, Mr. Don.
@chesaku 24 days ago
Wow, job well done. A great, simple explanation of such a complex topic.
@AshisRaj a month ago
Excellent explanation, Mr. Author.
@achen94 2 months ago
Amazing video. Thanks for the great explanation!
@m.abdullahfiaz9635 26 days ago
Thanks, Prof. Don Woodlock. You have explained exactly what I needed to understand for my current project; every concept maps to the practical part of the project. Please share more of your knowledge on advanced and complex topics. 👍
@JamesNguyen-lt5bc a day ago
Awesome explanation. Thank you.
@Ak_Seeker 2 months ago
Awesome, thanks for the wonderful explanation in simple language.
@nadellaella6416 17 days ago
Best explanation! Thank you, Mr. Don!
@fire17102 a month ago
Would love it if you could showcase a working RAG example with live-changing data, for example an item price change or a policy update. Does it require manually managing chunks and embedding references, or are there better existing solutions? I think this really differentiates between a fun demo and actual production systems and applications. Thanks and all the best! Awesome video ❤
@_alphahowler a month ago
I would second that request with a real-world example where information changes, i.e. some information is outdated and some new information is added, without compromising the quality of the system.
@Themojii 7 days ago
Great explanation of RAG. I subscribed to your channel after watching this. Thank you, Don, for the great content.
@johnny1966m 2 months ago
Thank you very much for this video. Now I understand what my colleagues at work do with system documentation handling using an LLM. :)
@zandanshah a month ago
Good content, please share more.
@mattius459 17 days ago
Great, thank you.
@jasonkey7063 a month ago
Great explanation. I believe this has a big market for developers in small towns. Such an easy product to create and sell.
@PR03 2 months ago
Great session, dear Don. It was very complete, to the point, and more advanced than other popular videos, but of course in simple words. Thank you so much, sir. ❤❤
@DavidBennell 13 days ago
Great explanation. I have seen a lot of these, and people normally go into far too much detail and muddy the water, or are far too abstract, fast and loose, or just get it wrong. I think this is a great level to cover this topic at.
@abhilpnYT 2 months ago
Well explained, thank you ❤
@dannysuarez6265 2 months ago
Thank you for your great explanation, sir!
@shamimibneshahid706 a month ago
Clearly explained!
@ciropaiva1519 2 months ago
Incredible video! Thank you very much!
@IhorVasutyn a month ago
Very intuitive.
@steffenmuller2888 a month ago
I was looking for a general explanation of the RAG topic and you provided it very well! Now I understand that the quality of RAG systems strongly depends on the information retrieval from the vector database. I will try to implement a RAG system on my own to learn something about it. Thank you very much!
@itayregev4691 2 months ago
Thank you, Don.
@bigplumppenguin 2 months ago
Very good introduction!!!
@coopernelson6947 20 days ago
Great video. I feel like this is the first time I'm learning stuff that is at the cutting edge. This video was posted 2 months ago, very exciting times.
@gtarptv_ 5 days ago
Same here, I had no idea that RAG was a big deal. I've been reading stuff on Reddit with people talking about RAG this and that.
@stephenlii1744 a month ago
It's a pretty good explanation, thanks Don.
@itsAlabi a month ago
This is really clear; this will customize the output based on the user's own environment, not just on open-source data.
@EstevaoFloripa a month ago
Thanks a lot! Great and simple explanation!
@789juggernaut 2 months ago
Excellent video, really appreciate it.
@EGlobalKnowledge 2 months ago
Nice explanation, thank you.
@travelchimps6637 22 days ago
9:20 Not at all confusing, makes perfect sense the way you explained it, thank you!!!
@narendraparmar1631 a month ago
Thanks Don, that's informative.
@Arunkumar-234. a month ago
Great explanation! Thank you very much :)
@screenwatcher6224 a month ago
This is SOOOO GOOD
@kingofartsofficial4431 2 months ago
Very good explanation, sir.
@anoopaji1469 2 months ago
Very informative and simple.
@Deep185 3 months ago
Thank you!
@TournamentPoker 2 months ago
Great tutorial!
@ClayBellBrews a month ago
Great work; would really love to see you dig in on tokens and how they work as well.
@CollaborationSimplified a month ago
This was great, thank you! I believe this process is what Copilot for Microsoft 365 uses, and it is referred to as 'grounding'. Very helpful 👍
@MagusArtStudios 2 months ago
I've been doing RAG without even knowing the definition. Glad to see I wasn't doing it wrong by injecting it at the end of the prompt.
@inaccessiblecardinal9352 3 months ago
Doing RAG stuff right now for work. Just scratching the surface, but very interesting stuff so far. We have a few clients on the horizon who really just need text classification, and the vanilla results from the vector DB might actually be good enough for them. Interesting territory coming fast.
@dwoodlock 3 months ago
Yes - I have found that pretty small LLMs (like BERT) do just fine for text classification.
@FirstNameLastName-fv4eu 14 days ago
God save that patient on his/her knee surgery!!
@peterbedford2610 2 months ago
Sounds like it is optimizing or creating a more efficient prompt session? I guess "augmentation" is a fairly good description. Thank you. I enjoy your teaching style.
@speedycareer a month ago
Great knowledge, sir. Would you please tell us, can we integrate this data or these things into an app?
@joannaw3842 3 months ago
Thank you very much, finally someone has explained it in an accessible way. My question, as a tester: are there any weaknesses in such a solution that need to be taken into account when working with such systems?
@dwoodlock 2 months ago
Good question. There are two key points of failure that you want to think about from a testing point of view. Part 1 is whether the system is pulling the right documents to use as context. And Part 2 is whether the LLM, given the right documents, is giving a good answer to the question. Maybe teasing those two apart and testing them separately would be a good strategy.
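A sketch of how that split might look in practice, assuming hypothetical retrieve and generate helpers and a small hand-built list of test cases:

```python
# Sketch: test Part 1 (retrieval) and Part 2 (generation) separately.
# retrieve() and generate() are hypothetical stand-ins for your RAG pipeline.

test_cases = [
    {
        "question": "How do I prepare for my knee surgery?",
        "expected_doc_ids": {"knee-surgery-prep"},   # docs that should be retrieved
        "must_mention": ["fasting", "medication"],   # facts a good answer includes
    },
]

def retrieve(question: str) -> list[dict]:
    """Placeholder: return retrieved chunks, each with a 'doc_id' and 'text'."""
    raise NotImplementedError

def generate(question: str, chunks: list[dict]) -> str:
    """Placeholder: LLM answer given the retrieved chunks."""
    raise NotImplementedError

def test_part1_retrieval():
    for case in test_cases:
        retrieved_ids = {c["doc_id"] for c in retrieve(case["question"])}
        assert case["expected_doc_ids"] <= retrieved_ids, "wrong documents retrieved"

def test_part2_generation():
    for case in test_cases:
        # Feed known-good chunks so only the answering step is under test.
        good_chunks = [{"doc_id": d, "text": "..."} for d in case["expected_doc_ids"]]
        answer = generate(case["question"], good_chunks).lower()
        assert all(term in answer for term in case["must_mention"]), "answer missed key facts"
```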
@MadHolms a month ago
Great explanation, thanks, but it's only theory; can you please show a sample system where all of the above happens? Thanks!
@letseat3553 a month ago
The top 5 documents sounds very much like a TF-IDF / cosine similarity based query with a 'limit 5' added to get the top 5 matches on the query - the kind of result you can get from a simple MariaDB search on an FTS index these days. No need to overcomplicate it and involve an LLM at that stage. I do like how you describe it as 'the prompt before the prompt' - which is just the top 5 results.
@dwoodlock 9 days ago
Yes - though a language model will have a better sense of vocabulary, meaning it'll know that 'tired' and 'fatigued' are similar - TF-IDF won't unless you feed it Word2Vec vectors. And it can morph the word vectors based on the rest of the sentence and its order of words. But I agree with your point: you shouldn't overcomplicate things if simpler approaches work for your use case.
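A quick way to see the vocabulary point, assuming the sentence-transformers package and the all-MiniLM-L6-v2 model are available (any embedding model would do):

```python
# Sketch: semantic similarity catches synonyms that exact-word matching misses.
# Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = "I feel tired all the time"
doc_a = "Patients often report feeling fatigued after surgery"
doc_b = "Parking is available in the garage next to the clinic"

q, a, b = model.encode([query, doc_a, doc_b])
print("query vs fatigue doc:", cosine(q, a))   # expected: noticeably higher
print("query vs parking doc:", cosine(q, b))   # expected: lower
```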
@oryxchannel 3 months ago
Pinecone vector DB has done some revolutionizing of its website - driving costs down with new tech. It may affect how info is retrieved.
@dwoodlock 3 months ago
Yes - Pinecone is a leader in this area.
@geoiowa 27 days ago
Great explanation of RAG! What tool do you use for the drawing? Thanks.
@dwoodlock 9 days ago
It's a Revolution Lightboard.
@cerberus1321 28 days ago
Great video, thanks. I'm tasked with prototyping a product utilizing these methodologies for a client this quarter. I've not done it before, so this is very helpful. Is LangChain a tool that can handle this entire process? How much context can you provide an LLM without restricting it? Also, how do we actually bottle the raw database query results for summarization, assuming not all questions will relate to qualitative data?
@dwoodlock 9 days ago
Yes - LangChain can be a great help. I mostly didn't use it for this video because I wanted to explain the underlying concepts.
@hebol 2 months ago
Just found you and your great content. Have to ask: do you mirror-write… or, thinking about it, do you just mirror the video? :-)
@dwoodlock 2 months ago
No, I don't mirror-write - that would be way too complicated! I'm speaking behind a glass and writing on it naturally, and it reverses everything. That's why I don't wear my Nirvana T-shirt when filming.
@hebol 2 months ago
@dwoodlock It wouldn't be impossible. I had a lecturer who wrote with both hands interchangeably. You absolutely have that uniqueness (if that is a word… I'm Swedish). But you mirror the video so that we can see the text, right? You are right-handed, aren't you… it looks like you are left-handed… but regardless, I love your content. The best of lecturers, theoretical and practical!
@morespinach9832 2 months ago
The revelation for me is that "our own data" is in fact added as a prompt before the prompt, and not after the LLM has responded. Is this correct? Secondly, any vector database recommendations for storing our own very unstructured PDF documents? Do we need specialized stuff like Pinecone (which sadly is hosted SaaS only), or would Neo4j-type stuff work too… or Elasticsearch?
@DhavalPatel12 20 days ago
Thanks for explaining in detail with a relevant example. Why not just train the LLM with your data in the first place? That would simplify the architecture.
@dwoodlock 9 days ago
Yes - it would. But you may not have enough text to teach it all the intricacies of the English language, and the original training is very, very expensive computationally. So it's better to start with a model that somebody has trained first and then fine-tune it or feed in context like I described.
@mtb_carolina 2 days ago
Let me ask... do you have any methodologies to keep the chunks from the internal database private for the RAG recall with the LLM, if those internal databases that the RAG system is pulling from are confidential? I've been grappling with this... any insights will be much appreciated. Thank you!
@JohnTurner313 2 months ago
Very clear and helpful. The question I have: if I create a RAG system using my own content (e.g. the contents of my cloud drive), how do I prevent that data, which may include PII, HIPAA data, and other protected information, from being used by an AI provider like OpenAI? Anything I send to a third-party AI LLM will be used by them for training their own model, which in turn leads to high-risk leakages to other people who aren't me. It seems like the only way to do this is to have a restricted, private LLM running locally on my laptop or home network.
@dwoodlock 2 months ago
You need an agreement with these cloud service providers that enables you to send PHI to them. Some offer this as one of their services. If you don't have that agreement, you cannot do it (in most countries). It is also an option to run a model on-prem, and there are decent open-source LLMs that you can use for certain use cases.
@morespinach9832 2 months ago
@dwoodlock Which ones can be self-hosted - BERT, RoBERTa? Which ones are good, I mean. Also - do we have to keep these models up to date on-prem by downloading them again in the future as new versions emerge?
@seva723 2 months ago
soooooo lit
@imranideas 29 days ago
Hi, I have passed the content of a PDF file to the LLM and it does come up with a relevant response; however, the response is still generic in nature relative to the content I provided. What I need is a crisp, to-the-point response, like the steps required to activate a SIM card. Can you help me achieve this?
@didyouknowtriviayt 2 months ago
I built a system like this last year with OpenAI, Pinecone, and Python.
@pritampatil6056 2 months ago
Father of AI for a reason!!
@djl3009 2 months ago
I guess it doesn't hurt to have a likeness to Geoffrey Hinton if you are an AI practitioner :)
@dwoodlock 9 days ago
Ha.
@lxn7404 7 days ago
I wonder what the role of the LLM is in creating a vector; it looks like simple indexation.
@theindubitable a month ago
I have a problem with the model not changing context. It fills the token cap, and then when I ask another question it won't update the chunks every time. How can I solve this? Maybe prompting.
@joemiller9856 a month ago
Don, can RAG be used to protect a company's private, sensitive data (trade secrets, etc.) by essentially translating this private data into numerical information (vectors) while using a publicly available LLM? I suppose the responses from these prompts may potentially expose sensitive information as well?
@dwoodlock 9 days ago
Even though the RAG approach turns words, sentences, and paragraphs into numbers, that doesn't mean they are private; in other words, you can call the tokenizer in reverse. So you need to treat these the same way when considering whether to send them to a cloud LLM service.
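A small illustration of that reversibility, assuming OpenAI's tiktoken tokenizer package:

```python
# Sketch: token IDs are not an anonymization step - they decode right back to text.
# Assumes: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

phi = "Jane Doe, DOB 1970-01-01, scheduled for knee surgery"  # made-up example record
token_ids = enc.encode(phi)
print(token_ids)              # just numbers...
print(enc.decode(token_ids))  # ...but trivially reversible to the original text
```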
@labsanta 3 months ago
- [00:00 - 02:20](kzbin.info/www/bejne/q2WaeKeOrMqDo9U) 🧠 Introduction to Retrieval Augmented Generation (RAG) - what RAG is, its common use for improving users' experience with language models, and its applications in question answering and content generation.
- [02:20 - 06:11](kzbin.info/www/bejne/q2WaeKeOrMqDo9U) 🤖 Components of RAG - the structure of a request (prompt) to a language model, the importance of the instructions placed before the prompt, and the process of selecting relevant content from the database and including it in the request.
- [06:11 - 10:58](kzbin.info/www/bejne/q2WaeKeOrMqDo9U) 🔄 Vectorization and retrieval in RAG - how content is vectorized to make it numeric and comparable, how relevant documents are searched for and selected in the database, and how RAG improves answers generated from the retrieved content.
I hope this breakdown into sections helps in understanding Retrieval Augmented Generation (RAG).
@michaelcharlesthearchangel a month ago
RAG // RSI from the Matrix
@worldof_AG 2 months ago
Please create a video of an entire project to create a chatbot using RAG.
@dwoodlock 9 days ago
I have one now: kzbin.info/www/bejne/hmnXgJ2fjqp5p7c
@worldof_AG 9 days ago
@dwoodlock Thanks!
@JamesSpellos 2 months ago
So is a GPT that people can easily build essentially an architecture that uses a RAG approach, and if so, does it create a vector database from the documents the user uploads?
@dwoodlock 9 days ago
Yes, basically. And it also allows the user to put in custom instructions for their GPT.
@JamesSpellos 9 days ago
@dwoodlock Thank you for the confirmation & clarification. Appreciate it.
@ramsescoraspe 2 months ago
What if the "content" is a relational database?
@dwoodlock 2 months ago
Relational data is a fine source for this as well. Generally speaking you would 'textify' that data from the relational database to make it part of the prompt. For example when we use this approach with patient medical records, we create a text version of the medical record (which contains structured fields in a relational database) and use that text as part of the RAG model. You don't have to turn everything into complete sentences, but some textual form gives the LLMs the best chance of understanding structured data.
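A sketch of that 'textify' step, using Python's built-in sqlite3 with a made-up table so the structured row becomes a text chunk an embedding model and LLM can work with:

```python
# Sketch: turn a relational row into plain text before embedding it for RAG.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE appointments (patient TEXT, procedure TEXT, date TEXT, location TEXT)")
conn.execute("INSERT INTO appointments VALUES ('Jane Doe', 'knee surgery', '2024-06-12', 'Main Campus OR 3')")

patient, procedure, date, location = conn.execute(
    "SELECT patient, procedure, date, location FROM appointments"
).fetchone()

# Full sentences aren't required - just a readable textual form of the structured fields.
chunk = f"Patient {patient} is scheduled for {procedure} on {date} at {location}."
print(chunk)  # this text is what gets embedded and later placed into the prompt
```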
@serdalaslantas a month ago
Hi, I have a problem with my chatbot. It doesn't have a memory and doesn't remember my previous conversations! How do I solve this issue? Does a RAG system solve this problem? Thanks for your answer.
@KrungThaiBank 14 days ago
You can save the question and answer for the next question.
@dwoodlock 9 days ago
Yes - I didn't demonstrate this, but RAG systems also summarize the prior conversation (or the last few questions) and send that as context. So if you ask "Do you have parking?" and then "How much does it cost?", it knows that you are referring to parking vs. your knee surgery, for example.
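A sketch of what that can look like when the prompt is assembled; the summarization step itself would typically be another LLM call, but here it is just a hand-written string for illustration:

```python
# Sketch: prior turns are summarized and sent along as extra context.
conversation_summary = (
    "The user previously asked whether the clinic has parking; "
    "the assistant said yes, in the garage on Elm Street."
)
retrieved_chunks = ["Garage parking is $5 per day for patients and visitors."]
question = "How much does it cost?"

prompt = (
    "You are a helpful assistant for a clinic.\n\n"
    f"Conversation so far: {conversation_summary}\n\n"
    "Context from our documents:\n" + "\n".join(retrieved_chunks) + "\n\n"
    f"Question: {question}"
)
print(prompt)  # 'it' now unambiguously refers to parking, not knee surgery
```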
@NISHANTSUTAR-fi9xe 12 days ago
If I don't have any relevant data in my documents, then how do I avoid responding to that question, instead of giving a random, unrelated answer?
@dwoodlock 9 days ago
Just add that to the prompt. "If you are not finding any relevant information in the context that I provided, please don't answer the question". That sort of thing.
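For example, the instruction block placed ahead of the retrieved context might look something like this (the wording is illustrative, not a magic formula):

```python
# Sketch: a guardrail instruction that goes in the "prompt before the prompt".
instructions = (
    "Answer only from the context below. "
    "If the context does not contain the information needed, "
    "say you don't know rather than guessing."
)
```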
@herotube789 2 months ago
Is this what's called data embedding?
@dwoodlock 2 months ago
Yes - embedding is a key foundation that makes this work. Embedding basically takes the 'essence' of a paragraph of text and turns it into a string of numbers. ML models really just work on numbers, so this is a clever process to do that.
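To make "a string of numbers" concrete, here is what one of those vectors looks like, again assuming the sentence-transformers package:

```python
# Sketch: an embedding is just a fixed-length list of floats.
# Assumes: pip install sentence-transformers
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
vector = model.encode("How do I prepare for my knee surgery?")
print(len(vector))  # e.g. 384 dimensions for this model
print(vector[:5])   # the first few of those numbers
```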
@asapfilms2519 2 months ago
All human beings are large language models themselves. :)
@dwoodlock 9 days ago
I guess so.
@nunobartolo2908 2 months ago
Still limited by the max prompt size.
@dwoodlock 2 months ago
Yes - the prompt size limit is one of the reasons the RAG model was invented. Otherwise you could just send all your content along with the prompt, theoretically at least.
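A sketch of how a RAG pipeline might respect that limit, counting tokens with tiktoken; the budget number here is arbitrary and illustrative:

```python
# Sketch: only keep as many retrieved chunks as fit the model's context budget.
# Assumes: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
TOKEN_BUDGET = 3000  # arbitrary illustrative budget for retrieved context

def pack_chunks(ranked_chunks: list[str], budget: int = TOKEN_BUDGET) -> list[str]:
    """Take chunks in relevance order until the token budget is spent."""
    packed, used = [], 0
    for chunk in ranked_chunks:
        cost = len(enc.encode(chunk))
        if used + cost > budget:
            break
        packed.append(chunk)
        used += cost
    return packed
```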
@nunobartolo2908 2 months ago
@dwoodlock There is a better way: convert your documents into a question-answer dataset and fine-tune on it; the model can memorize an unlimited amount of content this way.
@MDNQ-ud1ty 2 months ago
What I worry about is that many people will be using these tools as a substitute for their own brain and educational experience. They will look intelligent on the surface, but there will be a feedback system being created which will amplify their ignorance. That is, as these people use AI to generate knowledge for them, and as the AI itself is flawed, new AI will be trained on the knowledge they generate. We already have this problem to some degree, which is part of what makes modern AI flawed (and humanity in general, as this is a problem that is independent of AI). Eventually we will have people who say "But the AI says you are wrong, so you are wrong" to someone who has proof or has researched the topic. We already have such issues, so with time the reinforced feedback mechanism will only amplify such problems. Ultimately cults will form around AI, where AI becomes god, and when AI is wrong it is not wrong, you are. This will happen. It is almost surely guaranteed. The only question is how far it will go. The worst thing about all this is that the way the information is hacked up will surely create misinformation, which will be disguised through the LLM's "re-hashing". The entire point of RAG seems to be to generate more realistic output rather than necessarily providing more information.