Prompt Engineering, RAG, and Fine-tuning: Benefits and When to Use

105,967 views

Entry Point AI

1 day ago

Comments: 107
@XXCDMXX · 1 year ago
It's really fresh and accurate that you say "LLMs don't store facts, they just store probabilities." By the way, it's an excellent video with a wonderful set and a straightforward script and presentation.
@steve_wk · 1 year ago
This is a very high quality explanation of the relationships between prompts, RAG and fine tuning. Really well done - thanks for taking the time to make this very clear.
@abtix · 1 year ago
This is some solid information you're providing in this video; I'm actually surprised it has such a low view count. For anyone seeking information on the topics covered here, it's very well presented and explained, and I really appreciate that.
@altrubalag · 1 year ago
This is simple and super useful. Remember, putting it in simpler terms is not an easy job. Simplicity is the ultimate sophistication! 👏
@JoshVonSchaumburg · 7 months ago
I work in technical presales and delivery management. I probably watched 10 YouTube videos trying to better understand RAG vs. fine-tuning (which I'm now calling TAG :)). This was by far the best explanation. I'm sending it to all my coworkers!
@EntryPointAI · 7 months ago
Awesome!! Glad I could help :)
@linluxv2275 · 1 year ago
Thank you for explaining the differences between prompt engineering, RAG, and fine-tuning. Your explanation is easy to understand, which is very helpful for those who are new to AI. I will continue to follow your updates!
@lifechamp007 · 1 year ago
Finally I understood RAG and fine-tuning clearly. Thank you for the awesome video!!
@marksaunders8430 · 9 months ago
Hands down, the best video I've found for this topic
@jong-keyongkim565 · 9 months ago
The most efficient and informative content on those terms and their respective usage. 👍
@routergods · 4 months ago
I've been binge-watching LLM-related videos, and most have been regurgitations of docs or (probably) GPT/AI-generated fluff pieces. This video clearly explained several concepts I'd been trying to wrap my head around. Great job!
@someguyO2W · 4 months ago
Similar sentiments. He actually explained it properly.
@bobrarity · 3 months ago
Thanks, bro. I'm new to fine-tuning and needed to migrate my RAG setup to the RAGTAG approach, and your video was very clear and helped me a lot 😊
@EntryPointAI · 3 months ago
Sick, glad to hear it!
@pantoffelslippers · 6 months ago
Probably one of THE BEST large language model videos I’ve seen. 😮
@luckelly2378 · 18 days ago
Such an incredibly clear presentation! Thank you
@scottyb3b7 · 10 months ago
I work in this field, and he did an exceptional job here. I would have liked to see FLARE covered as an extension of RAG.
@someguyO2W · 4 months ago
Best video I've seen on the topic. Thank you!
@upnyx-inno · 1 year ago
Awesome explanation, man. Like, perfect: it clears up all the confusion around LLMs in 15 minutes. All the very best for Entry Point AI. Keep rocking.
@EntryPointAI · 1 year ago
Thanks so much!
@Mel-lp4hz · 7 months ago
Very nice and clear explanation, thanks!
@ShresthShukla-h9n · 11 months ago
This channel will be super useful in the coming days.
@pratikmandlecha6672 · 2 months ago
Loved the explanation. Despite knowing these terms well, I was curious to see how it would be explained and I am glad that I watched this video
@deepstum · 6 months ago
Excellent explanation. Best wishes for your professional endeavors. While I rarely comment on YouTube videos, this one deserves all the praise.
@EntryPointAI · 6 months ago
Thank you very much!
@jb-mk5ln · 3 months ago
Incredibly high quality content, thank you.
@rameshpjain · 2 months ago
Such a simple way to explain. Thank you.
@BorHouse · 7 months ago
Tomorrow I have a presentation on RAG and you are like an angel to me right now 😅
@brianhauk8136 · 11 months ago
Thank you so much for your clear and useful explanation of the relationships between prompts, RAG, and fine-tuning. How affordable is fully-fledged RAG retrieval for a chatbot, say, for a dental office website using GPT-4? And is it likely to be fast enough, going through many steps, to provide site visitors with quality information before they give up waiting and leave? Maybe we need to first give them a quick, useful, partial answer while letting them know that additional information will be available in approximately 15 seconds? I'm concerned visitors will abandon the site and that compute costs may be too high for most business owners. Does anyone have experience with this?
@EntryPointAI · 11 months ago
It depends on how many steps you need to optimize the RAG workflow; each step adds latency. If you also need a quality check (e.g., LLM self-review) before responding with the answer, you can't stream tokens in real time, which makes it feel much slower. However, lots of people are doing this for chatbots, and the basic retrieval lookup should be blazing fast since it's essentially just a database query. For a dental office website, I would not try to build RAG from scratch unless it was a nationwide chain, because I think you'd need either a dev team or a really good developer, and costs could add up fast. Either way, I'd start with the simplest solution and try to leverage something that has already been built. Fixie has some cool off-the-shelf RAG solutions you could look into: www.fixie.ai/
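The "retrieval lookup is essentially a database query" point above can be sketched in a few lines. This is a toy illustration, not any particular vector database's API: the two-dimensional embeddings, chunk texts, and function names are all invented, and a real system would embed the query with an embedding model rather than hard-code vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    """The core RAG lookup: rank stored chunks by similarity to the
    query embedding and return the top-k chunk texts."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item["vec"]),
                    reverse=True)
    return [item["text"] for item in ranked[:k]]

# Tiny stand-in for a vector store (hypothetical dental-office chunks).
store = [
    {"text": "We are open Mon-Fri, 8am-5pm.", "vec": [0.9, 0.1]},
    {"text": "Teeth whitening costs $199.",   "vec": [0.1, 0.9]},
    {"text": "We accept most PPO insurance.", "vec": [0.5, 0.5]},
]

print(retrieve([1.0, 0.0], store, k=1))  # → ['We are open Mon-Fri, 8am-5pm.']
```

The expensive parts of a production pipeline (embedding, re-ranking, self-review) wrap around this lookup; the lookup itself stays cheap.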
@brianhauk8136 · 11 months ago
@EntryPointAI I didn't know that building RAG from scratch today requires a dev team or a really good developer to get specific responses. In your video you say to include the required specific information in the prompt so responses stay grounded in (and true to) the reality contained in the documents. Can fine-tuning provide similarly grounded responses, or is our only option very long, expensive prompts? And is there any reliable approach to getting specific, accurate responses when querying hundreds or thousands of documents? My understanding is that ChatGPT-4 allows only up to 20 documents, and that most LLMs aren't good at accurately retrieving and using content in the middle of longer context windows and documents. Also, are training methods that go beyond prompt/completion pairs more effective?
@EntryPointAI · 10 months ago
Fine-tuning can help prevent the model from answering questions that are out of scope for its task. It can also teach it how to respond to certain inquiries in a "customer service" type of tone. Whether fine-tuning can handle it depends on the nature of the task and how much domain-specific knowledge is required. 99.9% of all model knowledge is learned during pretraining, so if you can quiz the LLM and get back accurate results without RAG, then fine-tuning can be sufficient to customize it for your use case or task. The documents get split into chunks and stored in a database, so how they are split and logically separated has a big impact on how well your retrieval works. You can also train a custom embeddings model, or experiment with different off-the-shelf embeddings models, to try to improve the retrieval results; OpenAI just came out with some new ones. I'm sure there are other training methods out there, but the primary way to train LLMs is with prompt/completion pairs. With chat-based models, you can also train on multi-turn chat, but all that's happening under the hood is inserting special tokens between each chat message and labeling them (ChatML, for example, is the markup language OpenAI uses).
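The "special tokens between each chat message" idea can be illustrated with a small formatter. The `<|im_start|>`/`<|im_end|>` delimiters below follow OpenAI's published ChatML convention; treat the rest (function name, exact whitespace) as a sketch rather than the precise serialization any API guarantees.

```python
def to_chatml(messages):
    """Serialize a multi-turn chat into ChatML-style text, wrapping each
    message in special tokens that label who is speaking."""
    return "\n".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages
    )

chat = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is RAG?"},
    {"role": "assistant", "content": "Retrieval-Augmented Generation."},
]
print(to_chatml(chat))
```

Under the hood, "multi-turn training" is still next-token prediction over this flattened text; the special tokens just let the model learn where one speaker stops and another starts.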
@robertomeers · 1 year ago
Excellent info. Very concrete and clear; the examples are simple but well suited to each case. RAGTAG team 👍 Subscribed 🔔
@darrin.jahnel · 8 months ago
This is one of the best AI videos I've seen (and I've watched hundreds). Great job!
@abdelrahmanmagdi6767 · 14 hours ago
Just a perfect explanation!
@Franchise-infoCa · 1 year ago
Very nice explanation. Thanks
@andrewsuttar · 1 year ago
Superb video man! This is exactly the content that fits what I've been aiming to achieve. And I love the RAGTAG acronym 😂👍
@EntryPointAI · 1 year ago
Awesome!! Thank you 😃
@JoanApita · 5 months ago
This is what I was looking for before jumping into coding. Thanks, man.
@neiltaggart1 · 9 months ago
Very clear & simple - well done!👏
@mohammedAlbared · 4 months ago
Great video with an amazing explanation! Thank you for sharing.
@CodersArch · 1 year ago
Thank you for an educational video
@thunkin-ai · 9 months ago
Nice explanation! I signed up for the masterclass, but it's 4 a.m. my time; it might be a challenge.
@Nanichinni-wb4fj · 5 months ago
This is crystal clear ❤. Awesome!
@hoyinleunghk · 20 days ago
Excellent explanation, thanks a lot!
@AryanKumarBaghel-cp1jv · 3 months ago
Perfect differentiation. Thank you.
@Aishakamath · 9 months ago
This was really really helpful! Thanks for the video😊
@aldotanca9430 · 1 year ago
Very clear, thanks!
@jachymdolezal5103 · 1 year ago
Great video, thanks!
@saraswathisripada · 9 months ago
Very well explained, thank you.
@windhusky4009 · 1 year ago
This is really good!
@zebirdman · 9 months ago
Love this video, thank you! Question: when RAG is used, you say the retrieved knowledge shows up in the context window (i.e., where the user types the prompt). However, my understanding/experience has been that once the knowledge is provided to the LLM via a vector database, the user can just type the prompt/query and nothing else is needed. Can you provide an example of a user's prompt/query when using RAG?
@EntryPointAI · 9 months ago
The knowledge has to be retrieved for each new user inquiry, and inserted into a prompt alongside it each time.
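In other words, the user still types a plain question; the RAG layer rebuilds the full prompt behind the scenes on every turn. A minimal sketch of that assembly step (the template wording and example chunks are invented for illustration):

```python
def build_rag_prompt(question, chunks):
    """Assemble the prompt the LLM actually sees on each turn:
    freshly retrieved context plus the user's raw question."""
    context = "\n".join(f"- {c}" for c in chunks)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# The user typed only the question; the chunks came from retrieval.
chunks = ["Teeth whitening costs $199.", "We are open Mon-Fri."]
print(build_rag_prompt("How much is whitening?", chunks))
```

The user never sees (or types) the context block; it is injected between their query and the model on every request.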
@guapshonen · 6 months ago
Hi. Can Entry Point be integrated with Voiceflow?
@EntryPointAI · 5 months ago
I'm not familiar with Voiceflow, so I'm not sure.
@coachdennis6139 · 1 year ago
Nice. Thanks.
@bernard2735 · 11 months ago
Great explanation, thank you.
@devtest202 · 9 months ago
Excellent material. I'd like to ask: would you consider it valid to fine-tune GPT-2 for these purposes? On the other hand, do you have examples of how you've trained a model? It's very interesting what you say about seeing good results with only a few examples: for instance, if I want to train it to act like a salesperson and always respond with a particular logic when asked about a certain type of product. I also think the final part is great, where you say the ideal setup would be RAG supplying only the variables, with the prompt behavior already trained in.
@EntryPointAI · 9 months ago
I haven't tried GPT-2; I would stick to more recent models for best results. A sales agent might be a good use case to combine with RAG, where relevant product info is looked up and inserted into the prompt, and fine-tuning teaches the model how to "sell" it.
@funkyflorion · 7 months ago
Good summary, thanks!
@SarabjitMadan · 10 months ago
This was just brilliant. Thanks so much.
@Aspect0529 · 7 months ago
Fantastically done!
@tunairaiol · 11 months ago
Perfect video, thank you.
@bryanbimantaka · 9 months ago
Thank you! This is what I need for my thesis. You explained it in an easy yet precise way. If I may make a request, could you do a video on designing/building a closed-domain chatbot using RAG for prompting, with a pre-trained LLM like GPT, BERT, or LLaMA? Please 😅 Anyway, subscribed!
@EntryPointAI · 9 months ago
Glad you found it helpful! That sounds like a fun topic, I will try to get to it eventually. It might be a while, though :)
@VerdonTrigance · 10 months ago
Hey! Nice and concise explanation of the different techniques, but there's one thing I didn't get. If I want a big LLM to handle my specific task better, answering open questions about the content of a book (for example, The Lord of the Rings), and I want to fine-tune it, do I always have to provide structured data in the form of questions and answers? For a book, that seems like an impossible task; the answers are exactly what I want to get out of the model, since I don't know them myself, right? So the question is: can an LLM be fine-tuned just by giving it new content (like book chapters, one by one)?
@EntryPointAI · 10 months ago
You can, but it's not a practical way to teach knowledge. More likely the model will learn the writing style without gaining the ability to retrieve specific facts. LLMs learn almost all of their knowledge during pretraining.
@trackerprince6773 · 1 year ago
What's the difference between custom GPTs and fine-tuning GPT?
@EntryPointAI · 1 year ago
Custom GPTs do not retrain the underlying model in any way; they use prompt engineering, plus RAG if you upload documents.
@PopeCap · 1 year ago
I love the idea of TAG, but I have a question. I'm not quite sure what you mean by taking all the examples out of the prompt and into a dataset. Don't they still land in the prompt? Your graphic at 12:00 implies they're in a DB somewhere, but then isn't that just RAG again?
@EntryPointAI · 1 year ago
Thanks for your question! The database icons might be misleading there. The intent on that slide is to show a training dataset that is used to fine-tune a model on examples of the task.
@PopeCap · 1 year ago
@EntryPointAI Ah, OK. So the examples go into a fine-tuned model that's separate from the base LLM?
@tituspowell · 5 months ago
Very helpful - thank you!
@past_life_project · 11 months ago
Wowzer! That was brilliant! Subscribed!
@FernandoOtt · 7 months ago
Awesome content, Mark. A question: I need to create an AI psychologist and store college data, but this data is more of a guide for what to say than the content itself. In that case, which is the better approach, RAG or fine-tuning?
@EntryPointAI · 7 months ago
If it's "how" to deliver the content, most likely fine-tuning.
@FernandoOtt · 7 months ago
Thanks, Mark. I still wonder whether fine-tuning works there, since this "how" isn't about personality; it's more like "use technique X or Y" to continue the interaction. Do you think fine-tuning is still the right approach?
@martyn_health · 11 months ago
Brilliant, bro. So good, actually.
@RolandoLopezNieto · 10 months ago
Great video, sir.
@trackerprince6773 · 1 year ago
Clear and simple 👍. For GPT-3.5 Turbo fine-tuning, do you always need a user-role value, or does it make sense to sometimes provide just the system and assistant values? I'm curious because if all you want to fine-tune is tone and style, doesn't it make sense to show how the AI should respond without the user's input words carrying any weight?
@EntryPointAI · 1 year ago
I haven't tried this specifically, but it is possible to pass empty prompts (system and user) and teach the model to produce the output style given no information. Usually what I do is use a minimal system prompt (longer for smaller training datasets) and then use GPT-4 to generate user inputs from my assistant completions, in the way I would ultimately want to prompt the model (generally minimalist inputs, or rough notes for a given output). Note that the model isn't learning much, if anything, about style and formatting from your input; it's just using it as context for what kind of output to generate. The prompt learning rate is usually 0 or a nominal value like 0.01.
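One way to picture that setup is a training file where the system prompt stays minimal and the user turn carries rough notes. The `messages` schema below matches OpenAI's chat fine-tuning JSONL format; the example content, note style, and system prompt are invented for illustration.

```python
import json

# One training example per JSONL line: minimal system prompt,
# rough-notes user input, polished assistant completion.
examples = [
    {"messages": [
        {"role": "system", "content": "You write product blurbs."},
        {"role": "user", "content": "notes: waterproof jacket, 3 colors, $89"},
        {"role": "assistant",
         "content": "Stay dry in style. Our $89 waterproof jacket "
                    "comes in three colors."},
    ]},
]

jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl)
```

At inference time you would then prompt the fine-tuned model with the same kind of minimalist notes you trained on.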
@trackerprince6773 · 1 year ago
@EntryPointAI I'd definitely like to test whether the style or tone differs, since my assumption is that the input context should have some effect. But maybe for many use cases, having an input (even a synthetically generated one) is better. I was watching another video ( kzbin.info/www/bejne/g3vQZWyKdt6ta9kfeature=shared ) where a podcast transcript was used to train for capturing the host's tone and style; the creator found the style to be more similar to the host when the guest's sentences were removed from training.
@soharad-o8n · 11 months ago
This is really informative and a well-organized presentation. Could you please point me to material or a link on how to implement RAGTAG? I'm also wondering how much data we should provide for fine-tuning in the RAGTAG case.
@EntryPointAI · 11 months ago
I don't have a more detailed guide to implementing RAGTAG, but I'll take this as a request to prepare one 🤙 If you can prepare 24-48 high-quality and diverse examples, that's enough to see benefits from fine-tuning. See this paper: arxiv.org/abs/2305.11206 A video on "How much data do you need for fine-tuning?" is coming soon!
@soharad-o8n · 11 months ago
@EntryPointAI Cool 👍 Thanks a lot 🙏🙏
@sivavenu5100 · 9 months ago
Very clear explanation.
@hernandosierra8759 · 10 months ago
Excellent. Thank you.
@stevenwessel9641 · 5 months ago
Nice logo, my guy.
@italoaguiar · 7 months ago
Excellent video!! Congrats and thanks! +1 subscriber ;)
@Blogservice-Fuerth · 8 months ago
Great content 🙏🏽
@astrodysseus · 1 year ago
Are you using the Apple Vision to record yourself?
@EntryPointAI · 1 year ago
Nope, Canon R6 Mk II.
@astrodysseus · 1 year ago
@EntryPointAI OK, I was asking because it looks like there's some heavy post-processing, but it could be the result of something else. Cheers.
@cprashanthreddy · 11 months ago
Good video... 😀
@yesid3777 · 4 months ago
Thanks, an amazing explanation! +1 sub.
@JavierTorres-st7gt · 6 months ago
How do you protect a company's information with this technology?
@EntryPointAI · 5 months ago
It depends on the situation and how you want to apply the company information. If you mean preventing it from leaking in chats, I'd suggest cleaning anything sensitive out of your data before training an LLM on it or computing embeddings.
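A first pass at that kind of cleaning can be as simple as regex redaction before the text ever reaches a training set or embedding job. The patterns below are illustrative and deliberately narrow (US-style phone numbers, simple email addresses); real PII scrubbing usually needs a dedicated tool and review.

```python
import re

# Hypothetical redaction patterns; tune for your data before relying on them.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text):
    """Replace obvious emails and phone numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact("Call 555-123-4567 or email jane@acme.com"))
# → Call [PHONE] or email [EMAIL]
```

Run this over every document before chunking/embedding (and over every fine-tuning example before upload), so the sensitive values never enter the model or the vector store.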
@seanmeverett · 6 months ago
Such a key point: probabilities, not facts. They’re quantum in nature, not binary.
@dilippalwekar1223 · 1 year ago
Loved it.
@micbab-vg2mu · 1 year ago
Nice.
@gehdochnicht · 11 months ago
Those eyes though 😍
@PlayOfLifeOfficial · 6 months ago
What I learned: we are nowhere near AGI.
@tanveeriqbal6680 · 9 months ago
Very clear explanation in its simple form, thanks @entrypointai.
@FunnyVideos-ni4iu · 9 months ago
Very helpful! Thanks.
@Jas.in-bx3eg · 7 months ago
Uncanny, avatar moderator.