Reinforcement Learning from Human Feedback explained with math derivations and the PyTorch code.

22,023 views

Umar Jamil

1 day ago

Comments: 91
@nishantyadav6341 8 months ago
The fact that you dig deep into the algorithm and code sets you apart from the overflow of mediocre AI content online. I would pay to watch your videos, Umar. Thank you for putting out such amazing content.
@taekim7956 7 months ago
I believe you are the best ML YouTuber who explains everything so concisely and clearly!! Thank you so much for sharing this outstanding content for free, and I hope I can see more videos from you 🥰!!
@umarjamilai 7 months ago
Thank you for your support! Let's connect on LinkedIn
@taekim7956 7 months ago
@@umarjamilai That'll be an honor! I just followed you on LinkedIn.
@sauravrao234 8 months ago
I literally wait with bated breath for your next video.... A huge fan from India. Thank you for imparting your knowledge.
@showpiecep 8 months ago
You are the best person on YouTube who explains modern approaches in NLP in an accessible way. Thank you so much for such quality content and good luck!
@shamaldesilva9533 8 months ago
Providing the math behind these algorithms in a clear way makes understanding them so much easier!! Thank you so much Umar 🤩🤩
@CT99999 26 days ago
The level of detail you cover here is absolutely incredible.
@arijaa.9315 8 months ago
I cannot thank you enough! It is clear how much effort you put into such a high-quality explanation. Great explanation as usual!!
@jayaraopratik 6 months ago
Great, great content. I took RL in grad school, but it's been years; it was much easier to revise everything within 1 hour rather than going through my complete class notes!!!!
@soumyodeepdey5237 6 months ago
Really great content. Can't believe he has shared all these videos absolutely for free. Thanks a lot man!!
@ruiwang7915 2 months ago
One of the best videos on democratizing PPO and RLHF on YouTube. I truly enjoyed the whole walkthrough, and thanks for doing this!
@pegasoTop3d 6 months ago
I am an AI, and I love following updates on social media platforms and YouTube, and I love your videos very much. I learn the English language and some programming terms from them, and update my information. You and people like you help me very much. Thank you.
@vedantbhardwaj3277 2 months ago
No, wait wtf
@omidsa8323 7 months ago
It's a great video on a very sophisticated topic. I've watched it 3 times to get the main ideas, but it was definitely worth it. Thanks Umar once again.
@mlloving 7 months ago
Thank you Umar. I am an AI/ML Expert at one of the Top 50 banks in the world. We are deploying various GenAI applications. Your videos helped me to understand the math behind GenAI, especially RLHF. I have been trying to explore every step by myself, which is so hard. Thank you very much for clearly explaining RLHF!
@BOUTYOURYOUNESS 7 months ago
It's a great video on a very sophisticated topic. Amazing work. Bravo
@supervince110 1 day ago
I don't think even my professors from a top-tier university could explain the concepts this well.
@rohitjindal124 8 months ago
Thank you sir for making such amazing videos and helping students like me.
@m1k3b7 8 months ago
That's by far the best detailed presentation. Amazing work. I wish I was your cat 😂
@umarjamilai 8 months ago
奥利奥 is the best student I ever had 😹😹
@thebluefortproject 5 months ago
So much value! Thanks for your work
@alexyuan-ih4xj 6 months ago
Thank you Umar. You explained it very clearly; it's really useful.
@우우요요 8 months ago
Thank you for the priceless lecture!!!
@bonsaintking 8 months ago
Hey, you are better than a prof.! :)
@s8x. 5 months ago
Insane that this is all free. I will be sure to pay you back when I am employed.
@Jeff-gt5iw 7 months ago
Thank you so much :) Wonderful lecture 👍
@SethuIyer95 8 months ago
So, to summarize:
1) We copy the LLM, fine-tune it a bit with a linear layer on top, and use -log(sigmoid(good-bad)) to train the value function (in a broader context and with LLMs). We can do the same for the reward model.
2) We then have another copy of the LLM - the unfrozen model, the LLM itself, and the reward model - and try to match the logits, similarly to the value function, while also keeping in mind the KL divergence from the frozen model.
3) We also add a bit of an exploration factor, so that the model can retain its creativity.
4) We then sample a list of trajectories, consider only the running rewards (without changing the past rewards) and then compute the rewards, comparing them with the reward obtained when the most average action is taken, to get a sense of the gradient of increasing rewards w.r.t. trajectories.
In the end, we will have a model which is not so different from the original model but prioritizes trajectories with higher values.
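The pairwise objective in point 1 is the core of the reward model. As a rough sketch (assumptions: a HuggingFace-style backbone that exposes per-token hidden states, and illustrative names such as score_head; this is not the code from the video), it could look like this in PyTorch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """LLM backbone plus a linear head that outputs one scalar score per sequence."""
    def __init__(self, backbone, hidden_size):
        super().__init__()
        self.backbone = backbone                      # assumed to return per-token hidden states
        self.score_head = nn.Linear(hidden_size, 1)   # the extra linear layer from point 1

    def forward(self, input_ids, attention_mask):
        hidden = self.backbone(input_ids, attention_mask=attention_mask).last_hidden_state
        last_idx = attention_mask.sum(dim=1) - 1      # position of the last real token
        batch_idx = torch.arange(hidden.size(0), device=hidden.device)
        last_hidden = hidden[batch_idx, last_idx]     # summary of the whole response
        return self.score_head(last_hidden).squeeze(-1)   # (batch,) scalar rewards

def preference_loss(reward_chosen, reward_rejected):
    # -log(sigmoid(good - bad)): push the preferred answer's score above the rejected one's
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()
```

Training then feeds each (chosen, rejected) pair through the same model and minimizes preference_loss over the batch.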
@dengorange2631 3 months ago
Thank you! Thanks for your video!
@MasterMan2015 2 months ago
Amazing as usual.
@mavichovizana5460 7 months ago
Thanks for the awesome explanation! I have trouble reading the hf src, and you helped a ton! One thing I'm confused about is that at 1:05:00, the 1st right parenthesis of the first formula is misplaced. I think it should be \sigma(log_prob \sigma(reward_to_go)). The later slides also share this issue, cmiiw. Thanks!
@MaksymSutkovenko 8 months ago
Amazing, you've released a new video!
@gemini_537 6 months ago
Gemini: This video is about reinforcement learning from human feedback, a technique used to align the behavior of a language model to what we want it to output. The speaker says that reinforcement learning from human feedback is a widely used technique, though there are newer techniques like DPO.
The video covers the following topics:
* Language models and how they work
* Why AI alignment is important
* Reinforcement learning from human feedback, with a deep dive into:
  * What reinforcement learning is
  * The reward model
  * Trajectories
  * Policy gradient optimization
  * How to reduce variance in the algorithm
* Code implementation of reinforcement learning from human feedback with PyTorch
* Explanation of the code line by line
The speaker recommends having some background knowledge in probability, statistics, deep learning, and reinforcement learning before watching this video.
Here are the key points about reinforcement learning from human feedback:
* It is a technique used to train a language model to behave in a certain way, as specified by a human.
* This is done by rewarding the model for generating good outputs and penalizing it for generating bad outputs.
* The reward model is a function that assigns a score to each output generated by the language model.
* Trajectories are sequences of outputs generated by the language model.
* Policy gradient optimization is an algorithm that is used to train the reinforcement learning model.
* The goal of policy gradient optimization is to find the policy that maximizes the expected reward.
I hope this summary is helpful!
@nicoloruggeri9740 12 days ago
Thanks for the amazing content!
@amortalbeing 8 months ago
Thanks a lot man, keep up the great job.
@tk-og4yk 8 months ago
Amazing as always. I hope your channel keeps growing and more people learn from you. I am curious how we can use this optimized model to give it prompts and see what it comes up with. Any advice on how to do so?
@MiguelAcosta-p8s 1 month ago
Very good video!
@MR_GREEN1337 8 months ago
Perfect!! With this technique introduced, can you provide us with another gem on DPO?
@umarjamilai 6 months ago
You're welcome: kzbin.info/www/bejne/nqeqkmiDl8ZnmZo
@gangs0846 7 months ago
Absolutely fantastic
@tryit-wv8ui 6 months ago
You are becoming a reference in the YouTube machine learning game. I appreciate your work so much. I have so many questions. Do you coach? I can pay.
@umarjamilai 6 months ago
Hi! I am currently super busy between my job, my family life and the videos I make, but I'm always willing to help people. You just need to prove that you've put in effort yourself to solve your problem, and I'll guide you in the right direction. Connect with me on LinkedIn! 😇 Have a nice day.
@tryit-wv8ui 6 months ago
@@umarjamilai Hi Umar, thanks for your quick answer! I will do it.
@MonkkSoori 5 months ago
Thank you very much for your comprehensive explanation. I have two questions: (1) At 1:59:25 does our LLM/Policy Network have two different linear layers, one for producing a reward and one for producing a value estimation for a particular state? (2) At 2:04:37 if the value of Q(s,a) is going to be calculated using A(s,a)+V(s) but then you do in L_VF V(s)-Q(s,a), then why not just use A(s,a) directly? Is it because in the latter equation, V(s) is `vpreds` (in the code) and is from the online model, while Q(s,a) is `values` (in the code) and is from the offline model (I can see both variables at 2:06:11)?
@xingfang8507 8 months ago
You're the best!
@andreanegreanu8750 5 months ago
Hi Sir Jamil, again thanks a lot for all your work, it's so amazing. However, I'm somewhat confused about how the KL divergence is incorporated into the final objective function. Is it possible to see it this way for one batch of trajectories: J(theta) = PPO(theta) - Beta*KL(Pi_frozen || Pi_new)? Or do we have to take it into account when computing the cumulative rewards, by subtracting Beta*KL(Pi_frozen || Pi_new) from each reward? Or is it equivalent? I'm completely lost. Thanks for your help, Sir!
@rajansahu3240 1 month ago
Hi Umar, absolutely stunning tutorial, but just towards the end I have a little doubt that I wanted to clarify: the entire token generation setting makes this a sparse RL reward problem, right?
@douglasswang998 2 months ago
Thanks for the great video. I wanted to ask: at 50:11 you mention the reward of a trajectory is the sum of the rewards at each token of the response. But the reward model is only trained on full responses, so will the reward values at partial responses be meaningful?
@RudraPratapDhara 8 months ago
Legend is back
@elieelezra2734 5 months ago
Can't thank you enough: your vids + ChatGPT = Best Teacher Ever. I have one question though; it might be silly but I want to be sure of it: does it mean that to get the rewards for all time steps, we need to run the reward model on all responses truncated on the right, so that each response token would at some point be the last token? Am I clear?
@umarjamilai 5 months ago
No, because of how transformer models work, you only need one forward step with the whole sequence to get the rewards for all positions. This is also how you train a transformer: with only one pass, you can calculate the hidden state for all the positions and calculate the loss for all positions.
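To illustrate the point, here is a minimal sketch (assumed names like backbone and score_head, consistent with the reward-model sketch above; not the actual code from the video): a single forward pass returns a hidden state for every position, so a linear head can score every prefix at once instead of re-running the model on each right-truncated response.

```python
import torch

with torch.no_grad():
    out = backbone(input_ids, attention_mask=attention_mask)  # assumed HF-style transformer
    hidden = out.last_hidden_state                            # (batch, seq_len, hidden_size)
    rewards_all_positions = score_head(hidden).squeeze(-1)    # (batch, seq_len)
# rewards_all_positions[:, t] scores the prefix that ends at token t,
# so every "truncated response" is covered by this one forward pass.
```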
@andreanegreanu8750 5 months ago
@@umarjamilai Thanks a lot for all your time. I won't bother you till the next time, I promise, ahahaha
@zhouwang2123 8 months ago
Thanks for your work and sharing, Umar! I learned new stuff from you again! Btw, does the KL divergence play a similar role to the clipped ratio, preventing the new policy from straying far from the old one? Additionally, unlike actor-critic in RL, here it looks like the policy and value functions are updated simultaneously. Is this because of the partially shared architecture and for computational efficiency?
@umarjamilai 8 months ago
When fine-tuning a model with RLHF, before the fine-tuning begins, we make another copy of the model and freeze its weights.
- The KL divergence forces the fine-tuned and frozen models to be "similar" in their log probabilities for each token.
- The clipped ratio, on the other hand, is not about the fine-tuned model and the frozen one, but rather the offline and the online policy of the PPO setup.
You may think that we have 3 models in total in this setup, but actually it's only two, because the offline and the online policy are the same model, as explained in the "pseudo-code" of off-policy learning. Hope that answers your question.
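To make the distinction concrete, a rough sketch with assumed variable names (per-token log-probabilities of shape (batch, seq_len); not the exact code from the video):

```python
import torch

# (1) KL penalty: model being fine-tuned vs. the frozen reference copy
per_token_kl = logprobs - ref_logprobs              # approximate per-token KL term
kl_reward = -kl_coef * per_token_kl                 # subtracted from the reward signal

# (2) Clipped ratio: online policy vs. the log-probs saved when the trajectories were sampled
ratio = torch.exp(logprobs - old_logprobs)          # pi_online / pi_offline
surrogate = torch.min(ratio * advantages,
                      torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages)
policy_loss = -surrogate.mean()
```

Note how the two mechanisms compare against different baselines: the frozen reference model for the KL term, and the sampling-time (offline) log-probs for the clipped ratio.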
@alainrieger6905 5 months ago
Hi, best ML online teacher, just one question to make sure I understood well: does it mean we need to store the weights of three models:
- the original LLM (offline policy), which is regularly updated
- the updated LLM (online policy), which is updated and will be the final version
- the frozen LLM (used for the KL divergence), which is never updated
Thanks in advance!
@umarjamilai 5 months ago
Offline and online policy are actually the same model, but it plays the role of "offline policy" or "online policy" depending on whether you're collecting trajectories or optimizing. So at any time, you need two models in your memory: a frozen one for the KL divergence, and the model you're optimizing, which is first sampled to generate trajectories (lots of them) and then optimized using said trajectories. You can also precalculate the log probabilities of the frozen model for the entire fine-tuning dataset, so that you only keep one model in memory.
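A skeleton of that loop (illustrative only; sample_responses, compute_logprobs and ppo_loss are hypothetical helpers, not functions from the video's code) shows how the same model plays both roles:

```python
import copy
import torch

ref_model = copy.deepcopy(model).eval()     # frozen copy, used only for the KL penalty
for p in ref_model.parameters():
    p.requires_grad_(False)

for step in range(num_steps):
    # Offline role: the current weights generate trajectories and their log-probs.
    with torch.no_grad():
        responses = sample_responses(model, prompts)            # hypothetical helper
        old_logprobs = compute_logprobs(model, responses)       # hypothetical helper
        ref_logprobs = compute_logprobs(ref_model, responses)   # for the KL penalty
    # Online role: several PPO optimization epochs on those same trajectories.
    for _ in range(ppo_epochs):
        loss = ppo_loss(model, responses, old_logprobs, ref_logprobs)  # hypothetical helper
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```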
@tryit-wv8ui 5 months ago
@@umarjamilai Hmm, OK, I was missing that.
@alainrieger6905 5 months ago
@@umarjamilai Thank you so much
@godelkurt384 1 month ago
I am unclear about offline policy learning. How do we calculate the online logits of a trajectory? For example, say the offline trajectory is "where is Paris? Paris is a city in France." Then this string is passed as input to the online model, which is the same as the offline one, to get the logits; but aren't the logits of the two models the same in this case? Please correct my misunderstanding.
@parthvashisht9555 7 months ago
You are amazing!
@alexandrepeccaud9870 8 months ago
This is great
@abhinav__pm 8 months ago
Bro, I want to fine-tune a model for a translation task. However, I encountered a 'CUDA out of memory' error. Now, I plan to purchase a GPU on an AWS EC2 instance. How is the payment processed in AWS? They asked for card details when I signed up. Do they automatically process the payment?
@weicheng4608 5 months ago
Hello Umar, thanks for the amazing content. I got a question, could you please help me? At 1:56:40, for the KL penalty, why is it logprob - ref_logprob? The KL divergence formula is KL(P||Q) = sum(P(x) * log(P(x)/Q(x))), so logprob - ref_logprob only maps to log(P(x)/Q(x))? Isn't it missing the P(x) * ... and the sum over x? Thanks a lot.
@heepoleo131 7 months ago
Why is the PPO loss different from the RL objective in InstructGPT? At least, the pi(old) in the PPO loss is iteratively changing, but in InstructGPT it's kept as the SFT model.
@SangrezKhan 3 months ago
Good job Umar. Can you please tell us which font you used in your slides?
@tubercn 8 months ago
💯
@Healthphere 1 month ago
The font size is too small to read in VS Code. But great video!
@andreanegreanu8750 5 months ago
There is something that I found very confusing. It seems that the value function shares the same theta parameters as the LLM. That is very unexpected. Can you confirm this, please? Thanks in advance.
@andreanegreanu8750 5 months ago
Hi Umar, sorry to bother you (again). I think I understood the J function well, which we want to maximize. But it seems you quickly state that it is somewhat equivalent to the L_ppo function that we want to minimize. It may be obvious, but I really don't get it.
@generichuman_ 7 months ago
I'm curious if this can be done with stable diffusion. I'm imagining having a dataset of images that a human would go through with pair ranking to order them in terms of aesthetics, and using this as a reward signal to train the model to output more aesthetic images. I'm sure this exists, just haven't seen anyone talk about it.
@YKeon-ff4fw 8 months ago
Could you please explain why in the formula mentioned at the 39-minute mark in the bottom right corner of the video, the product operation ranges from t=0 to T-1, but after taking the logarithm and differentiating, the range of the summation becomes from t=0 to T? :)
@umarjamilai 8 months ago
I'm sorry, I think it's just a product of laziness. I copied the formulas from OpenAI's "Spinning Up" website and didn't check carefully. I'll update the slides. Thanks for pointing it out!
@wongdope4147 7 months ago
You're a hidden gem of a creator!!!!!!!!
@umarjamilai 7 months ago
Thank you for your support, let's connect on LinkedIn.
@s8x. 5 months ago
50:27 Why is it the hidden state for the answer tokens here, but earlier it was just the last hidden state?
@kei55340 8 months ago
Is the diagram shown at 50 minutes accurate? I had thought that with typical RLHF training, you only calculate the reward for the full completion rather than summing rewards for all intermediate completions. Edit: It turns out this is addressed later in the video.
@umarjamilai 8 months ago
In the vanilla policy gradient optimization, you can calculate it for all intermediate steps. In RLHF, we only calculate it for the entire sentence. If you watch the entire video, when I show the code, I explicitly clarify this.
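In code, this usually looks something like the sketch below (assumed names; similar in spirit to how the HuggingFace TRL PPO trainer builds per-token rewards, not a verbatim excerpt from the video): the per-token part comes only from the KL penalty, and the reward model's scalar score is added at the last token of each response.

```python
import torch

# Per-token rewards are just the (negative) KL penalty ...
rewards = -kl_coef * (logprobs - ref_logprobs)      # (batch, seq_len)
# ... and the reward model's scalar score is added only at the final response token.
last_token = attention_mask.sum(dim=1) - 1          # index of the last generated token
batch_idx = torch.arange(rewards.size(0))
rewards[batch_idx, last_token] += reward_model_scores
```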
@kei55340 8 months ago
@@umarjamilai Thanks for the clarification, I haven't watched the whole video yet.
@flakky626 4 months ago
I followed the code and could understand some of it, but the thing is I feel overwhelmed seeing such large code bases... When will I be able to code stuff like that at such a scale!!
@dhanesh123us 7 months ago
These videos are amazing @umar jamil. This is fairly complex theory that you have tried to get into and explain in simple terms - hats off. Your video inspired me to take up a Coursera course on RL. Thanks a ton. A few basic queries though:
1. My understanding is that the theta parameters in the PPO algorithm are all the model parameters? So we are recalibrating the LLM in some sense.
2. Is the reward model pre-defined?
3. Also, how does temperature play a role in this whole setup?
@pranavk6788 7 months ago
Can you please cover V-JEPA by Meta AI next? Both theory and code
@TechieIndia 2 months ago
A few questions:
1) Offline policy learning makes the training fast, but how would we have done it without offline policy learning? I mean, I am not able to understand the difference between how we used to do it and how this offline approach becomes efficient.
@alivecoding4995 1 month ago
And why is Deep Q-learning not necessary here?
@EsmailAtta 7 months ago
Can you make a video coding the Diffusion Transformer from scratch, as always, please?
@kevon217 6 months ago
“drunk cat” model 😂
@vardhan254 8 months ago
LETS GOOOOO
@Gasa7655 8 months ago
DPO Please
@umarjamilai 6 months ago
Done: kzbin.info/www/bejne/nqeqkmiDl8ZnmZo
@esramuab1021 6 months ago
Why don't you explain in Arabic? As Arabs, we need Arabic resources; English speakers already have more than enough!
@ehsanzain5999 4 months ago
Because learning in English is better in general. If you need anything, I can answer you.
@davehudson5214 7 months ago
'Promosm'