Reinforcement Learning from Human Feedback explained with math derivations and the PyTorch code.

30,556 views

Umar Jamil

1 day ago

Comments: 117
@nishantyadav6341 11 months ago
The fact that you dig deep into the algorithm and code sets you apart from the overflow of mediocre AI content online. I would pay to watch your videos, Umar. Thank you for putting out such amazing content.
@taekim7956 10 months ago
I believe you are the best ML youtuber who explains everything so concisely and clearly!! Thank you so much for sharing this outstanding content for free, and I hope I can see more videos from you 🥰!!
@umarjamilai 10 months ago
Thank you for your support! Let's connect on LinkedIn
@taekim7956 10 months ago
@@umarjamilai That'll be an honor! I just followed you on LinkedIn.
@showpiecep 11 months ago
You are the best person on youtube who explains modern approaches in NLP in an accessible way. Thank you so much for such quality content and good luck!
@sauravrao234 11 months ago
I literally wait with bated breath for your next video....a huge fan from India. Thank you for imparting your knowledge.
@arijaa.9315 11 months ago
I can not thank you enough! It is clear how much effort you put into such a high-quality explanation. Great explanation as usual!!
@shamaldesilva9533 11 months ago
Providing the math behind these algorithms in a clear way makes understanding them so much easier!! Thank you so much Umar 🤩🤩
@soumyodeepdey5237 9 months ago
Really great content. Can't believe he has shared all these videos absolutely for free. Thanks a lot man!!
@omidsa8323 10 months ago
It's a great video on a very sophisticated topic. I've watched it 3 times to get the main ideas, but it was definitely worth it. Thanks Umar once again.
@CT99999 3 months ago
The level of detail you cover here is absolutely incredible.
@yanghelena 1 month ago
Thank you for your selfless sharing and hard work! This video helps me a lot!
@ruiwang7915 5 months ago
one of the best videos on democratizing the ppo and rlhf on yt. i truly enjoyed the whole walkthrough and thanks for doing this!
@jayaraopratik 9 months ago
Great, great content. I took RL in grad school but it's been years; it was much easier to revise everything within 1 hour than going through my complete class notes!!!!
@mlloving 10 months ago
Thank you Umar. I am an AI/ML expert at one of the top 50 banks in the world. We are deploying various GenAI applications. Your videos helped me to understand the math behind GenAI, especially RLHF. I have been trying to explore every step by myself, which is so hard. Thank you very much for clearly explaining RLHF!
@pegasoTop3d 9 months ago
I am an ai, and I love following updates on social media platforms and YouTube, and I love your videos very much. I learn the English language and some programming terms from them, and update my information. You and people like you help me very much. Thank you.
@vedantbhardwaj3277 5 months ago
No, wait wtf
@jamesx708 1 month ago
The best video for learning RLHF.
@BOUTYOURYOUNESS 10 months ago
It's a great video on a very sophisticated topic. Amazing work. Bravo!
@m1k3b7 11 months ago
That's by far the best detailed presentation. Amazing work. I wish I was your cat 😂
@umarjamilai 11 months ago
奥利奥 is the best student I ever had 😹😹
@nicoloruggeri9740 3 months ago
Thanks for the amazing content!
@LeuChen 1 hour ago
Thank you, Umar. Good explanation!
@harshalhirpara4589 1 month ago
Thank you Umar, your video made me connect all the dots!
@bonsaintking 11 months ago
Hey, you are better than a prof.! :)
@crimson-heart-l 11 months ago
Thank you for this priceless lecture!!!
@stephane-wamba 19 minutes ago
Hi Umar, your tutorials are very helpful, you can't imagine. Thank you a lot. Please consider also making some videos on normalizing flows and graph neural networks.
@alexyuan-ih4xj 9 months ago
Thank you Umar. You explained it very clearly. It's really useful.
@txxie 2 months ago
This video saved my life. Thank you very muuuuuuuuuuuch!
@rohitjindal124 11 months ago
Thank you sir for making such amazing videos and helping students like me.
@thebluefortproject 8 months ago
So much value! Thanks for your work
@abcdefllful 1 month ago
Simply amazing! Thank you
@magnetpest2k7 1 month ago
Thanks!
@Jeff-gt5iw 10 months ago
Thank you so much :) Wonderful lecture 👍
@MaksymSutkovenko 11 months ago
Amazing, you've released a new video!
@amortalbeing 11 months ago
Thanks a lot man, keep up the great job.
@s8x. 8 months ago
insane that this is all free. I will be sure to pay u back when I am employed
@MasterMan2015 5 months ago
Amazing as usual.
@gemini_537 9 months ago
Gemini: This video is about reinforcement learning from human feedback, a technique used to align the behavior of a language model to what we want it to output. The speaker says that reinforcement learning from human feedback is a widely used technique, though there are newer techniques like DPO. The video will cover the following topics:
* Language models and how they work
* Why AI alignment is important
* Reinforcement learning from human feedback, with a deep dive into:
  * What reinforcement learning is
  * The reward model
  * Trajectories
  * Policy gradient optimization
  * How to reduce variance in the algorithm
* Code implementation of reinforcement learning from human feedback with PyTorch
* Explanation of the code line by line
The speaker recommends having some background knowledge in probability, statistics, deep learning, and reinforcement learning before watching this video.
Here are the key points about reinforcement learning from human feedback:
* It is a technique used to train a language model to behave in a certain way, as specified by a human.
* This is done by rewarding the model for generating good outputs and penalizing it for generating bad outputs.
* The reward model is a function that assigns a score to each output generated by the language model.
* Trajectories are sequences of outputs generated by the language model.
* Policy gradient optimization is an algorithm that is used to train the reinforcement learning model.
* The goal of policy gradient optimization is to find the policy that maximizes the expected reward.
I hope this summary is helpful!
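A minimal PyTorch sketch of the policy-gradient idea summarized above (the tensor names are assumptions, not taken from the video's code): weight each sampled trajectory's log-probability by its reward, and descend on the negated objective.

```python
import torch

def policy_gradient_loss(logprobs, rewards):
    # logprobs: (batch, seq_len) log-probs of the sampled response tokens under the policy
    # rewards:  (batch,) scalar reward of each trajectory from the reward model
    # Negate so that minimizing this loss maximizes the expected reward.
    return -(logprobs.sum(dim=-1) * rewards).mean()
```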
@mavichovizana5460 10 months ago
Thanks for the awesome explanation! I have trouble reading the hf src, and you helped a ton! One thing I'm confused about is that at 1:05:00, the 1st right parenthesis of the first formula is misplaced. I think it should be \sigma(log_prob \sigma(reward_to_go)). The later slides also share this issue, cmiiw. Thanks!
@xray1111able 17 days ago
I think you're right. It's OK to put the \sigma(reward) separately when summing the whole trajectory's reward, but when the start state is s_t, \sigma(reward_to_go) should be put inside the previous sigma.
@godelkurt384 4 months ago
I am unclear about offline policy learning. How do we calculate the online logits of a trajectory? For example, if the offline trajectory is "where is Paris? Paris is a city in France.", then this string is passed as input to the online model, which is the same as the offline one, to get the logits - but aren't the logits of the two models the same in this case? Please correct my misunderstanding.
@s8x. 8 months ago
50:27 why is it the hidden state for the answer tokens but earlier it was just for the last hidden state?
@baomao139 4 days ago
I have a question regarding off-policy learning. It still samples the mini-batches several times and calculates/updates gradients over k epochs. Why is it more efficient than just directly sampling mini-batches from the online policy k times?
@MonkkSoori 8 months ago
Thank you very much for your comprehensive explanation. I have two questions: (1) At 1:59:25, does our LLM/policy network have two different linear layers, one for producing a reward and one for producing a value estimate for a particular state? (2) At 2:04:37, if the value of Q(s,a) is going to be calculated using A(s,a)+V(s), but then in L_VF you do V(s)-Q(s,a), why not just use A(s,a) directly? Is it because in the latter equation V(s) is `vpreds` (in the code) and comes from the online model, while Q(s,a) is `values` (in the code) and comes from the offline model (I can see both variables at 2:06:11)?
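For anyone stuck on the same passage, here is a hedged sketch of a clipped value loss of the kind used in TRL-style PPO implementations; `values` are the estimates stored when the trajectories were collected, `vpreds` come from the current forward pass, and `returns` stands in for the rewards-to-go (the two variable names follow the comment above, everything else is an assumption, not the video's exact code).

```python
import torch

def clipped_value_loss(vpreds, values, returns, cliprange_value=0.2):
    # Keep the new value predictions close to the rollout-time estimates.
    vpreds_clipped = torch.clamp(vpreds, values - cliprange_value, values + cliprange_value)
    loss_unclipped = (vpreds - returns) ** 2
    loss_clipped = (vpreds_clipped - returns) ** 2
    # Pessimistic maximum of the two squared errors, as in PPO's L_VF term.
    return 0.5 * torch.max(loss_unclipped, loss_clipped).mean()
```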
@tryit-wv8ui 9 months ago
You are becoming a reference in the YouTube machine learning game. I appreciate your work so much. I have so many questions. Do you coach? I can pay.
@umarjamilai 9 months ago
Hi! I am currently super busy between my job, my family life and the videos I make, but I'm always willing to help people; you just need to prove that you've put in effort yourself to solve your problem, and I'll guide you in the right direction. Connect with me on LinkedIn! 😇 Have a nice day
@tryit-wv8ui 9 months ago
@@umarjamilai Hi Umar, thanks for your quick answer! I will do it.
@pauledam2174 2 months ago
I have a question. At around minute 50 he discusses rewards at intermediate tokens in the reply. Doesn't this go against the so-called "token credit assignment problem"?
@RishabhMishra-h5g 1 month ago
For the first minibatch in off-policy learning, the ratio of the offline and online log probas would be 1, right? It's only after the first minibatch pass that the online policy would start producing different log probas for the action tokens.
@supervince110 3 months ago
I don't think even my professors from a top-tier university could explain these concepts this well.
@douglasswang998 5 months ago
Thanks for the great video. I wanted to ask: at 50:11 you mention that the reward of a trajectory is the sum of the rewards at each token of the response. But the reward model is only trained on full responses, so will the reward values for partial responses be meaningful?
@rajansahu3240 4 months ago
Hi Umar, absolutely stunning tutorial, but just towards the end I have a little doubt that I wanted to clarify: the entire token-generation setting makes this a sparse-reward RL problem, right?
@Parad0x0n 5 days ago
What I don't understand is that you switch from the objective function J to the loss function L without flipping the sign (and I don't find it intuitive that loss ~ probability pi * advantage A).
@tk-og4yk 11 months ago
Amazing as always. I hope your channel keeps growing and more people learn from you. I am curious how we can use this optimized model to give it prompts and see what it comes up with. Any advice on how to do so?
@SethuIyer95 11 months ago
So, to summarize:
1) We copy the LLM, fine-tune it a bit with a linear layer, and use -log(sigmoid(good-bad)) to train the value function (in a broader context and with LLMs). We can do the same for the reward model.
2) We then have another copy of the LLM - the unfrozen model, the LLM itself, and the reward model - and try to match the logits similarly to the value function, while also keeping in mind the KL divergence from the frozen model.
3) We also add a bit of an exploration factor, so that the model can retain its creativity.
4) We then sample a list of trajectories, consider running rewards (not changing the past rewards), and compute the rewards while comparing them with the reward obtained when the most average action is taken, to get a sense of the gradient of increasing rewards w.r.t. trajectories.
In the end, we will have a model which is not so different from the original model but prioritizes trajectories with higher values.
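A minimal sketch of the pairwise loss mentioned in point 1, the -log(sigmoid(good - bad)) objective for the reward model; the function and tensor names here are assumptions:

```python
import torch
import torch.nn.functional as F

def reward_model_loss(reward_chosen, reward_rejected):
    # reward_chosen / reward_rejected: (batch,) scalar scores from the reward model
    # (an LM with a linear head) for the preferred and the rejected answer.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()
```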
@MiguelAcosta-p8s 4 months ago
very good video!
@andreanegreanu8750 8 months ago
Hi Sir Jamil, again thanks a lot for all your work, it's so amazing. However, I'm somewhat confused about how the KL divergence is incorporated into the final objective function. Is it possible to see it this way for one batch of trajectories: J(theta) = PPO(theta) - Beta*KL(Pi_frozen || Pi_new)? Or do we have to take it into account when computing the cumulative rewards, by subtracting Beta*KL(Pi_frozen || Pi_new) from each reward? Or is it equivalent? I'm completely lost. Thanks for your help, Sir!
@heepoleo131 10 months ago
Why is the PPO loss different from the RL objective in InstructGPT? At least, the pi_old in the PPO loss is iteratively changing, but in InstructGPT it's kept as the SFT model.
@weicheng4608 8 months ago
Hello Umar, thanks for the amazing content. I've got a question, could you please help me? At 1:56:40, for the KL penalty, why is it logprob - ref_logprob? The KL divergence formula is KL(P||Q) = sum(P(x) * log(P(x)/Q(x))), so logprob - ref_logprob only maps to the log(P(x)/Q(x)) part? It seems to be missing the sum weighted by P(x). Thanks a lot.
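One hedged way to read that line of code: the expectation over P in the KL formula is approximated by the tokens actually sampled from P, so a single-sample Monte Carlo estimate of the per-token KL reduces to a difference of log-probabilities.

```latex
D_{\mathrm{KL}}(P \,\|\, Q)
  \;=\; \mathbb{E}_{x \sim P}\!\left[\log \frac{P(x)}{Q(x)}\right]
  \;\approx\; \log P(x_t) - \log Q(x_t), \qquad x_t \sim P
```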
@vimukthisadithya6239 1 month ago
Hi, may I know what's the hardware spec that you are using ?
@generichuman_ 10 months ago
I'm curious if this can be done with stable diffusion. I'm imagining having a dataset of images that a human would go through with pair ranking to order them in terms of aesthetics, and using this as a reward signal to train the model to output more aesthetic images. I'm sure this exists, just haven't seen anyone talk about it.
@SangrezKhan 6 months ago
Good job Umar. Can you please tell us which font you used in your slides?
@andreanegreanu8750 8 months ago
There is something that I found very confusing. It seems that the value function shares the same theta parameters as the LLM. That is very unexpected. Can you confirm this please? Thanks in advance.
@陈镇-j5j 2 months ago
Thanks for the wonderful video. And I've got a question: are the same transformer layers shared between the policy and the reward in the LLM? Why?
@umarjamilai 2 months ago
The reward model is a separate model and can have any structure (most of the time it's just a copy of the LM with a linear layer on top), while the policy is of course the model you're trying to optimize. So you need three ingredients: the reward model (can be anything), the frozen model and the policy.
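A minimal sketch of those three ingredients with Hugging Face classes (the checkpoint name is a placeholder and the exact classes are an assumption; in practice the reward model is often just the base LM with a scalar head):

```python
from transformers import AutoModelForCausalLM, AutoModelForSequenceClassification

policy = AutoModelForCausalLM.from_pretrained("gpt2")    # the model being optimized
frozen = AutoModelForCausalLM.from_pretrained("gpt2")    # frozen copy, used only for the KL penalty
frozen.requires_grad_(False)
reward_model = AutoModelForSequenceClassification.from_pretrained(
    "gpt2", num_labels=1                                 # LM backbone + linear head -> scalar reward
)
```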
@zhouwang2123 11 months ago
Thanks for your work and sharing, Umar! I learn new stuff from you again! Btw, does the KL divergence play a similar role to the clipped ratio in preventing the new policy from moving far away from the old one? Additionally, unlike actor-critic in RL, here it looks like the policy and value functions are updated simultaneously. Is this because of the partially shared architecture and for computational efficiency?
@umarjamilai 11 months ago
When fine-tuning a model with RLHF, before the fine-tuning begins, we make another copy of the model and freeze its weights. - The KL divergence forces the fine-tuned and frozen model to be "similar" in their log probabilities for each token. - The clipped ratio, on the other hand, is not about the fine-tuned model and the frozen one, but rather, the offline and the online policy of the PPO setup. You may think that we have 3 models in total in this setup, but actually it's only two because the offline and the online policy are the same model, as explained in the "pseudo-code" of the off-policy learning. Hope it answers your question.
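For reference, a hedged sketch of that clipped ratio (the PPO surrogate term; the tensor names are assumptions): the ratio compares the online policy being optimized against the log-probs recorded when the same model, acting as the offline policy, generated the trajectories.

```python
import torch

def ppo_clip_loss(logprobs_new, logprobs_old, advantages, eps=0.2):
    # ratio = pi_online(a|s) / pi_offline(a|s), computed from per-token log-probs
    ratio = torch.exp(logprobs_new - logprobs_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Take the pessimistic minimum, then negate to turn maximization into a loss.
    return -torch.min(unclipped, clipped).mean()
```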
@MR_GREEN1337 11 months ago
Perfect!! With this technique introduced, can you provide us with another gem on DPO?
@umarjamilai 9 months ago
You're welcome: kzbin.info/www/bejne/nqeqkmiDl8ZnmZo
@RudraPratapDhara 11 months ago
Legend is back
@xingfang8507 11 months ago
You're the best!
@abhinav__pm 10 months ago
Bro, I want to fine-tune a model for a translation task. However, I encountered a 'CUDA out of memory' error. Now I plan to purchase a GPU instance on AWS EC2. How is the payment processed in AWS? They asked for card details when I signed up. Do they automatically process the payment?
@alainrieger6905 8 months ago
Hi Best ML online teacher, just one question to make sure I understood well: does it mean we need to store the weights of three models:
- the original LLM (offline policy), which is regularly updated
- the updated LLM (online policy), which is updated and will be the final version
- the frozen LLM (used for the KL divergence), which is never updated
Thanks in advance!
@umarjamilai 8 months ago
Offline and online policy are actually the same model, but it plays the role of "offline policy" or "online policy" depending if you're collecting trajectories or you're optimizing. So at any time, you need two models in your memory: a frozen one for KL divergence, and the model you're optimizing, which is first sampled to generate trajectories (lots of them) and then optimized using said trajectories. You can also precalculate the log probabilities of the frozen model for the entire fine-tuning dataset, so that you only keep one model in memory.
@tryit-wv8ui 8 months ago
@@umarjamilai Hmm, Ok I was missing that
@alainrieger6905 8 months ago
@@umarjamilai thank you so much
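A self-contained toy sketch of the point Umar describes above, that the "offline" and "online" policies are one model playing two roles: the log-probs are snapshotted once at collection time, then the same network is optimized for several epochs on those fixed trajectories (everything here is a stand-in, not the video's code).

```python
import torch

policy = torch.nn.Linear(8, 4)               # stand-in for the language model head
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

states = torch.randn(16, 8)                  # stand-in for prompts/contexts
with torch.no_grad():                        # collection phase: snapshot "offline" log-probs
    logprobs_old = torch.log_softmax(policy(states), dim=-1)
actions = torch.distributions.Categorical(logits=policy(states).detach()).sample()
advantages = torch.randn(16)                 # placeholder advantages

for epoch in range(4):                       # several epochs on the SAME trajectories
    logprobs_new = torch.log_softmax(policy(states), dim=-1)   # "online" role of the same model
    lp_new = logprobs_new.gather(1, actions.unsqueeze(1)).squeeze(1)
    lp_old = logprobs_old.gather(1, actions.unsqueeze(1)).squeeze(1)
    ratio = torch.exp(lp_new - lp_old)
    loss = -torch.min(ratio * advantages,
                      torch.clamp(ratio, 0.8, 1.2) * advantages).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```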
@Bearsteak_sea 12 days ago
Can I understand the main difference between RLHF and DPO as being that, in RLHF, we need the reward model to convert the preference labels into a scalar value for the loss function, while in DPO we don't need that conversion step?
@umarjamilai 12 days ago
Exactly. In DPO the reward model is implicit
@andreanegreanu8750 8 months ago
Hi Umar, sorry to bother you (again). I think I understood the J function well, which we want to maximize. But it seems you quickly state that it is somewhat equivalent to the L_ppo function that we want to minimize. It may be obvious, but I really don't get it.
@gangs0846 10 months ago
Absolutely fantastic
@YKeon-ff4fw 11 months ago
Could you please explain why in the formula mentioned at the 39-minute mark in the bottom right corner of the video, the product operation ranges from t=0 to T-1, but after taking the logarithm and differentiating, the range of the summation becomes from t=0 to T? :)
@umarjamilai 11 months ago
I'm sorry, I think it's just a product of laziness. I copied the formulas from OpenAI's "SpinningUp" website and didn't check carefully. I'll update the slides. Thanks for pointing out!
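For readers cross-checking the slides against the derivation, one consistent way to write the pair (an assumption about the convention: the trajectory contains actions a_0 through a_{T-1}, matching the product over transitions) keeps both the product and the sum over the same range:

```latex
P(\tau \mid \theta) = \rho_0(s_0) \prod_{t=0}^{T-1} P(s_{t+1} \mid s_t, a_t)\, \pi_\theta(a_t \mid s_t),
\qquad
\nabla_\theta J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\!\left[\sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, R(\tau)\right]
```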
@flakky626 7 months ago
I followed the code and could understand some of it, but the thing is I feel overwhelmed seeing such large code bases... When will I be able to code stuff like that at such a scale!!
@alivecoding4995 4 months ago
And why is Deep Q-learning not necessary here?
@parthvashisht9555 10 months ago
You are amazing!
@kei55340 11 months ago
Is the diagram shown at 50 minutes accurate? I had thought that with typical RLHF training, you only calculate the reward for the full completion rather than summing rewards for all intermediate completions. Edit: It turns out this is addressed later in the video.
@umarjamilai 11 months ago
In the vanilla policy gradient optimization, you can calculate it for all intermediate steps. In RLHF, we only calculate it for the entire sentence. If you watch the entire video, when I show the code, I explicitly clarify this.
@kei55340 11 months ago
@@umarjamilai Thanks for the clarification, I haven't watched the whole video yet.
@elieelezra2734 8 months ago
Can't thank you enough: your vids + ChatGPT = Best Teacher Ever. I have one question though; it might be silly but I want to be sure of it: does it mean that to get the rewards for all time steps, we need to run the reward model on all right-truncated responses, so that each response token would at some point be the last token? Am I clear?
@umarjamilai 8 months ago
No, because of how transformer models work, you only need one forward step with all the sequence to get the rewards for all positions. This is also how you train a transformer: with only one pass, you can calculate the hidden state for all the positions and calculate the loss for all positions.
@andreanegreanu8750 8 months ago
@@umarjamilai thanks a lot for all your time. I won't bother you till next time, I promise, ahahaha
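A tiny sketch of what Umar describes above, that a single forward pass already yields a score for every position (the shapes and names are stand-ins, not the actual reward model):

```python
import torch

hidden_dim = 768
reward_head = torch.nn.Linear(hidden_dim, 1)       # linear head on top of the LM backbone
hidden_states = torch.randn(2, 10, hidden_dim)     # (batch, seq_len, hidden) from ONE forward pass

per_token_rewards = reward_head(hidden_states).squeeze(-1)  # (batch, seq_len): a scalar per position
full_response_reward = per_token_rewards[:, -1]             # reward at the last token of each response
# No need to re-run the model on each right-truncated prefix.
```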
@Healthphere 4 months ago
The font size is too small to read in VS Code. But great video!
@TechieIndia 5 months ago
A few questions:
1) Offline policy learning makes the training fast, but how would we have done it without offline policy learning? I mean, I'm not able to understand the difference between how we used to do it and why this offline approach is more efficient.
@chrisevans2241 26 days ago
GodSend Thank You!
@baomao139 13 days ago
is there a place to download the slides?
@umarjamilai 13 days ago
My GitHub repo
@tubercn 11 months ago
💯
@wongdope4147 10 months ago
What a treasure of a creator!!!!!!!!
@umarjamilai 10 months ago
Thank you for your support, let's connect on LinkedIn.
@Gasa7655 11 months ago
DPO Please
@umarjamilai 9 months ago
Done: kzbin.info/www/bejne/nqeqkmiDl8ZnmZo
@alexandrepeccaud9870 11 months ago
This is great
@IsmailNajib-pf9ty 1 month ago
What about TRPO?
@pranavk6788 10 months ago
Can you please cover V-JEPA by Meta AI next? Both theory and code
@EsmailAtta 10 months ago
Can you make a video coding the diffusion transformer from scratch, as always, please?
@dhanesh123us 10 months ago
These videos are amazing @umar jamil. This is fairly complex theory that you have dug into and explained in simple terms - hats off. Your video inspired me to take up a Coursera course on RL. Thanks a ton. A few basic queries though:
1. My understanding is that the theta parameters in the PPO algo are all the model parameters? So we are recalibrating the LLM in some sense.
2. Is the reward model pre-defined?
3. Also, how does temperature play a role in this whole setup?
@goelnikhils 1 day ago
Damn good
@vasusachdeva3413 13 days ago
@vardhan254 11 months ago
LETS GOOOOO
@esramuab1021 9 months ago
Why don't you explain in Arabic? We Arabs need Arabic resources; the English ones aren't enough!
@ehsanzain5999 7 months ago
Because learning in English is better in general. If you need anything, I can answer you.
@kevon217 9 months ago
“drunk cat” model 😂
@davehudson5214 10 months ago
'Promosm'
@Bearsteak_sea 1 month ago
Thanks!
@sbansal23 23 days ago
Thanks!
@xugefu 20 days ago
Thanks!