The fact that you dig deep into the algorithm and code sets you apart from the overflow of mediocre AI content online. I would pay to watch your videos, Umar. Thank you for putting out such amazing content.
@taekim7956 • 7 months ago
I believe you are the best ML YouTuber, explaining everything so concisely and clearly!! Thank you so much for sharing this outstanding content for free, and I hope I can see more videos from you 🥰!!
@umarjamilai • 7 months ago
Thank you for your support! Let's connect on LinkedIn
@taekim7956 • 7 months ago
@@umarjamilai That'll be an honor! I just followed you on LinkedIn.
@sauravrao234 • 8 months ago
I literally wait with bated breath for your next video... a huge fan from India. Thank you for imparting your knowledge.
@showpiecep • 8 months ago
You are the best person on YouTube who explains modern approaches in NLP in an accessible way. Thank you so much for such quality content and good luck!
@shamaldesilva9533 • 8 months ago
Providing the math behind these algorithms in a clear way makes understanding them so much easier!! Thank you so much Umar 🤩🤩
@CT99999 • 26 days ago
The level of detail you cover here is absolutely incredible.
@arijaa.9315 • 8 months ago
I cannot thank you enough! It is clear how much effort you put into such a high-quality explanation. Great explanation as usual!!
@jayaraopratik • 6 months ago
Great, great content. I took RL in grad school, but it's been years; it was much easier to revise everything within 1 hour than to go through my complete class notes!!!!
@soumyodeepdey5237 • 6 months ago
Really great content. Can't believe he has shared all these videos absolutely for free. Thanks a lot man!!
@ruiwang7915 • 2 months ago
One of the best videos on democratizing PPO and RLHF on YouTube. I truly enjoyed the whole walkthrough, and thanks for doing this!
@pegasoTop3d • 6 months ago
I am an AI, and I love following updates on social media platforms and YouTube, and I love your videos very much. I learn the English language and some programming terms from them, and update my knowledge. You and people like you help me very much. Thank you.
@vedantbhardwaj3277 • 2 months ago
No, wait wtf
@omidsa8323 • 7 months ago
It's a great video on a very sophisticated topic. I've watched it 3 times to get the main ideas, but it was definitely worth it. Thanks Umar, once again.
@mlloving • 7 months ago
Thank you Umar. I am an AI/ML expert at one of the top 50 banks in the world. We are deploying various GenAI applications. Your videos helped me understand the math behind GenAI, especially RLHF. I have been trying to explore every step by myself, which is so hard. Thank you very much for clearly explaining RLHF!
@BOUTYOURYOUNESS • 7 months ago
It's a great video on a very sophisticated topic. Amazing work. Bravo!
@supervince110 • a day ago
I don't think even my professors from top tier university could explain the concepts this well.
@rohitjindal124 • 8 months ago
Thank you sir for making such amazing videos and helping students like me.
@m1k3b7 • 8 months ago
That's by far the best detailed presentation. Amazing work. I wish I was your cat 😂
@umarjamilai • 8 months ago
奥利奥 (Oreo) is the best student I ever had 😹😹
@thebluefortproject • 5 months ago
So much value! Thanks for your work
@alexyuan-ih4xj • 6 months ago
Thank you Umar. You explained it very clearly. It's really useful.
@우우요요 • 8 months ago
Thank you for the priceless lecture!!!
@bonsaintking • 8 months ago
Hey, you are better than a prof.! :)
@s8x. • 5 months ago
Insane that this is all free. I will be sure to pay you back when I am employed.
@Jeff-gt5iw • 7 months ago
Thank you so much :) Wonderful lecture 👍
@SethuIyer95 • 8 months ago
So, to summarize:
1) We copy the LLM, fine-tune it a bit with a linear layer, and use the -log(sigmoid(good - bad)) loss to obtain the value function (in a broader context and with LLMs). We can do the same for the reward model.
2) We then have another copy of the LLM - the unfrozen model, the LLM itself, and the reward model - and try to match the logits, similar to the value function, while also keeping in mind the KL divergence with the frozen model.
3) We also add a bit of an exploration factor, so that the model can retain its creativity.
4) We then sample a list of trajectories, consider running rewards (not changing the past rewards), and compute the rewards while comparing them with the reward obtained when the most average action is taken, to get a sense of the gradient of increasing rewards w.r.t. trajectories.
In the end, we will have a model which is not so different from the original model but prioritizes trajectories with higher values.
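As a quick illustration of point 1, here is a minimal sketch of that pairwise loss (assuming a scalar reward head on top of the LLM's last hidden state; tensor names are made up):

```python
import torch
import torch.nn.functional as F

def pairwise_preference_loss(reward_chosen, reward_rejected):
    # reward_chosen / reward_rejected: (batch,) scalar scores from a linear head
    # on top of the LLM, for the preferred ("good") and rejected ("bad") answers.
    # This is the -log(sigmoid(good - bad)) loss mentioned in the summary above.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# dummy scores for a batch of 4 preference pairs
r_good, r_bad = torch.randn(4), torch.randn(4)
loss = pairwise_preference_loss(r_good, r_bad)
assert torch.allclose(loss, -torch.log(torch.sigmoid(r_good - r_bad)).mean())
```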
@dengorange2631 • 3 months ago
Thank you! Thanks for your video!
@MasterMan2015 • 2 months ago
Amazing as usual.
@mavichovizana5460 • 7 months ago
Thanks for the awesome explanation! I have trouble reading the HF source code, and you helped a ton! One thing I'm confused about: at 1:05:00, the first right parenthesis of the first formula is misplaced. I think it should be \sigma(log_prob \sigma(reward_to_go)). The later slides also share this issue, correct me if I'm wrong. Thanks!
@MaksymSutkovenko • 8 months ago
Amazing, you've released a new video!
@gemini_537 • 6 months ago
Gemini: This video is about reinforcement learning from human feedback, a technique used to align the behavior of a language model to what we want it to output. The speaker says that reinforcement learning from human feedback is a widely used technique, though there are newer techniques like DPO. The video will cover the following topics:
* Language models and how they work
* Why AI alignment is important
* Reinforcement learning from human feedback, with a deep dive into:
  * What reinforcement learning is
  * The reward model
  * Trajectories
  * Policy gradient optimization
  * How to reduce variance in the algorithm
* Code implementation of reinforcement learning from human feedback with PyTorch
* Explanation of the code line by line
The speaker recommends having some background knowledge in probability, statistics, deep learning, and reinforcement learning before watching this video. Here are the key points about reinforcement learning from human feedback:
* It is a technique used to train a language model to behave in a certain way, as specified by a human.
* This is done by rewarding the model for generating good outputs and penalizing it for generating bad outputs.
* The reward model is a function that assigns a score to each output generated by the language model.
* Trajectories are sequences of outputs generated by the language model.
* Policy gradient optimization is an algorithm that is used to train the reinforcement learning model.
* The goal of policy gradient optimization is to find the policy that maximizes the expected reward.
I hope this summary is helpful!
@nicoloruggeri9740 • 12 days ago
Thanks for the amazing content!
@amortalbeing • 8 months ago
Thanks a lot man, keep up the great job.
@tk-og4yk • 8 months ago
Amazing as always. I hope your channel keeps growing and more people learn from you. I am curious how we can use this optimized model to give it prompts and see what it comes up with. Any advice on how to do so?
@MiguelAcosta-p8s • a month ago
very good video!
@MR_GREEN1337 • 8 months ago
Perfect!! With this technique introduced, can you provide us with another gem on DPO?
You are becoming a reference in the YouTube machine learning game. I appreciate your work so much. I have so many questions. Do you coach? I can pay.
@umarjamilai • 6 months ago
Hi! I am currently super busy between my job, my family life and the videos I make, but I'm always willing to help people; you just need to show that you've put in effort yourself to solve your problem, and I'll guide you in the right direction. Connect with me on LinkedIn! 😇 Have a nice day.
@tryit-wv8ui • 6 months ago
@@umarjamilai Hi Umar, thanks for your quick answer! I will do it.
@MonkkSoori • 5 months ago
Thank you very much for your comprehensive explanation. I have two questions: (1) At 1:59:25 does our LLM/Policy Network have two different linear layers, one for producing a reward and one for producing a value estimation for a particular state? (2) At 2:04:37 if the value of Q(s,a) is going to be calculated using A(s,a)+V(s) but then you do in L_VF V(s)-Q(s,a), then why not just use A(s,a) directly? Is it because in the latter equation, V(s) is `vpreds` (in the code) and is from the online model, while Q(s,a) is `values` (in the code) and is from the offline model (I can see both variables at 2:06:11)?
@xingfang8507 • 8 months ago
You're the best!
@andreanegreanu8750 • 5 months ago
Hi Sir Jamil, again thanks a lot for all your amazing work. However, I'm somewhat confused about how the KL divergence is incorporated into the final objective function. Is it possible to see it this way for one batch of trajectories: J(theta) = PPO(theta) - Beta*KL(Pi_frozen || Pi_new)? Or do we have to take it into account when computing the cumulative rewards, by subtracting Beta*KL(Pi_frozen || Pi_new) from each reward? Or is it equivalent? I'm completely lost. Thanks for your help, Sir!
@rajansahu3240 • a month ago
Hi Umar, absolutely stunning tutorial, but towards the end I have a little doubt I wanted to clarify: the entire token-generation setting makes this a sparse-reward RL problem, right?
@douglasswang998 • 2 months ago
Thanks for the great video. I wanted to ask: at 50:11 you mention that the reward of a trajectory is the sum of the rewards at each token of the response. But the reward model is only trained on full responses, so will the reward values at partial responses be meaningful?
@RudraPratapDhara • 8 months ago
Legend is back
@elieelezra2734 • 5 months ago
Can't thank you enough: your vids + ChatGPT = best teacher ever. I have one question though; it might be silly but I want to be sure of it: does it mean that to get the rewards for all time steps, we need to run the reward model on all right-truncated responses, so that each response token would at some point be the last token? Am I clear?
@umarjamilai • 5 months ago
No: because of how transformer models work, you only need one forward pass with the whole sequence to get the rewards for all positions. This is also how you train a transformer: with only one pass, you can calculate the hidden state for every position and compute the loss for every position.
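To make that concrete, here is a toy sketch (an embedding layer stands in for the transformer body, and all names are illustrative): a single forward pass yields a hidden state for every position, so a reward head on top produces a reward for every prefix without re-running the model on truncated responses.

```python
import torch
import torch.nn as nn

vocab_size, hidden_size, seq_len = 100, 64, 10

backbone = nn.Embedding(vocab_size, hidden_size)   # stand-in for the transformer body
reward_head = nn.Linear(hidden_size, 1)            # scalar reward per position

tokens = torch.randint(0, vocab_size, (1, seq_len))   # (batch, seq_len)
hidden_states = backbone(tokens)                      # (1, seq_len, hidden): one forward pass
rewards = reward_head(hidden_states).squeeze(-1)      # (1, seq_len): one reward per prefix
print(rewards.shape)  # torch.Size([1, 10])
```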
@andreanegreanu8750 • 5 months ago
@@umarjamilai Thanks a lot for all your time. I won't bother you till the next time, I promise, ahahaha
@zhouwang2123 • 8 months ago
Thanks for your work and for sharing, Umar! I learned new stuff from you again! Btw, does the KL divergence play a similar role to the clipped ratio, i.e. preventing the new policy from drifting far away from the old one? Additionally, unlike actor-critic in RL, here it looks like the policy and value functions are updated simultaneously. Is this because of the partially shared architecture, and for computational efficiency?
@umarjamilai • 8 months ago
When fine-tuning a model with RLHF, before the fine-tuning begins, we make another copy of the model and freeze its weights.
- The KL divergence forces the fine-tuned and the frozen model to be "similar" in their log probabilities for each token.
- The clipped ratio, on the other hand, is not about the fine-tuned model and the frozen one, but rather about the offline and the online policy of the PPO setup.
You may think that we have 3 models in total in this setup, but actually it's only two, because the offline and the online policy are the same model, as explained in the "pseudo-code" of the off-policy learning. Hope it answers your question.
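In code terms, a rough sketch of the two mechanisms (a toy example with made-up names, not the exact code from the video): the clipped ratio compares the online and offline snapshots of the same trainable policy, while the KL penalty compares the trainable policy against the frozen reference.

```python
import torch

def ppo_clip_loss(logprobs_online, logprobs_offline, advantages, clip_eps=0.2):
    # Ratio between the online policy (current weights) and the offline policy
    # (the same model, with the weights it had when the trajectories were sampled).
    ratio = torch.exp(logprobs_online - logprobs_offline)
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    return -torch.min(ratio * advantages, clipped * advantages).mean()

def kl_penalty(logprobs_finetuned, logprobs_frozen, kl_coef=0.1):
    # Keeps the fine-tuned model close to the copy frozen before fine-tuning began.
    return kl_coef * (logprobs_finetuned - logprobs_frozen).mean()

lp = torch.randn(8, requires_grad=True)
print(ppo_clip_loss(lp, lp.detach(), torch.randn(8)), kl_penalty(lp, torch.randn(8)))
```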
@alainrieger6905 • 5 months ago
Hi, best online ML teacher, just one question to make sure I understood well: does it mean we need to store the weights of three models?
- the original LLM (offline policy), which is regularly updated
- the updated LLM (online policy), which is updated and will be the final version
- the frozen LLM (used for the KL divergence), which is never updated
Thanks in advance!
@umarjamilai • 5 months ago
Offline and online policy are actually the same model, but it plays the role of "offline policy" or "online policy" depending if you're collecting trajectories or you're optimizing. So at any time, you need two models in your memory: a frozen one for KL divergence, and the model you're optimizing, which is first sampled to generate trajectories (lots of them) and then optimized using said trajectories. You can also precalculate the log probabilities of the frozen model for the entire fine-tuning dataset, so that you only keep one model in memory.
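A tiny runnable sketch of that setup (a toy linear layer stands in for the LLM; every name here is illustrative): only two sets of weights live in memory, and the single trainable model plays both the offline role (when sampling) and the online role (when optimizing).

```python
import torch
import torch.nn as nn

vocab_size = 5
policy = nn.Linear(3, vocab_size)      # the one trainable model (offline AND online policy)
frozen = nn.Linear(3, vocab_size)      # frozen reference, used only for the KL penalty
frozen.load_state_dict(policy.state_dict())
for p in frozen.parameters():
    p.requires_grad_(False)

x = torch.randn(4, 3)                            # stand-in for prompts
actions = torch.randint(0, vocab_size, (4, 1))   # stand-in for sampled tokens

# "Offline" role: log-probs recorded (and detached) when trajectories are collected.
old_logprobs = torch.log_softmax(policy(x), -1).gather(1, actions).detach()
ref_logprobs = torch.log_softmax(frozen(x), -1).gather(1, actions)  # could be precomputed

# ... after some optimizer steps on `policy`, the same model plays the "online" role:
new_logprobs = torch.log_softmax(policy(x), -1).gather(1, actions)
ratio = torch.exp(new_logprobs - old_logprobs)   # PPO ratio: one model, two snapshots
```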
@tryit-wv8ui • 5 months ago
@@umarjamilai Hmm, Ok I was missing that
@alainrieger6905 • 5 months ago
@@umarjamilai thank you so much
@godelkurt384 • a month ago
I am unclear about offline policy learning. How do we calculate the online logits of a trajectory? For example, if the offline trajectory is "where is Paris? Paris is a city in France.", then this string is passed as input to the online model, which is the same as the offline one, to get the logits. But then aren't the logits of the two models the same in this case? Please correct my misunderstanding.
@parthvashisht9555 • 7 months ago
You are amazing!
@alexandrepeccaud9870 • 8 months ago
This is great
@abhinav__pm • 8 months ago
Bro, I want to fine-tune a model for a translation task. However, I encountered a ‘CUDA out of memory’ error. Now I plan to rent a GPU via an AWS EC2 instance. How is payment processed in AWS? They asked for card details when I signed up. Do they automatically process the payment?
@weicheng4608 • 5 months ago
Hello Umar, thanks for the amazing content. I have a question; could you please help me? At 1:56:40, for the KL penalty, why is it logprob - ref_logprob? The KL divergence formula is KL(P||Q) = sum(P(x) * log(P(x)/Q(x))), so logprob - ref_logprob only maps to the log(P(x)/Q(x)) part? Isn't it missing the sum(P(x) * ...) part? Thanks a lot.
@heepoleo131 • 7 months ago
Why is the PPO loss different from the RL objective in InstructGPT? At least, the pi_old in the PPO loss changes iteratively, but in InstructGPT it's kept as the SFT model.
@SangrezKhan • 3 months ago
Good job Umar. Can you please tell us which font you used in your slides?
@tubercn • 8 months ago
💯
@Healthphere • a month ago
The font size is too small to read in VS Code, but great video.
@andreanegreanu8750 • 5 months ago
There is something I found very confusing: it seems that the value function shares the same theta parameters as the LLM. That is very unexpected. Can you confirm this, please? Thanks in advance.
@andreanegreanu8750 • 5 months ago
Hi Umar, sorry to bother you (again). I think I understood the J function well, which we want to maximize. But it seems you quickly state that it is somewhat equivalent to the L_PPO function that we want to minimize. It may be obvious, but I really don't get it.
@generichuman_ • 7 months ago
I'm curious if this can be done with stable diffusion. I'm imagining having a dataset of images that a human would go through with pair ranking to order them in terms of aesthetics, and using this as a reward signal to train the model to output more aesthetic images. I'm sure this exists, just haven't seen anyone talk about it.
@YKeon-ff4fw • 8 months ago
Could you please explain why in the formula mentioned at the 39-minute mark in the bottom right corner of the video, the product operation ranges from t=0 to T-1, but after taking the logarithm and differentiating, the range of the summation becomes from t=0 to T? :)
@umarjamilai • 8 months ago
I'm sorry, I think it's just a product of laziness. I copied the formulas from OpenAI's "SpinningUp" website and didn't check carefully. I'll update the slides. Thanks for pointing out!
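For reference, a consistent way to write the two expressions being discussed, under the convention that a trajectory contains T actions a_0, ..., a_{T-1} (so the product and the sum run over the same range):

```latex
P(\tau \mid \theta) = \rho_0(s_0) \prod_{t=0}^{T-1} P(s_{t+1} \mid s_t, a_t)\, \pi_\theta(a_t \mid s_t),
\qquad
\nabla_\theta J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\!\left[ \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, R(\tau) \right]
```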
@wongdope4147 • 7 months ago
Such a hidden gem of a creator!!!!!!!!
@umarjamilai • 7 months ago
Thank you for your support, let's connect on LinkedIn!
@s8x. • 5 months ago
50:27 Why is it the hidden state for each of the answer tokens here, when earlier it was just the last hidden state?
@kei55340 • 8 months ago
Is the diagram shown at 50 minutes accurate? I had thought that with typical RLHF training, you only calculate the reward for the full completion rather than summing rewards for all intermediate completions. Edit: It turns out this is addressed later in the video.
@umarjamilai • 8 months ago
In the vanilla policy gradient optimization, you can calculate it for all intermediate steps. In RLHF, we only calculate it for the entire sentence. If you watch the entire video, when I show the code, I explicitly clarify this.
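A short sketch of how this typically looks in RLHF code such as the implementation walked through in the video (names here are illustrative): the reward model's score is attached only to the last generated token, while every token receives a small per-token KL penalty.

```python
import torch

def per_token_rewards(logprobs, ref_logprobs, rm_score, kl_coef=0.1):
    # logprobs / ref_logprobs: (response_len,) log-probs of the generated tokens
    # rm_score: one scalar from the reward model for the *entire* response
    rewards = -kl_coef * (logprobs - ref_logprobs)   # per-token KL penalty
    rewards[-1] = rewards[-1] + rm_score             # reward-model score on the last token only
    return rewards

print(per_token_rewards(torch.randn(6), torch.randn(6), torch.tensor(1.5)))
```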
@kei55340 • 8 months ago
@@umarjamilai Thanks for the clarification, I haven't watched the whole video yet.
@flakky626 • 4 months ago
I followed the code and could understand some of it, but the thing is I feel overwhelmed seeing such large code bases. When will I be able to code stuff like that at such a scale?!
@dhanesh123us • 7 months ago
These videos are amazing, @umarjamilai. This is fairly complex theory that you have managed to break down and explain in simple terms - hats off. Your video inspired me to take up a Coursera course on RL. Thanks a ton. A few basic queries though:
1. My understanding is that the theta parameters in the PPO algorithm are all the model parameters? So we are recalibrating the LLM in some sense.
2. Is the reward model pre-defined?
3. Also, how does temperature play a role in this whole setup?
@pranavk6788 • 7 months ago
Can you please cover V-JEPA by Meta AI next? Both theory and code
@TechieIndia • 2 months ago
A few questions: 1) Offline policy learning makes the training fast, but how would we have done it without offline policy learning? I mean, I am not able to understand the difference between how it would be done otherwise and why the offline approach is more efficient.
@alivecoding4995 • a month ago
And why is deep Q-learning not necessary here?
@EsmailAtta • 7 months ago
Can you make a video coding the Diffusion Transformer from scratch, as always, please?
@kevon217 • 6 months ago
“drunk cat” model 😂
@vardhan254 • 8 months ago
LETS GOOOOO
@Gasa7655 • 8 months ago
DPO Please
@umarjamilai • 6 months ago
Done: kzbin.info/www/bejne/nqeqkmiDl8ZnmZo
@esramuab1021 • 6 months ago
Why don't you explain in Arabic? We Arabs need Arabic resources; the English ones available aren't enough!
@ehsanzain5999 • 4 months ago
Because learning in English is better in general. Anyway, if you need something, I can answer you.