Thank you for the clear explanation! But next time, please use screenshots of the actual formulas; that way it is much more readable.
@sordesderisor 2 years ago
If you have also read the TRPO and PPO papers, this video provides the perfect concise summary of PPO!
@alph4b3th 1 year ago
Sensational! Dude, you explain in such a simple way! I was wondering what the difference was between deep Q-Learning and PPO, and I was looking for exactly a video like this. Congratulations on your great didactic way of explaining the basic mathematical concepts and abstracting them to a more intuitive approach; you are really very good at this! Excellent video!
@GnuSnu 1 year ago
4:25 "let me write it real quick" 💀💀
@James-qv1lh 1 year ago
Insanely good video! Simple and straight to the point - thanks so much! :)
@sayyidj6406 10 months ago
I wish I had known about this channel sooner. Thanks for the video.
@marcotroster8247 1 year ago
Just evaluate the derivative of the policy gradient objective; only then can you really understand why PPO works. PPO adds the policy ratio as a factor to the derivative of the vanilla policy gradient. The clipping effectively erases samples with bad policy ratios from the dataset, because the derivative of a constant is zero. You also need to understand, from advantage actor-critic, that the sign of the advantage determines whether the action probabilities increase or decrease: given the same training data, a positive advantage pushes the probability of the taken action up, and a negative advantage pushes it down. The min always picks the clipped objective for bad policy ratios, so those gradients vanish; otherwise the two terms are identical and the update only moves the policy ratio within the epsilon bound. And because the policy gradient is multiplied by the policy ratio, this works as expected and gives PPO its stability.
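To make the mechanism above concrete, here is a minimal NumPy sketch of the per-sample clipped surrogate objective (the function name and epsilon value are just illustrative, not from the video; the formula itself is the one from the PPO paper). When the ratio falls on the wrong side of the epsilon bound, the min selects the clipped, constant branch, so the gradient with respect to the policy parameters is zero and that sample effectively drops out of the update:

```python
import numpy as np

def clipped_surrogate(ratio, advantage, eps=0.2):
    """Per-sample PPO clipped objective (to be maximized).

    ratio     : pi_new(a|s) / pi_old(a|s)
    advantage : estimated advantage for the taken action
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # For "bad" ratios the min picks the clipped (constant) term,
    # so its derivative w.r.t. the policy parameters is zero.
    return np.minimum(unclipped, clipped)
```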
@carloscampo9119 1 year ago
That was very, very well done. Thank you for the clear explanation.
@alexkonopatski429 2 years ago
I really love your vids and how you explain things! Could you please also make a video about TRPO? In my opinion it is a really complex thing to understand, and the lack of available resources doesn't make the situation any better. Therefore I, and I think a lot of others, would be really glad to have a good explanation. Thanks in advance!
@crwhhx 1 month ago
When you say DQN is offline, were you trying to say it is off-policy?
@boldizsarszabo883 2 years ago
This video was super helpful and informative! Thank you so much for your effort!
@ivanwong863 3 years ago
DQN is not an offline method, is it?
@EdanMeyer 3 years ago
My bad, I meant to say it's an off-policy method; Q-learning performs very poorly in an offline setting.
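To make the off-policy vs. offline distinction concrete, here is a hypothetical tabular Q-learning step (a toy sketch, not code from the video). It is off-policy because the target bootstraps from the greedy action regardless of which behavior policy collected the transition; "offline" would additionally mean the transitions come from a fixed dataset with no further environment interaction, which is the setting where plain Q-learning tends to do poorly:

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step on a transition (s, a, r, s_next)."""
    # Off-policy: the target uses the max over next actions (the greedy policy),
    # no matter which exploratory policy actually generated the data.
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```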
@canoksuzoglu6540 3 months ago
Thanks, dude. That was a perfect explanation.
@datonefaridze1503 2 years ago
Thank you for your effort, I really appreciate it. You are putting in this work so we can learn; thanks!
@hemanthvemuluri9997 1 year ago
For DQN you mean an off-policy method, right? DQN is not an offline method.