Thanks for watching! If you think I deserve it, please consider hitting that like button, as it will help spread this channel. More breakdowns to come!
@Punch_Card · 4 days ago
What are the quiz answers?
@арсланвалеев-д9у · 10 months ago
Hi! Great video! Could you answer my question about training the policy? This happens at 10:00: why are the action probabilities obtained here different from the probs recorded while gathering data? I think we haven't changed the policy network before this step. So, if we haven't changed the network yet, at 10:08 we should have gotten ratio == 1 on every step(
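(For readers with the same question: a minimal PyTorch-style sketch, with illustrative names policy, states, actions, old_log_probs, of how the PPO ratio is formed. Before the first optimizer step the weights are indeed unchanged, so the ratio is exactly 1 on the first update epoch; it only drifts away from 1 on later epochs over the same batch, after the weights have moved.)

import torch

# old_log_probs were recorded once at data-collection time (no gradients).
# Before the first optimizer step the weights are unchanged, so
# new_log_probs == old_log_probs and the ratio is 1 for every sample.
def ppo_ratio(policy, states, actions, old_log_probs):
    dist = torch.distributions.Categorical(logits=policy(states))
    new_log_probs = dist.log_prob(actions)
    return torch.exp(new_log_probs - old_log_probs)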
@jyotsnachoudhary8999 · a month ago
Finally, a great video that explains the entire training process so clearly and effectively! Thank you so much for this. Would be great if you could create a video on Direct Preference Optimization (DPO) as well. :)
@PatrickConnor-i2q · a year ago
I like the clarity that your video provides. Thanks for this primer. A couple of things, though, were a bit unclear, and perhaps you could elaborate on them here in the comments.
- It wasn't obvious to me how/why you would submit all of the states at once (to either network) and update with an average loss, as opposed to training on each state independently. I get that we have an episode of related/dependent states here; maybe that's why we use the average instead of the directly associated discounted future reward?
- Secondly, in your initial data-sampling stage you collected outputs from the policy network. During the training phase it looks like you're sampling again, but your values are different. How is this possible unless your network has changed somehow? Maybe you're using dropout or something like that?
Forgive the questions -- I'm just learning about this methodology for the first time.
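(On the first point: a minimal sketch under a standard PPO-style setup, with hypothetical names value_net, states, returns, of how a whole episode of states is pushed through in one batch. Each state keeps its own discounted-return target; the mean only combines the per-state losses into the single scalar the optimizer needs.)

import torch
import torch.nn as nn

def value_update(value_net, optimizer, states, returns):
    # One forward pass over the whole batch of states from the episode.
    predicted = value_net(states).squeeze(-1)
    # Every state is still paired with its own target; mse_loss averages
    # the per-state squared errors into one scalar for backprop.
    loss = nn.functional.mse_loss(predicted, returns)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()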
@ТимИсаков-ц7щ · 6 months ago
I'm also interested in the answer to the second question.
@cauamp · 2 months ago
@@ТимИсаков-ц7щ Any answers?
@martinleykauf6857 · 4 months ago
Hi! I'm currently writing my thesis and using PPO in my project. Your video was a great help in getting a more intuitive understanding of the algorithm! Keep it up man, very very helpful.
@vastabyss6496 · a year ago
What's the purpose of having a separate policy network and value network? Wouldn't the value network already give you the best move in a given state, since we can simply select the action the value network predicts will have the highest future reward?
@yeeehees2973 · 9 months ago
More to do with balancing exploration/exploitation: simply picking the maximum Q-value from the value network yields suboptimal results due to limited exploration. Alternatively, using only a policy network would yield too-noisy updates, resulting in unstable training.
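(A minimal sketch of the contrast described above, with hypothetical modules q_net and policy_net: greedily taking the arg-max of a value estimate never explores, while sampling from a policy distribution explores by construction.)

import torch

# Greedy action from a Q/value network: deterministic, no exploration.
def act_greedy(q_net, state):
    return torch.argmax(q_net(state)).item()

# Sampling from a policy network: stochastic, so exploration is built in.
def act_stochastic(policy_net, state):
    dist = torch.distributions.Categorical(logits=policy_net(state))
    return dist.sample().item()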
@sudiptasarkar4438 · 9 months ago
@@yeeehees2973 I feel that this video is misleading at 02:06. Previously I thought the value function's objective is to estimate the maximum reward value of the current state, but this guy is saying otherwise.
@yeeehees2973 · 9 months ago
@@sudiptasarkar4438 The Q-values inherently try to maximize future rewards, so the Q-value of being in a certain state can be interpreted as the maximum future reward given that state.
@patrickmann4122 · 8 months ago
It helps with something called “baselining”, which is a variance-reduction technique to improve policy gradients.
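(A minimal sketch of the baselining idea, with illustrative tensor arguments: subtracting the critic's value estimate from the sampled return leaves the expected gradient unchanged but reduces its variance.)

import torch

def baselined_policy_loss(log_probs, returns, values):
    # Advantage = sampled return minus the state-value baseline; detach the
    # baseline so policy gradients do not flow into the value network here.
    advantages = returns - values.detach()
    return -(log_probs * advantages).mean()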
@好了-t4d · 6 months ago
That’s because this kind of algorithm deals with continuous actions, unlike DQN. That’s the key point of combining policy gradients with Q-learning, with the Q-learning part being the value network.
@srivatsa1193 · a year ago
I've really enjoyed this series so far. Great work! The world needs more passionate teachers like yourself. Cheers!
@CodeEmporium · a year ago
Thanks so much for the kind words! I really appreciate it :)
@vlknmrt · 2 months ago
Thanks, it is really a very explanatory video!
@swagatochakraborty2583 · 9 months ago
Great presentation. One question: why is the policy network a separate network from the value network? It seems like the probability of the actions should be based on estimating the expected reward values. I think in my Coursera course on reinforcement learning they were using the same network and simply copying over the weights from one to the other, so they were essentially time-shifted versions of the same network and trained just once.
@ashishbhong5901 · a year ago
Good presentation and breakdown of concepts. Liked your video.
@burnytech · 5 months ago
Great stuff mate
@ZhechengLi-wk8gy · a year ago
I like your channel very much; looking forward to the coding part of RL. 😀
@vivian.deeplearning · a month ago
Didn't explain why there's clipping, a min, or anything else in the loss.
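(Since several comments ask about this: a minimal sketch of the standard PPO-clip surrogate, assuming precomputed ratio and advantages tensors. The clip caps how far the probability ratio may move from 1, and the element-wise min keeps the pessimistic bound, so the update gains nothing by pushing the ratio outside [1-ε, 1+ε].)

import torch

def ppo_clip_loss(ratio, advantages, eps=0.2):
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    # Minimize the negative of the element-wise minimum (the lower bound).
    return -torch.min(unclipped, clipped).mean()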
@iliyafarahani-y3c · a month ago
Hi, thanks for your great video, but one question: do you mean that to calculate the loss for the policy network we need to get the probabilities for the actions two times and then calculate it?
@borneoland-hk2il · 3 months ago
Is the PPO you explained PPO-Penalty or PPO-Clip, and what is the difference?
@2_Tou · 7 months ago
I think the calculation shown at 5:45 is not the advantage. The advantage of an action is calculated by taking the average value of all actions in that state and finding the difference between that average and the value of the action you are interested in. That calculation looks more like an MC target to me. Please point out if I made a mistake, because I always do...
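(A minimal sketch, with illustrative names, of the two quantities being contrasted in this comment: the discounted Monte Carlo return G_t, and a simple advantage estimate that subtracts a learned baseline from it.)

import torch

def mc_returns_and_advantages(rewards, values, gamma=0.99):
    # Discounted Monte Carlo return G_t for each step of one episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    # A common advantage estimate: return minus the critic's baseline V(s_t).
    advantages = returns - values.detach()
    return returns, advantages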
@ericgonzales5057 · 10 months ago
WHERE DID YOU LEARN THIS?!??! PLEASE ANSWER
@victoruzondu6625 · 9 months ago
What are VF updates, and how do we get the value for our clipped ratio? You didn't seem to explain them. I could only tell the last quiz answer is B because the other options relate to the policy network, not the value network.
@ns-eb7dw · 2 months ago
You define the value function as essentially being the Q function (i.e. a binary function that takes state and action arguments), and you say A(s,a) = R_t - Q(s,a), where R_t is the total discounted reward from step t onwards. Many other sources define the value function as unary, i.e. it only takes a state argument, and say that A(s,a) = R_t - V(s). Can you comment on this difference?
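(Not an answer on the video's behalf, but for reference, the standard definitions this comment refers to, as a small LaTeX sketch; R_t denotes the sampled discounted return from step t.)

V(s) = \mathbb{E}_{a \sim \pi}\left[ Q(s, a) \right], \qquad A(s, a) = Q(s, a) - V(s)

% With the sampled return R_t standing in for Q(s_t, a_t), the common estimator is
\hat{A}_t = R_t - V(s_t)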
@inderjeetsingh2367 · a year ago
Thanks for sharing 🙏
@CodeEmporium · a year ago
My pleasure! Thank you for watching
@OPASNIY_KIRPI4 · a year ago
Please explain how you can apply backpropagation through the network using just a single loss number. As far as I understand, an input vector and a target vector are needed to train a neural network. I would be very grateful for an explanation.
@CodeEmporium · a year ago
The single loss is “back propagated” through the network to compute the gradient of the loss with respect to each parameter of the network. This gradient is later used by an optimizer algorithm (like gradient descent) to update the neural network parameters, effectively “learning”. I have a video coming out on this tomorrow explaining back propagation in my new playlist “Deep Learning 101”. So do keep an eye out for this.
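(A minimal PyTorch sketch of that flow with an arbitrary scalar loss, not the video's code: the single number is enough because autograd tracks how every parameter contributed to it.)

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

states = torch.randn(8, 4)           # a batch of inputs
loss = net(states).pow(2).mean()     # any scalar loss works here

optimizer.zero_grad()
loss.backward()    # one scalar -> a gradient for every parameter in the net
optimizer.step()   # the optimizer nudges each parameter along its gradient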
@OPASNIY_KIRPI4 · a year ago
Thanks for the answer! I'm waiting for a video on this topic.
@obieda_ananbeh · a year ago
Thank you!
@footube3 · 11 months ago
Could you please explain what up, down, left and right signify? In which data structure are we going up, down, left or right?
@CodeEmporium · 11 months ago
Up, down, left and right are individual actions that an agent can possibly take. You could store these in an “enum” and sample a random action from it.
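(A minimal Python sketch of that suggestion; the grid-world action names are just illustrative.)

import random
from enum import Enum

class Action(Enum):
    UP = 0
    DOWN = 1
    LEFT = 2
    RIGHT = 3

# Sample a random action, e.g. while collecting rollouts.
action = random.choice(list(Action))
print(action)  # e.g. Action.LEFT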
@borneoland-hk2il · 4 months ago
Please make Soft Actor-Critic videos. Is it policy-based, value-function-based, or neither/both?
@kuteron307 · 4 days ago
I don't think you've explained the value network correctly. The output of this network should be a one-dimensional scalar value (e.g. a score of 0.92 for a certain state), rather than being as big as the action space. By having the output be a scalar, you get different results from this network than from the policy network, and it can then be trained on the reward function.
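(A minimal sketch of the convention described in this comment, assuming a discrete action space of size 4; the layer sizes are illustrative, not necessarily what the video implements.)

import torch.nn as nn

obs_dim, n_actions = 8, 4

# Policy network: one logit per action (softmax turns these into probabilities).
policy_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))

# Value network: a single scalar V(s) per state, not one output per action.
value_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))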
@pushkinarora5800 · 5 months ago
Q1: B, Q2: B, Q3: B
@diegosabajo2182 · 18 days ago
Quiz 1: B
@paull923 · a year ago
Great video! The quizzes especially are a good idea. B, B, B, I'd say.
@CodeEmporium · a year ago
Thanks so much! It’s fun making them too. I thought it would be a good way to engage. And yep the 3 Bs sound right to me too 😊
@zakariaabderrahmanesadelao3048 · a year ago
The answer is B.
@CodeEmporium · a year ago
Ding ding ding for Quiz 1!
@sujalsuri1109 · a month ago
Q1. B
@BboyDschafar · a year ago
FEEDBACK. Either from experts/teachers, or from the environment.
@sujalsuri1109 · a month ago
2. B
@0xabaki · 11 months ago
Haha, finally no one has done quiz time yet! I propose the following answers: 0) seeing the opportunity cost of an action is low; 1) A; 2) B; 3) D.
@sashayakubov6924 · 7 months ago
I didn't understand anything... apparently I'll need to ask ChatGPT for clarifications.