Function Approximation | Reinforcement Learning Part 5

19,415 views

Mutual Information

A day ago

Comments: 66
@JetzYT • 1 year ago
Thanks Duane, loving these videos. They're a big help for my group of undergrads who are interested in getting into RL research!
@Mutual_Information • 1 year ago
Oh that's awesome! Getting my vids into classrooms is the ideal case - thank you for passing it along
@hassaniftikhar5564 • 1 month ago
Best RL teacher ever.
@joeystenbeck6697 • 2 months ago
Deadly triad reminds me of CAP theorem from databases. "You can only keep two of consistency, availability, and partition tolerance." (Consistency = data is consistent between partitions, availability = data is able to be retrieved, partition tolerance = partitions that aren't turned off being able to uphold consistency and availability, even when another partition becomes unavailable.)

- Function approximation = availability bc we're able to have a model that we can pass inputs to and it generalizes across all input-output mappings,
- Off-policy training = partition tolerance bc the exploration policy and (attempted) optimal policy diverge and they're essentially two partitions that are trying to maintain coherent communication/maintain modularity so they can communicate,
- Bootstrapping = consistency bc we're trying to shortcut what our expected value is but we have to trade off exploring every possible path with sampling to get a good enough but not perfect expected value.

Admittedly I feel like I'm stretching a bit here but I feel like it fits somehow and I just haven't found the exact way yet. It feels like there has to be a foundation for them to be stable on and when all three of the deadly triad are present it's like that spiderman meme where they're pointing to each other for who will be the stable foundation. If none of those three are the foundation, then what is? 🤔

It feels like the only way is to invert flow and try to predict what oneself will do/predict how to scaffold intermediate reward, rather than try to calculate the final answer (and rather than having an algorithm that has an exploration policy based on its ability to predict the final reward rather than intermediate. I may be misunderstanding this though based on what you said about MCTS vs Q-learning(?). I'm not sure if the predicting how to scaffold part is equivalent to Q-learning. I'm still learning sorry haha.). I think that's pretty much what predictive coding in the brain is. Not sure how to break it down correctly into subproblem though so that we can do "while surprise.exists(): build". Maybe one thing is that humans have more punctuated phases of adjusting value and policy in wake and sleep. Wake = learn about environment, REM = learn about one's brain itself.

Curious if anyone has any thoughts on the CAP theorem comparison or any of the other stuff. Thanks so much for the great video(s)! They help me learn a lot and help really get to the essence of the concepts. And are really clear and concise. And are entertaining.
@Lukas-wm8dy • 1 year ago
Your explanations are brilliant, thanks for making these videos
@Mutual_Information • 1 year ago
Thanks Lukas, happy to do it
@siddharthbisht1287 • 1 year ago
Change the title to RL with DJ featuring Lake Moraine 😂😂😂😂. The green screen is actually really useful. Once again, grateful for these videos. You are making content that can be binge-watched with a notebook 😂😂😂😂
@Mutual_Information • 1 year ago
Thanks Siddharth - the show is a work in progress, and I've actually managed to pull off some progress :)
@fang-panglin7691 • 1 year ago
One of the best on YouTube! Thanks!
@neithane7262 • 9 months ago
Great playlist! It would have been cool to include the time each training run took.
@glowish1993 • 1 year ago
Your videos are so fricking good! Thank you for such quality content on YT; many of us appreciate it. I'm sure the channel will blow up in the future!!
@Mutual_Information • 1 year ago
I hope you're right. Thank you!
@RANDOMGUY-wz9ur • 1 year ago
Amazing series. Really appreciate your work!
@Mutual_Information • 1 year ago
Thanks guy!
@rr00676 • 9 months ago
These videos are great! I really did not like the formatting of Barto and Sutton (e.g. definitions in the middle of paragraphs), but you've done an awesome job of extracting and presenting the most valuable concepts.
@Mutual_Information • 9 months ago
Thank you for appreciating it! Barto and Sutton is a big bite, so I was intending to ease the digestion with these videos.
@Tehom1 • 1 year ago
"Who needs theorems when you've got hopes?" - words to live by.
@bean217 • 7 months ago
Amen
@aptxkok5242 • 1 year ago
Looking forward to the Part 6 video. Any idea when it will be out?
@Mutual_Information • 1 year ago
Working on it as we speak! I have a lot of non-YT stuff going on as well, so I've been delayed. Let's say... 3 weeks?
@iiiiii-w8h • 1 year ago
>tfw irl all data is spread across multiple Excel files throughout the company with no structure whatsoever.
@ericsung14 • 1 year ago
Thank you. It really helps me a lot.
@Mutual_Information • 1 year ago
Awesome, happy to hear it
@imanmossavat9383 • 1 year ago
I am waiting for your policy gradient video to use in my class! Are you going to release it any time soon?👀🙏
@Mutual_Information • 1 year ago
Yes, I've finished shooting it. Just in the editing phase now. It'll be posted in about a week.
@rewixx69420 • 1 year ago
finally :-)
@abdulrhmanaun • 7 months ago
Thank you so much
@zeeinabba4979 • 3 months ago
Oh my god, my head aches. Do I really have to understand everything deeply with no prior knowledge of RL? I'm just taking things as they are, not really understanding everything, and I'm worried.
@Mutual_Information • 3 months ago
No, that's a good point in fact. I don't remember the details either. Eventually you just abstract stuff away, e.g. "OK, function approx just groups states together and that's bad but you gotta do it"... stuff like that.
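A minimal sketch of that "groups states together" idea, using state aggregation, the simplest function approximator; this is illustrative code, not from the video, and the state/group counts are arbitrary:

```python
import numpy as np

# State aggregation: many states share one weight, so an update for any state
# in a group moves the value estimate of every state in that group.
n_states, n_groups = 1000, 10
w = np.zeros(n_groups)                     # one learned value per group of states

def group(s):
    """Map a state index (0..n_states-1) to its group index."""
    return s * n_groups // n_states

def v_hat(s):
    """Approximate value: every state in a group gets the same estimate."""
    return w[group(s)]

def td0_update(s, r, s_next, alpha=0.1, gamma=1.0):
    """Semi-gradient TD(0): the gradient w.r.t. w is an indicator for s's group."""
    w[group(s)] += alpha * (r + gamma * v_hat(s_next) - v_hat(s))
```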
@zeeinabba4979 • 3 months ago
@@Mutual_Information THANK YOU!!
@zenchiassassin283 • 1 year ago
The end looks like tabu search x)
@youtubeuser1794 • 1 year ago
Thanks!
@fergalhennessy775 • 1 month ago
ur my GOAT i hope u know :)
@datsplit2571 • 1 year ago
Thank you for the great explanations and animations! Helped me a lot with passing the Advanced Machine Learning course! (Passed with an 8, which is approximately an A in the US grade system.) Is there any way I can donate €5 to your PayPal? I wasn't able to do this through Patreon/YouTube as they both require a credit card, which I don't have. (Credit cards are not that common in the Netherlands, especially not as a student.)
@Mutual_Information • 1 year ago
That's very kind of you! I don't actually have PayPal, so I'm not sure how this transfer would work. But that's ok - there's no need! One thing that I would appreciate much more than the money is if you recommend this channel to someone in your class. Word of mouth is a big deal for a channel like this :)
@datsplit2571 • 1 year ago
@@Mutual_Information I already recommended your channel in the Teams channel of the university course :) at the start of 2023. I'll also share your channel with some friends of mine. Looking forward to Part 6!
@Mutual_Information • 1 year ago
@@datsplit2571 you're a hero! Thank you!!
@pauredonmunoz8221 • 1 year ago
Just 2 days before the exam😍😍
@mCoding • 1 year ago
That animation updating the estimates and showing the path the ball -- err "Car" -- took was spectacular. Great work as always!
@Mutual_Information • 1 year ago
Thank you my man! When reading the text, this was the example that convinced me it needed to be animated.
@noobtopro8699 • 4 months ago
Sir, can you provide the code for these classes? The theory is really great but I'm having trouble with the implementation. One more playlist, please!
@Mutual_Information • 4 months ago
Code links are in the description :) And another playlist, lol... I'm tired
@BohdanMushkevych • 1 year ago
Thank you for the great series! BTW, changing the background to completely dark makes it easier to concentrate on the content.
@Mutual_Information • 1 year ago
Yea, now that I've changed to the green screen, I think it's much better. We're a new channel now!
@mightymonke2527 • 1 year ago
You're really criminally underrated; you should have hundreds of thousands of views at least.
@Mutual_Information • 1 year ago
Ha thank you, means a lot to hear that. In my view, I still have a lot of wrinkles in what I'm producing. It's OK to not have a massive audience while I try to figure out how to make great videos. But eventually, they'll be really great and I think the attention will come then.
@danielawesome12 • 1 year ago
Can you help motivate the need for the proto points? You already had a complete encoding of the state with just 2 dimensions: (position, velocity). Encoding the state in 1200 dimensions seems like overparameterization/redundancy. I assume there's a practical reason such as "dividing up the state space into 1200 discretized regions then learning the optimal behavior per region" but I can't wrap my head around why that would be necessary. This confusion carries over into Part 6 where proto points come up again, but now we only have two.
@Mutual_Information • 1 year ago
You got me! A length-1200 feature vector is indeed overkill. What I'm doing is, I don't want the representational capacity of the parameterization I chose to be a limiting factor. So I go over the top and do effectively exactly what you describe: dividing up the state space into 1200 discretized regions and learning the value in each region, almost independently (but not *actually* independently). In practice, we'd take a lot more care to choose a parsimonious parameterization that would be more sample efficient (assuming we chose the parameterization wisely). But doing so requires machinery I'd rather not use, e.g. a neural network. By picking something simple, I was avoiding the headache of our more powerful tool, but you saw its ugly symptom.
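As a concrete sketch of that discretization, here is roughly what a normalized-radial-basis featurization of the mountain car state looks like. The grid resolution and kernel widths below are illustrative guesses, not the exact values in the linked code:

```python
import numpy as np

# Proto points: a grid of reference (position, velocity) pairs. A state's feature
# vector measures its closeness to every proto point, so each weight effectively
# stores a value for one small region of the state space.
pos_grid = np.linspace(-1.2, 0.6, 35)      # mountain car position range
vel_grid = np.linspace(-0.07, 0.07, 35)    # mountain car velocity range
protos = np.array([(p, v) for p in pos_grid for v in vel_grid])  # 35*35 = 1225 points
widths = np.array([0.1, 0.01])             # per-dimension kernel widths (assumed)

def features(pos, vel):
    """Normalized RBF features: 1225 nonnegative numbers that sum to 1."""
    d2 = (((protos - np.array([pos, vel])) / widths) ** 2).sum(axis=1)
    act = np.exp(-0.5 * d2)
    return act / act.sum()

def v_hat(pos, vel, w):
    """Linear value estimate from a weight vector w of length 1225."""
    return features(pos, vel) @ w
```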
@danielawesome12 • 1 year ago
Thanks for confirming! And with a speedy response time too! Thanks for making this series!!
@Mutual_Information • 1 year ago
@@danielawesome12 Happy to - love it when people check out the RL series
@bonettimauricio • 11 months ago
Thank you very much for sharing this amazing content. I have a question: I think the obvious choices for features in the mountain car example are distance and velocity. I don't understand why you (or the book that used tile coding) chose to use normalized radial basis functions to convert these 2 features into 1,225 (35²) features. My understanding of function approximation was that its main goal was to shrink a huge state space into a smaller one. I get the impression that this solution expands the state space.
@Mutual_Information • 11 months ago
The essential *information* is distance and velocity, but how you featurize that for a model is a different story. Let's say we didn't use an NRB; what's the alternative? E.g. if you do something linear, you'll quickly see the doable actions over the state space can never produce a sequence that'll get out of the valley.
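Whatever feature map is chosen (raw position and velocity, a polynomial expansion, or the NRB grid), it plugs into the same linear learner, and the feature map alone fixes what that learner can represent. A rough sketch of such a learner (episodic semi-gradient SARSA, in the style of Sutton & Barto's mountain car chapter) is below; `env` and `featurize` are placeholders, with `env.step` assumed to return (next_state, reward, done):

```python
import numpy as np

def semi_gradient_sarsa(env, featurize, n_features, n_actions,
                        episodes=500, alpha=0.1, gamma=1.0, epsilon=0.1):
    """Episodic semi-gradient SARSA with a linear action-value function."""
    w = np.zeros((n_actions, n_features))            # one weight vector per action
    q = lambda x, a: w[a] @ x                        # linear q-hat

    def policy(x):                                   # epsilon-greedy over q-hat
        if np.random.rand() < epsilon:
            return np.random.randint(n_actions)
        return int(np.argmax([q(x, a) for a in range(n_actions)]))

    for _ in range(episodes):
        s, done = env.reset(), False
        x = featurize(s)
        a = policy(x)
        while not done:
            s_next, r, done = env.step(a)
            x_next = featurize(s_next)
            a_next = policy(x_next)
            target = r if done else r + gamma * q(x_next, a_next)
            w[a] += alpha * (target - q(x, a)) * x   # semi-gradient update
            x, a = x_next, a_next
    return w
```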
@bonettimauricio • 11 months ago
@@Mutual_Information I will experiment with a polynomial expression combining position and velocity instead and check if it converges to the optimal solution. The NRB solution is great. I don't have a standard procedure to use for feature selection, and I don't even know if one exists at all; if you know of any literature about it, please let me know. Again, thanks for this content!
@Mutual_Information • 11 months ago
@@bonettimauricio There's a section of Sutton's textbook that's devoted to how to featurize the state space, in case you're interested
@arnaupadresmasdemont4057 • 1 year ago
Wonderful!
@marcin.sobocinski • 1 year ago
Just a bit of "surfing" on a very broad topic (like just mentioning the "deadly triad" without any hints as to how to deal with it) ;), but the mountain car animation is just wonderful! Thank you for the code! It's always a pleasure to watch such well-prepared videos :D
@Mutual_Information • 1 year ago
Thank you Marcin - great to see you back here! Yea, sometimes surfing is the best I can afford :) Glad to hear the code is appreciated. For those who are *really* curious about the details, the code can fill in the gaps.
@Pedritox0953 • 1 year ago
Great video! Very illustrative.
@TheElementFive • 1 year ago
If I may suggest a future video topic, how about a deep dive into Mercer's theorem and how it applies to support vector machines?
@Mutual_Information • 1 year ago
Mercer's theorem... not a bad idea. That would probably get wrapped into a broader conversation about the kernel trick, and SVMs would get mentioned there. Added it to the queue!
@qiguosun129 • 1 year ago
Excellent course!
@letadangkhoa • 1 year ago
Thanks a lot for the great content. May I know when the final video will be released?
@Mutual_Information • 1 year ago
It will be about a month from now. It may help to turn notifications on :)
@sounakmojumder5689 • 1 year ago
Hi, thanks. The TD target is not exactly the Bellman update (Tv). In the case of off-policy learning, it may capture an unimportant sample and the new update can move in a wrong direction, which may cause divergence. The update is projected into the feature space (with a linear approximator, let's say), and then the projected Bellman error is minimized. Am I right?
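For reference (standard notation from Sutton & Barto, chapters 9 and 11, not a quote from the video), the semi-gradient TD(0) update with linear features $x(s)$, and the mean squared projected Bellman error that linear TD fixed points relate to, are:

$$w_{t+1} = w_t + \alpha \left[ R_{t+1} + \gamma\, w_t^\top x(S_{t+1}) - w_t^\top x(S_t) \right] x(S_t)$$

$$\overline{\mathrm{PBE}}(w) = \left\lVert \Pi \left( B_\pi v_w - v_w \right) \right\rVert_\mu^2$$

where $B_\pi$ is the Bellman operator, $\Pi$ projects onto the functions representable by the features, and $\mu$ weights states by how often they are visited. Under off-policy sampling, $\mu$ no longer matches the on-policy state distribution, and that mismatch, combined with bootstrapping and function approximation, is what opens the door to divergence.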