Love this guy. As an RL PhD student, your videos are golden.
@nikhillondhe5815 6 years ago
RL PhD sounds so interesting!
@andres18m 6 years ago
Institute name?
@Ayanwesha 5 years ago
Hello sir, I am a grad student. Can anyone please tell me whether backpropagation is necessary in supervised and unsupervised learning, or is it only used in reinforcement learning? Thanks
@hcgaron 5 years ago
Ayanwesha 12345 yes, backpropagation is used as the basis for gradient-based methods of optimization
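To make the answer above concrete: backpropagation is just the chain rule used to compute gradients, so it applies to any network trained by gradient descent, whether the loss comes from labels (supervised), reconstruction (unsupervised), or rewards (RL). A minimal sketch with a toy one-hidden-layer network; the architecture, data, and learning rate are all illustrative assumptions, not something from this thread:

```python
import numpy as np

# Toy supervised task: learn the mean of two inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 2))                 # inputs
y = (X[:, :1] + X[:, 1:]) * 0.5              # targets

W1 = rng.normal(scale=0.1, size=(2, 8))      # hidden-layer weights
W2 = rng.normal(scale=0.1, size=(8, 1))      # output-layer weights

losses = []
for _ in range(500):
    h = np.tanh(X @ W1)                      # forward pass
    pred = h @ W2
    err = pred - y                           # dLoss/dpred (MSE, up to a constant)
    losses.append(float((err ** 2).mean()))
    # Backpropagation: chain rule applied layer by layer.
    gW2 = h.T @ err / len(X)
    gh = err @ W2.T * (1 - h ** 2)           # gradient through the tanh
    gW1 = X.T @ gh / len(X)
    W2 -= 0.2 * gW2                          # gradient-descent step
    W1 -= 0.2 * gW1

print(losses[0], losses[-1])                 # the loss should shrink noticeably
```

The same backward pass would be used unchanged in an RL setting; only the source of the error signal (a reward instead of a label) differs.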
@ernie2111 5 years ago
"RL PhD" didn't know such things exist lol
@denebvegaaltair1146 2 years ago
Your videos have just the right amount of technical terms such that student engineers can learn something, and also the right amount of summary and rewording such that beginners can get a vague idea of concepts. Thank you so much
@yuanyuansun3521 3 years ago
"If you only give it a positive reward when it successfully stacks a block, it'll never get to see any of those rewards." If only my tutors realised this.
@snippletrap 5 years ago
The perils of reward shaping are well understood in a public policy context, where incentives can lead to "unintended consequences".
@Hyuts 5 years ago
Explains in an elegant manner more than I have learned in half a semester of my AI college course.
@SukhwinderSingh-fb9qw 6 years ago
This was one of the best videos on RL that I have seen. Extremely informative. The way you explain things is awesome. Keep up the great work! Cheers man!
@atcer51 1 year ago
Fiiiinnnaaaallly, after tons of googling, I found a USEFUL video that actually EXPLAINS how to reward the agent, instead of just saying: 'oh, you just reward it'
@davidfield5295 6 years ago
The misuse of 'literally' notwithstanding, this was an excellent video. Very clear and concise explanation.
@cemgocer8185 4 years ago
The quality of the video is off the charts. The topics you have chosen to explain the field, the way you explain them, and especially the pointing out of common misconceptions that make it harder for us to get into what AI really is... I'm sad that there is no super-like button. Rare to see videos of this quality and honesty
@rednassie1101 4 years ago
People: ANNs ARE TAKING OVER THE WORLD AND STUFF WILL NEVER BE THE SAME
My horribly trained network, on a cat: "dog"
@I_Lemaire 4 years ago
Could they help with the necessary government takeovers associated with COVID-19? Temporary command economies could be more efficient.
@revimfadli4666 4 years ago
YouTube's bots: "Robot fighting is animal cruelty"
@floriandebrauwer9140 5 years ago
Thanks for your work! I like the way you present such a complex field in a clear manner for people without any background. Thanks to you, I know where to start in my learning journey!
@allamasadi7970 6 years ago
Your channel deserves more views 👍
@akramsystems 6 years ago
agree 100%
@lohithArcot 4 years ago
Not many reach these topics.
@Lilowillow42 3 years ago
Just wanted you to know that in my university course for introduction to AI our professor recommended your videos for machine learning. Your explanation is highly enjoyable and informative. Thank you!
@TheBeansChopper 3 years ago
I think the comment section speaks for itself. This is a fantastic grasp of the basic concepts and issues with these technologies in such a short time, without diving unnecessarily into formalism. Thanks :)
@rutexgreat3619 1 month ago
Very clear material, very clear representation, thank you for your time and video.
@DotCSV 6 years ago
Hi Xander, just found your YouTube channel and I'm very impressed by your content! I also run a YouTube channel on the same topic but for the Spanish-speaking audience, and I'm happy to see that more new channels are growing to educate in the field of machine learning. I hope in the future we can do a crossover :)
@ArxivInsights 6 years ago
Checked out your channel, great stuff man!! It's indeed nice to see that many people are starting to contribute to the online ML community in such a huge variety of ways :p
@OcramRatte 4 years ago
eeee I think I just saw you on TikTok
@shirishbajpai9486 1 year ago
Watched in 2023, after all the LLM stuff going on... still so relevant, and pure gold!
@funpy772 3 years ago
Just wanted to tell you people.. this video is still awesome.
@thanasispappas62 1 year ago
By far the best video on RL I've ever seen.
@TY-un4no 4 years ago
Complex stuff made simple and easy, this is a very good intro video to RL. Starting to learn RL for work and your video gave me a great starting point, thank you!
@PasseScience 4 years ago
Hello, here's an RL idea I had; I'm curious whether you've come across similar things. Put very generally: a predictive/policy part does its usual job of maintaining a latent/feature representation of the timeline (this timeline includes sense data and action outputs, both past and predicted), and an RL part uses the predictions of the policy part to make decisions (determine action outputs). Stated generally, both parts work in a kind of loop: the policy predicts a future, the decision part tries to use it to determine future actions, the policy predicts again based on what is planned, and so on. We get a basic feature we usually seek: an action that initially requires a huge number of back-and-forths between prediction and decision can eventually be learned by the prediction part (which will prefill the output actions based on its predictions). Nothing new so far; I'm just describing an abstraction that fits a large number of RL systems. But here is the idea: usually the decision part's job is to fill in action outputs, but if we allow it to fill in the predicted sense data as well, we end up with something interesting. We can then view the prediction part more generally: not only as something that predicts, but as something that fills gaps (prediction being one specific gap). So if the decision part prefills the predicted sense data with "I feel an apple in my hand", the predictive part (now more of a "timeline gap-filling part") can try to determine the actions that lead to that sense data. Here we invent a new way for the decision part to communicate: by "will". It describes what it wants, and the planning needed to get it is delegated to the first part of the engine. Was I clear? Have you seen this kind of thing?
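A tiny sketch of the "decide by will" idea above, in case it helps make it concrete: the decision part posts a desired future observation, and a model-based part searches for an action sequence that fills the gap between now and that observation. The one-dimensional world, the forward model, and every name here are illustrative assumptions, not something from the video:

```python
from itertools import product

def model(state, action):
    """Known forward model: 1-D position, actions move left or right."""
    return state + {"L": -1, "R": +1}[action]

def plan(start, goal_state, horizon=4):
    """Brute-force search for an action sequence whose predicted end
    state matches the goal posted by the decision part."""
    for seq in product("LR", repeat=horizon):
        s = start
        for a in seq:
            s = model(s, a)          # roll the model forward
        if s == goal_state:
            return list(seq)         # the "gap" has been filled
    return None                      # goal unreachable within the horizon

print(plan(0, 2))                    # some 4-step sequence netting +2
```

In a learned system the brute-force search would be replaced by the predictive model itself proposing the in-between actions, which is the gap-filling behaviour the comment describes.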
@PriyanshuGupta-hf2hm 3 years ago
You explained it so well that I understood everything in your video. I am overjoyed!
@HARtalks 4 years ago
It was really interesting and helped me to get a clear picture of what reinforcement learning is... Thank you!!
@laeeqahmed1980 5 years ago
Great talk. Humans are not good at multiple sound recognition and you added music to your video.
@aanex2005 5 years ago
I have no idea about RL but your video has given me a good jump start. Thanks man
@biiigates7381 4 years ago
I've been learning AI for almost a year now, and of all the channels I've spent time with, this is the best one. Very underrated! (btw it's the first time I've discovered this channel, and I instantly subscribed)
@mundeepcool 4 years ago
Same here, loved this video and I instantly subscribed... and also oh yeah yeah
@mantische 4 years ago
One of the best explanations I've seen
@MuditBachhawatIn 4 years ago
I have been meaning to read about RL for a long time. This video couldn't be a simpler and clearer introduction to it. Thanks man!
@williamkyburz 6 years ago
Xander, extremely well done, lucid and cogent. You should be teaching at M.I.T. or Universiteit Gent. The ability to teach complex subjects in an intuitive and simple way is a gift. Wish you the best in everything. Peace
@ArxivInsights 6 years ago
Thanks William! I am actually doing my PhD in Gent at the moment :)
@Krimson5pride 5 years ago
It was both professional and entertaining at the same time. Great and precise explanation.
@gusbakker 5 years ago
Great balance between very well explained content and the interesting facts about current progress in AI at the end. Good work
@alirezaparsay8518 1 year ago
The explanation was so clear. Thank you.
@LearnRoboticsAndAI 3 years ago
Summary:
- State-of-the-art robotics is a software challenge, not a hardware challenge (robots are already physically capable of challenging tasks)
- Supervised learning
* Known: inputs and outputs
* Compute gradients using backpropagation to train the network to predict outputs for new inputs
* E.g. for a game of pong, the data can be screenshots at specific time instants and the key (up/down) pressed at each instant (recorded from a user playing the game); this can be used to train a neural network to predict the output for a new input image
* One disadvantage is the creation of a dataset, which isn't always easy to do
* Another disadvantage is that since the data is recorded from a human playing the game, the network can never be better than that human
- Reinforcement learning
* The difference from supervised learning is that we do not know the target label, as we have no dataset
* The network that transforms input states to output actions is called the policy network
* One simple way to train a policy network (which can be fully connected or convolutional) is a method called policy gradients:
1. In the policy gradient method, we start with a completely random network
2. Feed the network a frame from the game engine; it produces a random output
3. Send that action back to the game engine
4. The game engine produces the next frame
5. Outputs are sampled from a probability distribution so that the same exact actions are not repeated again and again
6. A reward is given if the agent scores a goal (+1) and a penalty if the opponent scores (-1)
7. The entire goal is to optimize the policy to receive maximum reward
8. We collect a bunch of experience by feeding frames to the network and getting random actions
9. Sometimes (rarely) the result is a WIN
10. We use normal gradients to increase the probability of those actions (that resulted in a WIN) in the future
11. For a negative reward we use the same gradient but multiply it by -1
* Credit assignment problem: most of the steps were good, but the agent lost at the end, so the network will think that the whole sequence of actions was bad
* The sparse reward setting is very sample-inefficient
* Reward shaping: additional intermediate rewards. But these must be designed individually for each specific problem, so the approach doesn't scale.
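The policy-gradient steps in the summary above can be sketched in a few lines. This is a toy REINFORCE-style update on a two-action "game" where one action is secretly better; the two-logit "network", learning rate, and reward scheme are illustrative assumptions, not the pong setup from the video:

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(2)                           # tiny "policy network": just 2 logits

def policy():
    """Softmax over the two actions (step 5: sample, don't argmax)."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

for _ in range(500):
    p = policy()
    a = rng.choice(2, p=p)                     # sample an action from the policy
    reward = 1.0 if a == 1 else -1.0           # step 6: +1 "win", -1 "loss"
    grad = -p                                  # d log p(a) / d logits ...
    grad[a] += 1.0                             # ... for the sampled action
    logits += 0.1 * reward * grad              # steps 10-11: scale by the reward

p_final = policy()
print(p_final)                                 # mass should shift toward action 1
```

After a few hundred updates nearly all of the probability mass ends up on the rewarded action, which is exactly the "increase the probability of winning actions" behaviour described in step 10.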
@espangie 4 years ago
This was really helpful. Thank you to people like you for creating this content. Appreciate you, Xander!
@doctorartin 5 years ago
Doing part of my PhD on potential AI strategies for decision-making in healthcare, and this was very useful, thank you.
@varshinis6930 4 years ago
Which university??
@doctorartin 4 years ago
@@varshinis6930 Lund University
@govindnarasimman1536 5 years ago
Very clear narration and down-to-earth comments. All the euphoria about AI needs to be grounded.
@ArnauViaMartinezSeara 6 years ago
Really useful. I am preparing a reinforcement learning class applied to finance, and it is really helpful. Can't wait to see the next episode. Thanks
@majeedhussain3276 6 years ago
You deserve a million subscribers; hopefully one day you'll get there. So much clarity in every video. Keep going...
@shashankshivakumar4732 5 years ago
I love this video. I love his critical and grounded thinking. Great work!
@rishidixit7939 1 year ago
The sudden surprise of hearing Bruno Mars makes you pause the video to check your other open tabs
@poojanpatel2437 6 years ago
Best channel on YT for ML/DL/RL/AI... Keep up the good work... Would love to see a new video from you weekly...
@ArxivInsights 6 years ago
I'd love to make more videos too! But since I'm currently doing this 100% in my spare time and 1 vid takes about 30hrs of work, there's really no way I can do one per week for now :(
@poojanpatel2437 6 years ago
Arxiv Insights Still amazing work... Love to see more of your videos in the future.. ❤
@ms_1918 5 years ago
Well, I came here for a one-minute intro to reinforcement learning for the first class of a course, and stopped after 16 minutes. What a superb experience.
@soumyakantadash5986 5 years ago
These videos are gems!!!..... incredible, precise and knowledgeable!!!!
@RoxanaNoe 6 years ago
Your channel is a great resource for getting into Deep Learning and AI.
@josefpolasek6666 4 years ago
Your videos are absolutely amazing! Thank you very much for explaining concept of RL in 16 minutes.
@Z4NT0 3 years ago
I learned so much in just 16 minutes. Awesome Video!
@dean8147 3 years ago
You’re a legend mate. Honestly, thanks for all of your hard work
@7810 6 years ago
Good stuff for learning the basics of RL as well as the challenges it faces. Thanks for your time and for sharing!
@papaman1037 6 years ago
Your content is far better than that of the guy who copies someone's code from GitHub, makes an obscure reference to the original author, and states that he added a wrapper to make the code easier to use (a lie every time I've checked). He uploads the code as an original commit (no fork from the rightful author's repo). He intentionally misleads people and profits from it, a legal necessity for calling it fraudulent. Your content is excellent, clearly founded in recent research papers, and you very professionally point out that material and more. You add value with your discussion of the topic. Thank you for an excellent channel. I would use Patreon but I am ill and not working. I'm doing my best to spread the word.
@steadymedia234 5 years ago
This is a great presentation on RL, short and clear content.
@ingeniouswild 5 years ago
Very nice episode! One thing that struck me about your suggestion that, without reward shaping, auto-learning the 2600 games would be intractable: even for a human this would be extremely difficult. We succeed with new, undocumented games because they often have sub-components and sub-goals similar to ones we already know from other games (or life). But I'm sure you could easily construct a game that would be impossible for a human to learn without any hints, while still having the same overall complexity.
@gudusangtani 4 years ago
So well explained... I also liked the comments on Boston Dynamics, considering the hype and buzz around AI and ML. You are doing a very good job!
@nateshrager512 6 years ago
Great job introducing the topic. Very nice job dispelling misconceptions surrounding the topic as well. I put on that notification for your next videos, looking forward to em : )
@orfeasliossatos 6 years ago
I've been literally looking all over for a video like this, thank you so much
@nemx4u 6 years ago
You explain hard topics beautifully! great job. Would love to see more RL videos!
@codyheiner3636 6 years ago
Love the philosophical discussion at the end!
@thaermashkoor6225 3 years ago
Thanks for this clear introduction.
@OliverZeigermann 5 years ago
Very lively and understandable. Great work!
@HarutakaShimizu 5 months ago
Wow, this was a very clearly explained video, thanks!
@robertfairburn9979 6 years ago
When I was a psychology student, we trained chickens using reinforcement training with reward shaping. However, it was a form of supervised training in reality.
@khajasaen 6 years ago
Best channel in the crowd ... keep it up Xander
@bsudharsh 5 years ago
Succinct; it's a brilliant rendition of reinforcement learning.
@geraldkenneth119 2 years ago
It seems to me one way, albeit a rather difficult one, to help AI deal with sparse rewards is to:
1. Give them a reward function based not on whether they accomplished the task, but on how close they got to achieving it
2. Give them the ability to generate plans for achieving a goal, and to recognize why they failed
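Suggestion 1 above is essentially dense reward shaping. A minimal sketch of the difference between a sparse and a distance-based reward; the block-stacking framing, the function names, and all numbers are illustrative assumptions:

```python
def sparse_reward(block_pos, target_pos, tol=0.01):
    """+1 only on success -- almost never seen by a random policy."""
    return 1.0 if abs(block_pos - target_pos) < tol else 0.0

def shaped_reward(block_pos, target_pos):
    """Dense signal: the closer the block ends up, the higher the reward."""
    return -abs(block_pos - target_pos)

# A failed attempt that got close still produces a usable learning signal:
print(sparse_reward(0.9, 1.0))   # no information at all
print(shaped_reward(0.9, 1.0))   # roughly -0.1: "almost there"
```

The catch, as the video notes, is that a shaped reward like this has to be hand-designed per task, and a poorly chosen one can be gamed by the agent.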
@gorillapimpin2978 6 years ago
my new favorite channel
@Alex-gc2vo 6 years ago
your videos are some of the best explanations I've found for a lot of these very advanced subjects. I suspect your viewer count is going to jump very quickly. keep it up.
@mujahid1324 4 years ago
I would say "Wow". You nailed what "reinforcement learning" is in 10 minutes. Please keep sending more and more AI. Keep it up, Xander :)
@sidharthaparhi7930 6 years ago
Also your intro is very high quality, like an intro to a good TV show
@amitredkar140 6 years ago
Great video!!!! Explained exceptionally well; I liked the other videos on your channel as well. Would love to see more stuff related to AI/DL or RL. Thanks in advance. Keep up the good work....
@Jshizzle2 5 years ago
Perfect video, so much more intuitive than my lectures. Thanks a bunch!
@sharadrawatindia 6 years ago
Hey Xander! Great videos. Looking forward to your next video.
@shirishbajpai9486 1 year ago
3:12 - Why reinforcement learning
4:00 - RL framework
4:30 - Policy gradients
5:37 - Training the policy network
7:50 - Problem with policy gradients (credit assignment problem)
9:25 - Where the sparse reward setting fails
11:00 - Reward shaping
@jackwhite9332 6 years ago
Impressive explanation, found this very useful. Thank you!
@ArturoMoraSoto 4 years ago
Nice explanation, thanks for taking the time to create this great video.
@alanator25 1 year ago
Thank you! This was a great introduction!
@32isaias 5 years ago
The one that will take Siraj's crown, well deserved.
@mehdisauvage1234 6 years ago
Your videos are so useful and interesting ! This is pure gold to me :)
@ahilanpalarajah3159 5 years ago
Only way to describe this guy is "22 Two's - Jay-Z". Excellent video.
@qandos-nour 1 year ago
Great and clear explanation
@saaniausaf9621 6 years ago
I loved the way you explained everything. Thanks!
@stefano3808 4 years ago
really high quality videos, thanks for that
@colorlace 5 years ago
The Lebowski Theorem: No superintelligent AI is going to bother with a task that is harder than hacking its reward function.
@wizardOfRobots 4 years ago
Unless its reward function punishes it for that. Now we have the Meta-Lebowski theorem: it's not going to bother with a task harder than hacking its hack-detection algorithm.
@halifakx 3 years ago
Perhaps a machine becomes smart, and then smarter, as it decides that becoming smarter is the shortest path to reward... and finally it is so smart that it realizes its reward is just colored mirrors? And it creates a new program inside the program that cancels or outweighs the previous reward and creates new rewards, programming these new rewards in its own language, not apparent to us... like the Facebook bots talking in their own language
@halifakx 3 years ago
estramboticusssssss dangerosicusss hahaha
@sridhasridharan3600 3 years ago
Great Videos! I am recommending these to my students.
@punitpalial 3 years ago
Here are the notes from the video, LEARNZY (please ignore the timestamps, they are not accurate)
01:57: Pieter Abbeel gave a demonstration of robots doing all the mundane tasks of the house, like cleaning, cooking, and bringing a bottle of beer. It showed our remarkable achievements in the field of robotics. We are sufficiently advanced (mechanically, in hardware terms) to build a robot capable of complex actions, but the reason we aren't able to make terminator-like robots is that we still haven't embedded intelligence into them. So creating intelligent robots is a software problem, not a hardware problem.
02:03: Reinforcement learning is basically about letting computers learn on their own, from their own experience. As the saying goes, you can only be as good as your master: if a computer learns from the world's best chess player, the best it can become is equal to that player. To surpass her, the AI needs more than the best player's games, and that is made possible by learning from itself: letting it take random decisions, rewarding the decisions that lead to a positive outcome and punishing those that lead to a negative one, and rewarding the AI not just for winning the war but for winning battles too. This learning from itself is called reinforcement learning.
04:07: The difference between reinforcement and supervised learning: in supervised learning we need a training set, like the moves of the best chess player, to train our AI; the computer recognizes patterns and picks the best one. In reinforcement learning there is no training data, and the computer pretty much learns by taking random decisions and figuring out which random moves worked best.
04:41: Policy gradients: the AI takes a random action >> checks if it is good >> if good, make it repeat it and reward it >> if not, punish it.
05:02: 📌 The entire goal of the policy network is to maximize the reward. It just receives the scoreboard as a training signal.
05:37: 📌 Read Andrej Karpathy's blog post "Deep Reinforcement Learning: Pong from Pixels".
07:50: The problem with policy gradients is that they reward the end result, not the process. Even if the AI took all the right steps in the game but lost on the last move, the policy gradient would mark all the moves made in the game as negative and punish the AI for them. This problem is called the "credit assignment problem". To correct this, the AI can be rewarded for the right moves along the way rather than only for winning the game. This solution is called reward shaping, but the problem with reward shaping is that it has to be configured separately for every case where it is used, which makes it difficult to apply universally.
12:03: Reward shaping can also suffer from "the alignment problem", where the AI collects all the rewards but isn't doing what it is supposed to do.
14:08: Boston Dynamics has some pretty cool robots, but those robots can't take autonomous, intelligent decisions. They are pre-programmed to do what they do; they don't actively decide for themselves what they want to do. Hence they are not really intelligent, and are just a marketing gimmick at this point.
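One standard way to soften the credit assignment problem described in the notes above is discounting: spread a final reward backwards over the episode so that earlier actions receive partial, decaying credit instead of the full blame or praise. A minimal sketch (the discount factor 0.9 is an illustrative assumption):

```python
def discounted_returns(rewards, gamma=0.9):
    """Return-to-go for each step: R_t = r_t + gamma * R_{t+1}."""
    returns = []
    running = 0.0
    for r in reversed(rewards):          # sweep the episode backwards
        running = r + gamma * running
        returns.append(running)
    return returns[::-1]                 # restore chronological order

# Episode with a single "win" reward on the final move: earlier moves
# get smaller, but non-zero, credit instead of all-or-nothing blame.
print(discounted_returns([0.0, 0.0, 1.0]))
```

Scaling each step's policy-gradient update by its return-to-go, rather than by the raw episode outcome, is what most practical policy-gradient implementations do.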
@papaman1037 6 years ago
Even in games without an RL actor, loops that never achieve a goal occur. The long-standing solution was to periodically perturb the system enough that such learned patterns get interrupted.
@alenasazanova8331 4 years ago
That's a very interesting and understandable video. Thank you very much!
@ipuhbamrash6708 5 years ago
Fabulous!! No other word for you!!
@mohammadhatoum 6 years ago
Great job.. Explained the subject in a simple way. Keep it up; looking forward to new videos
@azmathmoosa4324 6 years ago
I like how you don't hype anything up. Great, mate! I subscribed!
@digvijaybhandari9747 1 year ago
Really enjoyed the content here!
@tnmygrwl 6 years ago
You do an awesome job of structuring the content. Loved the video.
@senri- 6 years ago
Can't wait for the next videos, keep up the great work!
@LongTheRevolution 2 years ago
Amazing video. Thanks braddah
@FujihiroCZ 4 years ago
You are a GOD!
@mgilson 6 years ago
I can't wait for your next video !! 😍😍😍
@wzyjoseph7317 2 years ago
Very clear explanation! Thanks for the work!!!! XD
@tyfoodsforthought 1 year ago
That was a really good video. Sheesh. What a download!
@tonakkie635 5 years ago
Great overview, well explained👍.Thanks
@josephedappully1482 6 years ago
This is a great video; thanks for making it! Looking forward to your next one.
@yasermahmoud6297 5 years ago
Awesome video! Discussing AI without its long-term negative repercussions is absolutely useless.
@robertpalmercoaching 6 years ago
Rewards and Reinforcements need clarification. A reward is focused on a result, and a reinforcement is focused on behavior. Sometimes the difference is very subtle and hence the confusion, but the outcome is significantly different.