MIT 6.S191: Reinforcement Learning

70,086 views

Alexander Amini

1 day ago

Comments: 37
@izharulhaq2436 7 months ago
One of the best intros to RL. I recommend every student interested in this field watch this amazing lecture. I have just completed it at 1:40 AM... Now waiting for the Actor-Critic type RL agent lecture to be released soon... Thanks and good night.
@visheshphutela 7 months ago
Babe wake up, new 6.S191 lecture just dropped
@BheezHandle 7 months ago
Lol...
@VisatoVino 7 months ago
@@BheezHandle Feel the vibessssss
@crarewhiteheadpoin9471 6 months ago
U got it
@artukikemty 7 months ago
Amazing intro to the subject. Since it is interrelated with control theory, it is essential to have a good background in control theory, such as state-space models and optimal control.
@bookish3018 28 days ago
One of the best presentations of the deep reinforcement learning concept; thanks a bunch for sharing it.
@Asif-fp8gy 7 months ago
Awesome job. Just curious if someone can explain how the target part of the loss function was computed at 26:40?
@ravenclaw3693 5 months ago
Immediate reward + discounted best possible future reward.
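In code, a minimal sketch of that target (the function, variable names, and discount value here are illustrative, not the lecture's exact code):

import tensorflow as tf

gamma = 0.99  # discount factor (illustrative value)

def q_target(reward, next_state, done, q_network):
    # Best achievable future value according to the network's own current estimate
    best_future_q = tf.reduce_max(q_network(next_state), axis=-1)
    # Zero out the future term when the episode has terminated
    return reward + gamma * best_future_q * (1.0 - done)

Note that the target is bootstrapped from the network's own current estimate of the next state's value, so it is not known beforehand; it improves as the network improves.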
@maazshaikh7905 9 days ago
26:58 My doubt is: how did we figure out the target beforehand? Isn't this contrary to the definition of reinforcement learning?
@gamalieliissacnyambacha3029 7 months ago
I'm curious to listen to this lecture. I need more concepts to apply in my thesis. I'm looking forward to seeing this happen soon.
@agnitapandian 11 days ago
Fantastic talk
@maxsuphidden667 10 days ago
Thank you sir 🙋‍♂️
@melvinkuriakose2708 6 months ago
10:30 The equation for total reward should be a summation of rewards from t=0 to t, right? But in the equation it's from t to infinity... why?
@rorisangsitoboli4601 5 months ago
The total reward runs from time t to some later time, possibly far in the future (t → ∞). The initial reward is r_t; the next ones are r_{t+1}, r_{t+2}, ..., until termination, which is assumed to be some time in the future but can be user-chosen, e.g., time t+n. Remember you can be rewarded now (at t) or at any time in the far future, so you sum over the entire duration.
@xxyyzz8464 3 months ago
You're correct; the lecturer misspoke here. What he says in spoken language does not match the equation he shows. His equation is the expected return (total future reward) from time t, given no uncertainty in future rewards as you follow the policy until the end of an episode, but in words he claims it is the sum of all rewards from time t=0 to time t, which is clearly not what the equation states. I haven't finished the lecture, but the equation is most likely right and his spoken description wrong, given that he then shows the form where future rewards are discounted: you would not discount past rewards.
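For reference, the two forms under discussion, in standard notation (a reconstruction, not a verbatim copy of the slide):

R_t = \sum_{i=t}^{\infty} r_i
\qquad\text{and, with discounting,}\qquad
R_t = \sum_{i=t}^{\infty} \gamma^{\,i-t}\, r_i, \quad 0 < \gamma < 1.

Both sum future rewards from time t onward; neither sums past rewards, which is why the discount only ever applies to the future.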
@anoopitiss 7 months ago
Following for 3 years
@hrishabhg 7 months ago
Lovely lecture. ❤ A self-driving car is a dynamic environment compared to a gaming environment; that could be mentioned.
@ViolentWarrior 2 months ago
What are the system requirements?
@ssrwarrior7978 4 months ago
This is Awesome !!!!!
@artukikemty 7 months ago
Transformers can be used as a direct replacement for DRL since they can process sequences as well. There is an article on Medium about this alternative.
@collinspo 3 months ago
Got a link?
@Crashrapescrypto 7 months ago
Can you advise my startup? We applied to YC. We want to set up an Indian team and use RLHF as well as SIMPO to agentify the hospital system and remove the inefficiencies in current hospital systems. I'm an Aussie coming to America. We have hardware as well; I've been in Guangzhou for the last 6 weeks finding the best containers and cameras, trying to train for gauging container volume to measure remaining stock.
@christianrink4093 5 months ago
Can one conclude from the AlphaGo vs. AlphaZero showcase that the bottleneck to "achieving" AGI/ASI is we humans and the ethical/safety restrictions we have set?
@Radiant-84 5 months ago
Both AlphaGo and AlphaZero rely on world models (and self-play), which they can use to try out or plan different moves based on the simulated results. While it's super easy to do this simulation in board games, where the rules are deterministic, creating such a world model for something with drastically more complexity, like the real world, is far more challenging. Algorithms like MuZero, which use learned models, are getting there, but technically speaking, DeepMind's got a lot more work to do before they can make Alpha-Terminator ;)
@foregroundtreble05 7 months ago
Needed u
@TheNewton 7 months ago
Please repeat the questions; the question askers' audio is blown out or unintelligible. Some of the questions make it into the captions, but not all. The professor's mic is perfect, however, with a great mix; one of the few series where you don't have to be at max volume all the time.
@Huayi-x3p 5 months ago
Hi, when I tried to run the model-building part of lab 1, the line "tf.keras.layers.Embedding(vocab_size, embedding_dim, batch_input_shape=[batch_size, None])," does not work; the error says batch_input_shape is an unrecognized keyword argument to Embedding. Has anyone else encountered this problem? I looked up the tf.keras.layers.Embedding documentation and couldn't find anything to replace it... What did you do to solve it? Thanks!
@Yeanpc 4 months ago
Hi, from my understanding of the TF documentation, Embedding doesn't take batch_input_shape as a parameter. I just went ahead and executed the embedding as tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_dim) and it worked for me.
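A minimal sketch of that workaround in recent TF/Keras (the laych sizes here are hypothetical, not the lab's actual values): supply the batch shape when building the model instead of via batch_input_shape.

import tensorflow as tf

# Hypothetical sizes for illustration
vocab_size, embedding_dim, rnn_units, batch_size = 256, 64, 512, 32

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_dim),
    tf.keras.layers.LSTM(rnn_units, return_sequences=True),
    tf.keras.layers.Dense(vocab_size),
])
# Build with an explicit batch shape rather than batch_input_shape
model.build(input_shape=(batch_size, None))
model.summary()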
@wangfenjin 7 months ago
Amazing!
@ikpesuemmanuel7359 7 months ago
Is there an application of reinforcement learning for subsurface reservoir simulation?
@Diego0wnz 7 months ago
👏
@breezecreator8751 7 months ago
🎉
@Yume-x9v 7 months ago
Kenchin kokoro no tabi. Study of the waste.