MIT 6.S094: Recurrent Neural Networks for Steering Through Time

142,589 views

Lex Fridman

Days ago

Comments: 103
@SamirKhan-os2pr 7 years ago
Academic by day... bouncer by night
@spaceyfounder5040 6 years ago
rly?
@samirelzein1978 4 years ago
Nailed it! 1st impression!
@JaswanthPadi 2 years ago
😅😅😂
@MasterAufBauer 7 years ago
About the Udacity Challenge: it is very likely that none of these winning models will be able to steer a car. What they are really good at is predicting the steering angle from the last few frames. In fact, you may be able to compute that from the difference image of the last two frames, without any network, and achieve almost the same performance as the winning teams. As Nvidia mentioned in their paper "End to End Learning for Self-Driving Cars", without learning error correction the car will simply drift away from the center of the road, and the network has no idea how to correct that because the situation is not in the training data. My point is that steering a car correctly on a real road and predicting steering angles from a video are two different challenges.
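A minimal sketch of that kind of difference-image baseline (hypothetical: frames are assumed to be grayscale arrays, and the single scalar feature and one-parameter fit are purely illustrative, far simpler than the actual challenge entries):

```python
import numpy as np

def diff_feature(prev_gray, cur_gray):
    """Toy scalar feature from the difference image of two consecutive grayscale frames."""
    diff = cur_gray.astype(np.float32) - prev_gray.astype(np.float32)
    h, w = diff.shape
    left, right = diff[:, : w // 2], diff[:, w // 2:]
    # More motion energy on one side of the frame loosely correlates with turning.
    return float(np.abs(left).mean() - np.abs(right).mean())

def fit_gain(features, angles):
    """Least-squares gain k so that predicted_angle = k * feature."""
    f, a = np.asarray(features, dtype=np.float64), np.asarray(angles, dtype=np.float64)
    return float((f @ a) / (f @ f + 1e-8))
```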
@spaceanarchist1107 3 years ago
What happens if you stop it from remembering the last few frames so it has to make the decision based on other factors?
@sewellrw 2 years ago
I've been having these videos come up as recommendations to watch on youtube. Lex is so good at teaching these topics, which can get a bit complicated. I wish I had more professors like Lex in college for courses with difficult-to-understand concepts. He really breaks things down slowly and explains things in a way that people can understand.
@abdulelahalkhoraif4495 7 years ago
I have to say you are very talented for teaching very complex topics. Thank you so much MIT for choosing such a brilliant presenter.
@stefanfaiciuc4560 4 years ago
Thanks so much for this, Lex. Your lecture was how I finally understood how RNNs work and it helped me to successfully complete my university thesis back in 2017. It's funny how I came across you again through Joe Rogan and your podcast and figured it's the same dude that helped me through college. Hope you get to be the one that builds robots better than anybody else in the world.
@anthonyarthurmensah5240 2 days ago
I've been following your videos but never knew you are/were a tutor/lecturer... I am going to enjoy this.
@Gannicus99 5 years ago
Some hopefully helpful remarks for the audience:
1. "You need a lot of data." It depends. A lot of unlabeled data helps, to model the world; then you need very little supervised data. Easy problems require little data. Hard or badly defined tasks require a lot of data. You can always pick an easier-to-solve proxy objective and use data augmentation.
2. RNNs handle dynamic lengths. Hard-set sequence lengths are for speed: sentences come in different lengths, so you can't create batches unless you set a hard sequence length and train same-length sentences together in a batch, or pad sentences that are too short (see the sketch below). If you batch sentences you can compute on them in parallel. But if you are trying to predict relations between consecutive sentences, batching/parallelization would not update the weights after each sentence but on all of them at once, making it near impossible to learn inter-sentence (between) relations while still allowing the net to learn intra-sentence (within) relations.
Tip: read Karpathy's blog on RNNs, not the Colah one. Karpathy's is more detailed, letting you really grasp what an RNN does. An LSTM is "just" an RNN with attention/gating. Hope this helps, even if some concepts are very high level.
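A small sketch of the padding/batching point above (assuming PyTorch; the vocabulary size, dimensions, and token ids are made up):

```python
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

# Three "sentences" of different lengths, as token-id tensors (illustrative data)
sents = [torch.tensor([5, 2, 9]), torch.tensor([7, 1]), torch.tensor([3, 8, 4, 6, 2])]
lengths = torch.tensor([len(s) for s in sents])

# Pad to the longest sentence so they fit in one (batch, time) tensor
padded = pad_sequence(sents, batch_first=True, padding_value=0)  # shape (3, 5)

emb = torch.nn.Embedding(num_embeddings=16, embedding_dim=8, padding_idx=0)
rnn = torch.nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

# Packing tells the LSTM to skip the padded positions
packed = pack_padded_sequence(emb(padded), lengths, batch_first=True, enforce_sorted=False)
output, (h_n, c_n) = rnn(packed)
print(h_n.shape)  # (1, 3, 16): one final hidden state per sentence
```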
@vman049 7 years ago
Great video, but I wish there was more math and a more thorough explanation of BPTT and the vanishing gradient problem.
@chrysr7900 4 years ago
Ooh I recognize what's on the blackboard! It's the spherical coordinate system...
@carlossegura403 3 years ago
I still prefer the LSTM (for accuracy) or GRU (for speed) over the Transformer architecture, for both their ability to learn long-range dependencies and their simplicity.
@anthonybyne2724 4 years ago
I've never listened to anyone before without understanding anything at all. It's fascinating for me watching with zero understanding. I'm literally just listening to his words... 😂
@matiasiribarren9685 7 years ago
"It's producing something that sounds like words...That could do this lecture for me. I wish..." 1:02:42 Oh Lex would rather be researching xD.
@quintrankid8045 2 years ago
Will there be some example of ML that can do the research too?
@funduk89 4 years ago
I think before introducing the backprop it is a good idea to start with the forward mode
@piyushmajgawali1611 4 years ago
Self driving cars 1:04:00
@tayg8370 3 years ago
Bruh, I have no idea what he's talking about, but I'm somehow interested.
@quintrankid8045 2 years ago
Parameter tuning can't be taught? But it can be learned? I wonder if that would be a useful thing to apply ML to?
@nemis123 A year ago
This gave me so much understanding. Thank you for uploading!
@JamesNortonSound 7 years ago
Am I the only person who thought the video compression makes his shadow look like a low-resolution shadow map...? Awesome content, great for getting into ML! A quick question regarding LSTMs: why do we need a separate way of saying "this information isn't important, I don't want to update my weights"? Doesn't gradient descent already take care of this? That is, if we happen to see a feature that is unimportant, won't we compute low gradients, thus telling us we don't need to move the weight by much? Why doesn't that work here?
@Gannicus99 5 years ago
James Norton Theory vs practice. The gates separate out learning mechanisms like accept knowledge, apply knowledge, and forget knowledge, making it easier for the memory cell to learn. The memory is just a fully connected layer. By modeling gates you add human bias on how, by what learning concepts, it should learn. That way you give it prior knowledge on how to process the data, which means faster learning or, analogously, learning better from less data. In probabilistic models we work with priors and regularizers (math tools); in NNs we can also work with code/information-flow design (programmer tools). CNNs model sequences as well, just not dynamic lengths. Since they see the whole sequence at once, rather than sequentially, they do not struggle with longer-term dependencies between sequence elements; e.g. LSTMs struggle connecting words from the start of a long sentence with the end of the sentence. CNNs/transformers see all words at once, so they don't care.
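For readers who want to see those gates concretely, a minimal single-step LSTM cell sketch (plain NumPy, made-up dimensions; real implementations fuse these matrices for speed):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b each hold parameters for the i, f, o, g blocks."""
    i = sigmoid(x @ W["i"] + h_prev @ U["i"] + b["i"])   # input gate: accept new knowledge
    f = sigmoid(x @ W["f"] + h_prev @ U["f"] + b["f"])   # forget gate: drop old knowledge
    o = sigmoid(x @ W["o"] + h_prev @ U["o"] + b["o"])   # output gate: expose memory
    g = np.tanh(x @ W["g"] + h_prev @ U["g"] + b["g"])   # candidate memory content
    c = f * c_prev + i * g                                # memory cell update
    h = o * np.tanh(c)                                    # hidden state / output
    return h, c

# Tiny usage example with random parameters (input dim 4, hidden dim 3)
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(4, 3)) for k in "ifog"}
U = {k: rng.normal(size=(3, 3)) for k in "ifog"}
b = {k: np.zeros(3) for k in "ifog"}
h, c = lstm_step(rng.normal(size=(1, 4)), np.zeros((1, 3)), np.zeros((1, 3)), W, U, b)
```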
@mikashen5053 7 years ago
Hey, Lex. Really great video! But as English is not my mother tongue, sometimes it's difficult to understand the video very well. It would be nice if you could turn on the CC subtitle option, thanks!
@AlaveNei 2 months ago
I find the subject content, 'Recurrent Neural Networks for Steering Through Time', very well presented: an interactive class environment in which improvised, self-directed discussion topics were explained. A suggestion for @lexfridman: On Modularity Expanded (kzbin.info/www/bejne/pHe3gmqhfbaKqsU). You can definitely present how you visualize running graphs! Music; an example for ratings. A side note: keep your 'walking through the abstract' as your own understanding, which I believe can be really useful to learners, and thank you for keeping it up :)
@klrshak776 4 years ago
1:14:17 Who else thought their battery was low and paused the video to check? 😂🤣😂🤣
@MM-vw1ck 10 months ago
My god, Lex is so lost in this lecture. It's almost like he forgot what he wanted to say when building the presentation.
@subramaniannk3364 4 years ago
Is it that most explanations given for RNNs are top-down and most explanations for CNNs are bottom-up?
@codyneil97 7 years ago
Hi Lex, thanks for uploading these! When it comes to CNN + RNNs, is it only possible to use a pretrained CNN as a feature extractor and then train just the weights in the RNN? Or is it possible to use backprop to train the CNN and the RNN simultaneously? If it is possible to train both, is it desirable? Do you know if anyone has published anything on this?
@verynicehuman
@verynicehuman 7 жыл бұрын
The actual process is to train both CNN and RNN simultaneously. But a pretrained CNN would give a feature representation which will work pretty well with the RNN too. But yes, the actual method is to train the CNN and RNN simultaneously.
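A hedged sketch of the two options in PyTorch (the toy stand-in CNN, layer sizes, and frame shapes are hypothetical; in practice the frame encoder would typically be a pretrained network):

```python
import torch
import torch.nn as nn

class CNNRNNSteering(nn.Module):
    """Per-frame CNN features fed to an LSTM that predicts a steering angle."""
    def __init__(self, freeze_cnn: bool = False, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(                      # stand-in for any frame encoder
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        if freeze_cnn:                                 # option 1: frozen (e.g. pretrained) extractor
            for p in self.cnn.parameters():
                p.requires_grad = False
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, frames):                         # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])                   # steering angle from the last step

# Option 2 (joint, end-to-end training): freeze_cnn=False lets backprop update the CNN too.
model = CNNRNNSteering(freeze_cnn=True)
angle = model(torch.randn(2, 5, 3, 96, 96))            # -> shape (2, 1)
```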
@pakigya 7 years ago
Cody Neil, they have done this for action recognition.
@Constantinesis 6 years ago
Hello, thanks for uploading these lectures! Can LSTM networks integrate symbolic constructs in natural-language learning? Can they help computers understand the relationship between language structure and the real world? For example, if I ask "Why is it only raining outside?", it should know that the roof stops the rain from falling inside. I have a feeling that we are mostly teaching the algorithm to interact with us in some kind of smart language simulation, but at its core it doesn't really understand the meaning of and relationships between words. Do you know some online references on this?
@niazhimselfangels 7 years ago
Hi Lex, thank you for these lectures! Would you be uploading the guest lectures as well? There isn't any mention of them on the course home page now.
@lexfridman 7 years ago
Niaz, definitely, working on it.
@StevenWANGTH 7 years ago
Great lecture :) Mark and keep watching tomorrow ^.^
@ravenmoore3399 4 years ago
Vanilla is also what we call squares; people who prefer the missionary position are vanilla, lol, just saying.
@dongwang8848 7 years ago
Great course with a lot of content in it. I am curious whether the guest talks are available too.
@john-paulglotzer1276 7 years ago
Really great video. Thanks! Quick question... At 1:13:34, the team used a CNN to create a distributed representation of each frame, and then they use this as the input to the RNN. Was this just a generic CNN trained on completely different types of images? Or did they train a new one using the driving images? If the latter, what target variable would they use to train it? Thanks!
@soumen_das 7 years ago
Thank you Lex and MIT
@Phantastischphil A year ago
Is there a playlist for the lectures leading up to this?
@luchi1097 7 years ago
Thanks for sharing! I don't understand why the value of the steering wheel angle could be bigger than pi/2, and what it means.
@lexfridman 7 years ago
Thanks! The position of the steering wheel doesn't map 1:1 to the angle of the car's tires. There's a steering ratio that controls how you map one to the other. Here's a helpful link: en.wikipedia.org/wiki/Steering_ratio
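As a quick illustrative calculation (the 15:1 ratio is a typical passenger-car value assumed here, not one quoted in the lecture):

```python
# With a steering ratio of 15:1, a steering-wheel angle well past pi/2
# still maps to a small tire angle.
import math

steering_ratio = 15.0                      # wheel degrees per tire degree (assumed)
wheel_angle_rad = math.pi                  # half a turn of the steering wheel
tire_angle_rad = wheel_angle_rad / steering_ratio
print(math.degrees(tire_angle_rad))        # ≈ 12 degrees at the tires
```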
@yasar723 5 years ago
Perfect at 1.25 speed
@FezanRafique 5 years ago
I am watching at 2x, but this guy is an amazing teacher.
@acqua_exp6420 4 years ago
It's super interesting: he really takes his time with the answer when asked a question, but the quality of the answers is actually really high. Also watching at 1.25 :) Great lectures!
@VishalKumarTech 7 years ago
Hi Lex, to work around problems with local minima, is smoothing the source data itself a good solution? The output needs to be approximate anyway. Why not manipulate the source data so that all the gradients are smooth and converge to one minimum?
@henrymhp 7 years ago
To smooth it reliably, you need to know the landscape of the data. In other words you actually need to know the minima. After you know them there's no point in smoothing the data anymore, as you know all minima.
@VishalKumarTech 7 years ago
I am not sure you would actually need to know the minima. I have seen and used algorithms that smooth 3D point data. That is a different field, but the end result of those algorithms is to remove rough edges or deep crevices. I wonder if we can come up with a similar general algorithm to smooth multidimensional data.
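A minimal sketch of that kind of smoothing, applied here to a noisy 1-D target signal such as recorded steering angles (Gaussian filtering is just one assumed choice, not something discussed in the lecture):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Noisy "source data": a smooth steering signal plus jitter (synthetic)
t = np.linspace(0, 10, 500)
angles = np.sin(t) + 0.3 * np.random.default_rng(1).normal(size=t.size)

# Smoothing removes the small-scale roughness that creates a jagged target
smoothed = gaussian_filter1d(angles, sigma=5)

# A model fit to `smoothed` sees a less jagged target than one fit to `angles`
print(np.abs(np.diff(angles)).mean(), np.abs(np.diff(smoothed)).mean())
```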
@snehalbhartiya6724 5 years ago
"If you enjoyed the video," he said! Maybe you should rethink what you say.
@sibyjoseplathottam4828 7 years ago
Great lecture. Thanks a lot for explaining gradient descent in a simple way. What is your opinion on using PSO for training DNNs? Do you think there is scope for research in that area?
@p.o2697 7 years ago
It is said that things like PSO or GA (genetic algorithms) don't work better than SGD for deep learning. There is also existing research in this area. Based on my experience, PSO and GA are more robust against getting stuck in local minima, but they need a lot more computational power (time) than traditional numerical optimization methods. Deep learning already takes many computer-days, so time is critical. Maybe it would be interesting to mix the randomness of the bio-inspired optimization methods (PSO, GA) with the computational efficiency of the classic numerical methods (SGD does some of this mixing with its mini-batches, but there could be other approaches).
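For readers unfamiliar with PSO, a bare-bones particle swarm optimizer on a toy loss with many local minima (the swarm size, inertia, and coefficients are standard textbook defaults, not values from the lecture or the comment):

```python
import numpy as np

def pso(loss, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest, pbest_val = x.copy(), loss(x)             # per-particle best so far
    gbest = pbest[pbest_val.argmin()].copy()         # swarm-wide best so far
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = loss(x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Rastrigin function: lots of local minima; PSO needs many evaluations but no gradients.
rastrigin = lambda x: 10 * x.shape[1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=1)
print(pso(rastrigin))  # should land near the global minimum at the origin
```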
@antikoo1 7 years ago
Thank you for sharing, and I look forward to the coming lectures and the guest talks!
@mehdimashayekhi1675 7 years ago
Lex, Thanks for sharing, really appreciated!
@yangyang5447 7 years ago
Thanks for the course. I learned a lot from it. Thanks!
@feixyzliu5432 7 years ago
Really cool course! Hi Lex, why does only this fourth lecture have no subtitles (the other 4 lectures do)? Could you please upload them? Thank you.
@allyc0des972 7 years ago
Fantastic lecture. It explained a lot.
@pratik245 2 years ago
Can someone tell me why vanishing gradients or exploding gradients occur? Since I am such a dumb guy, I want to correlate it to nature.
@joeyhershel2311 A year ago
Well, since it's a recurrent neural network, your gradient might end up being multiplied by roughly 2 at each step, so as you backpropagate through time the gradient grows like 2^n, where n is the number of times you unroll the recurrent network. Sometimes your network runs 40 or more steps, so the gradient comes out around 2^40, which is about a trillion; no good. The same thing happens in reverse when you repeatedly multiply by a factor smaller than one, which makes your gradient infinitesimally small.
@joeyhershel2311 A year ago
Oh, also the most important part: this is why squashing functions like the sigmoid are used, because as the activations grow larger and larger they get squashed back toward 1 (the exploding gradients themselves are usually handled with gradient clipping or the LSTM's gating).
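A tiny numerical illustration of the point (the factors 2.0 and 0.5 are purely illustrative, nothing from the lecture):

```python
# Backprop through an unrolled RNN multiplies the gradient by (roughly) the same
# recurrent factor at every time step, so the product explodes or vanishes.
steps = 40
for factor in (2.0, 0.5):
    grad = 1.0
    for _ in range(steps):
        grad *= factor
    print(f"factor {factor}: gradient after {steps} steps = {grad:.3e}")
# factor 2.0: ~1.1e+12 (explodes)   factor 0.5: ~9.1e-13 (vanishes)
```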
@Graywolf116 7 years ago
Hi Lex, are the Traffic/Tesla competitions still running? I see they're up on the site but with no end-dates. Were the prizes only for ver. 1.0 or also for the ver 1.1 currently up?
@lexfridman 7 years ago
Hey, yes 1.1 is still running with no firm deadline. I'm working hard to turn 1.1 to 2.0 in May or June with big prizes. Stay tuned.
@Graywolf116 7 years ago
Great to hear. Good luck & I'll be working on it.
@pratik245 2 years ago
Where is that sky view 360?
@israelgoytom6085 7 years ago
Hey Lex, it is a really great lecture. I am working on deep learning, especially on autonomous cars, and these lectures have helped me a lot. But I have some questions. First, how can motion planning be machine learning if we are using GPS? Second (this one might not be related to your lectures): I am thinking about measuring the distance between objects and the camera without using the focal length or anything related to the camera's properties, to make it portable to any camera. Can we do this with deep learning? And one more question: what is the title of the book (the priceless one)? Thanks
@lexfridman 7 years ago
Hey Israel, good questions. Answers:
1. Check out the deep RL lecture for how motion planning can be formulated as a machine learning problem: kzbin.info/www/bejne/h3XdfmuoaLyaeNk
2. The problem you're describing is essentially localization and visual odometry. Deep learning is beginning to be used for these applications, but there's a lot of work left to be done.
3. The deep learning book is called "Deep Learning" and you can find more about it here: www.deeplearningbook.org
@israelgoytom6085 7 years ago
Great , Thank you Very Much!
@misssurreal2602 A year ago
So you are familiar with vanilla... of course...
@ajiteshsingh3764 2 years ago
Beautiful, Just Beautiful.
@ankursharma7909 7 years ago
Hi @Lex Fridman, will this whole course be put on KZbin?
@lexfridman 7 years ago
Ankur, yes, every lecture and most of the guest talks will be uploaded to KZbin. Just follow the site cars.mit.edu and the playlist goo.gl/SLCb1y
@AhmedThabit99 7 years ago
Why is there no subtitle for this video?!
@philwilson1445 7 years ago
What is a 3D convolution of an image? Is there a good link to study it?
@lexfridman 7 years ago
Usually 3D-CNN refers to convolution that works on a sequence of images, not just a single image (even if it has 3 channels). I would recommend you check out this paper from Karpathy et al.: www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Karpathy_Large-scale_Video_Classification_2014_CVPR_paper.pdf
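A short sketch of the idea in PyTorch (the clip length and channel counts are made up; the point is that the kernel also slides along the time axis):

```python
import torch
import torch.nn as nn

# A video clip: (batch, channels, time, height, width)
clip = torch.randn(1, 3, 16, 112, 112)

# A 2-D conv applied per frame sees no temporal context; a 3-D conv convolves
# over (time, height, width) jointly, so this kernel spans 3 consecutive frames.
conv3d = nn.Conv3d(in_channels=3, out_channels=8, kernel_size=(3, 3, 3), padding=1)
features = conv3d(clip)
print(features.shape)  # torch.Size([1, 8, 16, 112, 112])
```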
@philwilson1445 7 years ago
Thanks Lex. Will check it out.
@강태우-w6e 5 years ago
amazing lecture
@Schoppekoning 7 years ago
I like this course. Thank you!
@nguyenthanhdat93 7 years ago
Woo hoo! Excellent lecture!!
@prudhvithirumalaraju1228 7 years ago
Thank you so much Lex!!
@norik1616 2 years ago
You look very cute compared to 2022 here 🤩
@physicsguy877 5 years ago
It is extremely concerning that these students are not expected to know calculus cold. There is no such thing as, "but I understand the concepts". You use basic technical skill to check your understanding of concepts, so without knowing your abcs, you will tend to convince yourself of things that aren't true. There is a lot of democratizing technology out there now where you don't need to know what's going on "under-the-hood", but without at least some knowledge, all you will be able to do is press buttons and make graphs.
@inigoreiriz1299 7 years ago
very nice lecture!
@aidenstill7179 5 years ago
Please answer me. What do I need to know to create my own Python deep learning framework? What are the books and courses to get knowledge for this?
@flamingxombie 7 years ago
Good video!
@manojj888 7 years ago
Thanks for sharing
@FezanRafique 5 years ago
Brilliant
@Anon_life 4 years ago
I love this!
@TheAcujlGamer 4 years ago
Awesome!
@vitoriasousa6298 4 years ago
You're a miserable one
@henryavery4461 2 years ago
@@vitoriasousa6298 Oh dear, Ms. Vitória
@joelwillis2043 26 days ago
No wonder he took up podcasting. He is very confused by simple calculus.
@Steve-3P0 3 years ago
43:43
@justinchen207 11 months ago
God damn, he was chunky. Really came a long way.
@alikhatami6610 10 months ago
Why the hell is this allowed to be on youtube? He is literally just reading through the slides. There is no explanation here.
@usf5914 3 years ago
This presentation was bad. The 1000 thumbs up were for...?
@quintrankid8045 2 years ago
What specifically?
@Something-tx6cl A year ago
@@quintrankid8045 He's just reading crap, not teaching.
@handokosupeno5425 A year ago
Amazing lecture
@pravachanpatra4012 A year ago
54:27