MIT 6.S191 (2023): Recurrent Neural Networks, Transformers, and Attention

646,605 views

Alexander Amini

MIT Introduction to Deep Learning 6.S191: Lecture 2
Recurrent Neural Networks
Lecturer: Ava Amini
2023 Edition
For all lectures, slides, and lab materials: introtodeeplearning.com
Lecture Outline
0:00 - Introduction
3:07 - Sequence modeling
5:09 - Neurons with recurrence
12:05 - Recurrent neural networks
13:47 - RNN intuition
15:03 - Unfolding RNNs
18:57 - RNNs from scratch
21:50 - Design criteria for sequential modeling
23:45 - Word prediction example
29:57 - Backpropagation through time
32:25 - Gradient issues
37:03 - Long short-term memory (LSTM)
39:50 - RNN applications
44:50 - Attention fundamentals
48:10 - Intuition of attention
50:30 - Attention and search relationship
52:40 - Learning attention with neural networks
58:16 - Scaling attention and applications
1:02:02 - Summary
Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!

Comments: 284
@lonewolf-_-8634 · 11 months ago
I just can't believe how amazing the educators are, and damn!! They're providing it out here for free... Hats off to the team!!
@js913 · 11 months ago
The researchers are providing the content for free too.
@jurycould4275 · 2 months ago
Would love it if they found mature experts on these topics instead of children.
@deepakspace · a year ago
I am a professor, and this is the best course I have found to learn about machine learning and deep learning...
@Rhapsody83 · a year ago
I just took a paid course in this subject matter, and this free explanation is so much more intelligible.
@sijiaxiao1557 · a year ago
Agreed.
@avinashdwivedi2015 · 10 months ago
Coursera machine learning specialization
@olutoki · 3 months ago
Why do I think you are an undergraduate student 😂
@PriyanshuAman-dn5jx · a month ago
@@olutokigenes
@tgyawali · 11 months ago
Thank you so much, MIT and instructors, for making these very high quality lectures available to everyone. Students from developing countries who have aspirations to achieve something big can now pursue them with this type of content and information!
@geosaiofficial1070 · 11 months ago
Couldn't agree more. Thanks once again, MIT, for providing world-class education.
@xvaruunx · a year ago
Best end to the lecture: "Thank you for your attention." ❤😂
@gemini_537 · 3 months ago
Summary by Gemini: The lecture is about recurrent neural networks, transformers, and attention. The speaker, Ava, starts the lecture by introducing the concept of sequential data and how it differs from the data we typically work with in neural networks. She then goes on to discuss the different types of sequential modeling problems, such as text generation, machine translation, and image captioning.
Next, Ava introduces the concept of recurrent neural networks (RNNs) and how they can be used to process sequential data. She explains that RNNs are able to learn from the past and use that information to make predictions about the future. However, she also points out that RNNs can suffer from vanishing and exploding gradients, which can make them difficult to train.
To address these limitations, Ava introduces transformers. Transformers are a type of neural network that does not rely on recurrence. Instead, they use attention to focus on the most important parts of the input data. Ava explains that transformers have been shown to be very effective for a variety of sequential modeling tasks, including machine translation and text generation.
In the last part of the lecture, Ava discusses the applications of transformers in various fields, such as biology, medicine, and computer vision. She concludes the lecture by summarizing the key points and encouraging the audience to ask questions.
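Since the summary singles out vanishing and exploding gradients as the key RNN training issue, here is a minimal NumPy sketch of the effect (an illustration written for this page, not the course's lab code; it approximates each backpropagation-through-time Jacobian by the recurrent weight matrix itself and ignores the activation term):

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_norm_after(T, spectral_radius, dim=32):
    # Random recurrent weight matrix, rescaled so that its largest
    # |eigenvalue| equals `spectral_radius`.
    W = rng.standard_normal((dim, dim))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    grad = np.eye(dim)
    for _ in range(T):           # product of T Jacobians, as in BPTT
        grad = grad @ W
    return np.linalg.norm(grad)

for radius in (0.9, 1.0, 1.1):
    norms = [gradient_norm_after(T, radius) for T in (10, 50, 100)]
    print(radius, [f"{n:.3g}" for n in norms])
# radius < 1: norms shrink toward 0 (vanishing gradients)
# radius > 1: norms blow up with sequence length (exploding gradients)
```

LSTM gates, ReLU activations, and careful weight initialization, all mentioned in the lecture, are ways of keeping this repeated product well behaved.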
@Shadowfaex · 2 months ago
👍🌚
@user-mc5ox7cv8k · 24 days ago
You should comment on every video. Liked it.
@pankajsinha385 · a year ago
One of the best lectures I have seen on sequence models, with crystal clear explanations! :)
@lazydart4117 · a year ago
Watching these MIT courses alongside a course at my uni in Poland; so grateful to be able to experience such high quality education.
@GuinessOriginal · a year ago
This girl looks so young
@ukaszkasprzak5921 · a year ago
Can I ask where and what you are studying? (I'm a final-year high school student and I'd like to know where in Poland there are programs of a similar type.)
@lazydart4117 · a year ago
@@ukaszkasprzak5921 Cognitive Science at the University of Warsaw (Kognitywistyka UW). Topics in AI, machine learning, and mathematics are covered alongside humanities subjects: linguistics, philosophy of mind, cognitive psychology, etc. I recommend looking through the study program; a simple Google search is enough.
@sorover111 · a year ago
Thanks to MIT for giving back a little in an impactful way.
@joxa6119 · 7 months ago
Of all the videos on YouTube that explain the Transformer architecture (including the visual explanations), this is the BEST EXPLANATION ever done. Simple, contextual, high-level, with step-by-step complexity progression. Thank you to the educators and MIT!
@roy11883 · 11 months ago
Indeed commendable, the way this lecture has been ordered and a difficult topic like self-attention has been lucidly explained. Thanks to the instructors, really appreciated.
@vsevolodnedora7779 · a year ago
Extremely informative, well structured and paced. A pleasure to watch and follow. Thank you.
@excitingtomorrow · 11 months ago
Your explanation of attention took me 2 revisits to this video to truly, truly understand! But now that I do, my love for deep learning has gotten stronger :)
@manojbp07 · a month ago
oh epochs=3 rofl
@anshikajain3298 · a year ago
This is what we need in this day and age; the teaching is amazing and can be understood by people of varying intelligence. Nice work, and thanks for this course.
@MrPejotah · a year ago
These are some spectacular lessons. Thank you very much for making this available.
@kiarashgeraili8595 · 5 months ago
As a CS student from the University of Tehran, you guys don't have any idea how helpful such content can be, and the fact that all of this is free makes it really amazing. Really appreciate it, Alexander and Ava. Best hopes.
@hamza-325 · 10 months ago
I watched and read a lot of content about Transformers and never understood what those three Q, K, and V vectors were doing, so I couldn't understand how attention works, until today, when I watched this lecture with its analogy of YouTube search and the Iron Man picture. Now it has become much, much clearer! Thanks for the brilliant analogies you are making!
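To make the search analogy concrete, here is a minimal single-head self-attention sketch in NumPy (the sizes and weights are made up for illustration; this is not the lecture's code): each position's query is scored against every position's key, and the softmaxed scores decide how much of each value gets retrieved.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Single-head self-attention: queries 'search' over keys, and the
    resulting weights gate how much of each value is retrieved."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                              # attention-weighted values

rng = np.random.default_rng(0)
seq_len, d = 5, 8                                # hypothetical sizes
X = rng.standard_normal((seq_len, d))            # embedded input sequence
W_q, W_k, W_v = [rng.standard_normal((d, d)) for _ in range(3)]
print(self_attention(X, W_q, W_k, W_v).shape)    # -> (5, 8)
```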
@vohra82 · 4 months ago
I am an auditor and have very little to do with this subject, except for my curiosity. I feel lucky that these kinds of videos are available for free.
@nataliameira2283 · a year ago
Thank you for this amazing content! Many concepts are discussed intuitively!
@hullabulla · a year ago
These lectures are simply amazing. Thank you so much!
@Djellowman · a year ago
She absolutely killed it. Amazing lecture(r)!
@cienciadedados · 10 months ago
I have many years of lecturing experience and just wish I were as competent as she is. Great job.
@umarfarooq-gc7vz · 10 months ago
I was searching about RNNs for my thesis work. She solved it... Nice, Miss :)
@TimelyTimeSeries · 4 months ago
Came here to refresh my memory of deep learning for sequential data. I really like how Ava brings us from one algorithm to another. It makes perfect sense to me.
@nagashayanreddy7237 · 9 months ago
Wow, this lecture on Transformers and Attention was an absolute lifesaver! 🚀🙌 The explanations were crystal clear, and I finally have a solid grasp of these concepts. This video saved me so much time and confusion. Huge thanks to Ava for making such an informative and engaging tutorial! Can't wait to delve deeper into the world of AI and machine learning. 🤖💡
@ViniciusVA1 · a year ago
This is incredible! Thanks a lot for this video; it's going to help me a lot in my undergrad research :)
@nitul_singha · 3 months ago
I have been trying to step into deep learning for the last couple of months. This is the best thing I have found so far. Thank you, sir!
@gidi1899 · a year ago
This is my favorite subject :) (what follows is a self-clarification of words that feel exaggerated)
4:08 - binary classification or filtering is a sequence of steps:
- new recording
- retrieval of a constant record
- compare the new and constant records
- express a property of the comparison process
So, sequencing really is a property of maybe all systems, while "wave sequencing" is built on top of a sequencer system that repeatedly uses the "same actions" per sequence element.
@jerahmeelsangil247 · 4 months ago
The fact that these videos now have millions of views... the world is evolving so fast scientifically, or at least in scientific culture.
@aravindsd6839 · 10 months ago
50:30 - Attention mechanism beautifully explained. Thank you #AvaAmini
@AIlysAI · a year ago
The most intuitive explanation of self-attention I have seen!
@jackq2331 · a year ago
I have used LSTMs and Transformers a lot, but I can still get more insights from this lecture.
@michaelngecha9227 · a year ago
I have been meaning to watch these lectures since 2020, but something always comes up. Now, nothing is going to stop me. Not even nothing. Great lectures, best way to learn.
@josephlee392 · a year ago
Same, man. The academic stress as an undergraduate was my "something always comes up," but since I just graduated a few days ago, I now have no excuse not to indulge myself in these videos lol.
@Itangalo · 10 months ago
This was the third video I watched in search of understanding what transformers are, and by far the best one. Thanks.
@alhassanchoubassi2441 · a year ago
Just watched lecture 1; looking forward to this and the lab coming after. Thanks for this great open resource!
@subcorney · a year ago
Are the labs available as well?
@megalomaniacal · a year ago
I am 6 years old, and I have been able to follow everything said, after watching 3 times.
@johnpaily · a month ago
Life works on what she is speaking about. We need to look deep into life to evolve and make a shift in thinking.
@jamesandino8346 · 4 months ago
Great presentation. At 8:00 it really explained circuitry I was looking forward to exploring.
@mostinho7 · 5 months ago
15:05 - we have different weight matrices for generating h_t and for generating y_t; h_t is generated using two different weight matrices, taking contributions from the previous state and the current input (a sketch of these updates follows this comment)
51:20 - start of the attention explanation
59:30 - each attention head focuses on some part of the input, similar to how each filter in a CNN can learn to extract specific features like horizontal lines, etc.
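A minimal NumPy sketch of those updates (illustrative only: the tanh activation, the dimensions, and the names W_xh, W_hh, W_hy are assumptions for this sketch, not the lecture's TensorFlow lab code):

```python
import numpy as np

class SimpleRNNCell:
    """Sketch of an RNN cell: h_t = tanh(W_hh h_{t-1} + W_xh x_t) and
    y_t = W_hy h_t. Three distinct weight matrices, reused at every
    time step (the parameter sharing the lecture stresses)."""
    def __init__(self, input_dim, hidden_dim, output_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W_xh = 0.1 * rng.standard_normal((hidden_dim, input_dim))
        self.W_hh = 0.1 * rng.standard_normal((hidden_dim, hidden_dim))
        self.W_hy = 0.1 * rng.standard_normal((output_dim, hidden_dim))

    def step(self, x, h_prev):
        h = np.tanh(self.W_hh @ h_prev + self.W_xh @ x)  # state update
        y = self.W_hy @ h                                # output prediction
        return y, h

cell = SimpleRNNCell(input_dim=4, hidden_dim=8, output_dim=3)
h = np.zeros(8)
for x in np.random.default_rng(1).standard_normal((6, 4)):  # a 6-step sequence
    y, h = cell.step(x, h)     # the same three matrices process every step
print(y.shape, h.shape)        # -> (3,) (8,)
```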
@ngrunmann · a year ago
Amazing course! Thank you so much!
@tcoc15yuktamore4 · 11 months ago
How beautifully explained. Loved it 🥰
@nazrinnagori · 5 months ago
Query-key-value pairs always put me off whenever I start to learn about transformers; this time I actually finished the video. Thanks MIT
@goswamimohit · 10 months ago
Wow, just amazing, no words left. Really, thanks 🙏
@chineduezeofor2481 · 19 days ago
Thank you for this beautiful lecture.
@varunahlawat9013 · a year ago
Lovely presentation! It couldn't get more interesting!
@ellenxiao223 · a year ago
Great lecture, learnt a lot. Thank you for sharing!
@monome3038 · 5 months ago
Grateful for the efforts of MIT and its incredible professors delivering high-quality free lectures. Filling every gap I have in my current classes ❤
@RNDbyvaibhav · 3 months ago
The best course so far; I've been doing great since I found these MIT lectures.
@AnonymousIguana · a year ago
Wonderful, easy to focus on and understand :). Great quality! Grateful that this is open source!
@ziku8910 · a year ago
Very intuitive explanation, thanks!
@FREAK-st6kk · a month ago
Whoever is listening to this awesome lecture, I just want to say: Attention is all you need!!
@luizmeier · a year ago
I already have some knowledge of the subject; however, I like to keep myself updated, and there is always something new to learn. She clearly explains how what she is teaching really works. The whole video is worth watching.
@digitalnomad2196 · a year ago
Amazing lecture series; thanks for sharing this knowledge with the world. I am curious if there's a lecture on LSTMs.
@MuhammadIbrahim-ut3rq · 4 months ago
Thank you very much for this great opportunity to watch MIT lectures. I always dreamt of a world-class education; finally I'm doing a degree in AI, and such videos are supporting my learning process very much.
@Reaperaxe9 · a year ago
Fully understand transformers. One of the clearest and most succinct explanations out there, so intuitive. Thank you!!
@mohadreza9419 · 5 months ago
Mr. Amini, thanks for your channel.
@jingji6665 · 10 months ago
Thank you so much for the free course. I benefit from it and appreciate it.
@akj3344 · 11 months ago
The code shown in the RNN Intuition chapter at 14:00 makes things clear af. I literally said "Wow".
@eee8 · 9 months ago
Great teamwork by Alex Amini and Ava Amini.
@estherni9412 · a year ago
Thank you for this amazing and easy-to-understand course! I'm a beginner with RNNs, but I could follow almost all the concepts from this lecture!
@nerualbrain · 5 months ago
Thanks for this amazing course
@bohanwang-nt7qz · 3 months ago
🎯 Course outline for quick navigation:
[00:09-02:02] Sequence modeling with neural networks
- [00:09-00:37] Ava introduces the second lecture, on sequence modeling in neural networks.
- [00:55-01:46] The lecture aims to demystify sequential modeling by starting from foundational concepts and developing intuition through step-by-step explanations.
[02:02-13:24] Sequential data processing and modeling
- [02:02-02:46] Sequential data is all around us, from sound waves to text and language.
- [03:10-03:50] Sequential modeling can be applied to classification and regression problems, with feed-forward models operating in a fixed, static setting.
- [05:02-05:26] The lecture covers building neural networks for recurrent and transformer architectures.
- [11:56-12:37] The RNN captures cyclic temporal dependency by maintaining and updating a state at each time step.
[13:24-20:04] Understanding RNN computation
- [14:40-15:04] Explains the RNN's prediction of the next word, updating the state, and processing sequential information.
- [15:05-15:47] The RNN computes a hidden state update and an output prediction.
- [16:17-17:05] The RNN updates the hidden state and generates output in a single operation.
- [18:45-19:39] The total loss for a particular input to the RNN is computed by summing individual loss terms. The RNN implementation in TensorFlow involves defining an RNN as a layer operation and class, initializing weight matrices and the hidden state, and passing forward through the RNN network to process a given input x.
[20:05-29:13] RNNs in TensorFlow
- [20:05-20:54] TensorFlow abstracts the RNN network definition for efficiency. Practice RNN implementation in today's lab.
- [21:16-21:43] Today's software lab focuses on many-to-many processing and sequential modeling.
- [22:53-23:21] Sequence implies order, impacting predictions. Parameter sharing is crucial for effective information processing.
- [25:04-25:29] Language must be numerically represented for processing, requiring translation into a vector.
- [28:29-28:56] Predict the next word with short, long, and even longer sequences while tracking dependencies across different lengths.
[29:14-41:53] RNN training and issues
- [30:02-30:27] Training neural network models using the backpropagation algorithm for sequential information.
- [30:45-31:43] RNNs use backpropagation through time to adjust network weights and minimize overall loss through individual time steps.
- [32:03-32:57] Repeated multiplications of big weight matrices can lead to exploding gradients, making it infeasible to train the network stably.
- [35:45-37:18] Three ways to mitigate the vanishing gradient problem: change activation functions, initialize parameters well, and use a more robust recurrent unit.
- [36:13-37:01] The ReLU activation function helps mitigate the vanishing gradient problem because its derivative is 1 for positive inputs, and weight initialization with identity matrices prevents rapid shrinkage of weight updates.
- [37:54-38:25] LSTMs are effective at tracking long-term dependencies by controlling information flow through gates.
- [40:18-41:13] Build an RNN to predict musical notes and generate new sequences, e.g. completing Schubert's Unfinished Symphony.
[41:53-50:11] Challenges in RNNs and self-attention
- [43:58-44:40] RNNs face challenges in slow processing and limited capacity for long-memory data.
- [46:37-47:00] Concatenate all time steps into one vector input for the model.
- [47:21-47:45] A feed-forward network lacks scalability, loses order information, and hinders long-term memory.
- [48:11-48:34] Self-attention is a powerful concept in deep learning and AI, foundational in the transformer architecture.
- [48:58-49:25] Exploring the power of self-attention in neural networks, focusing on attending to the important parts of an input example.
[50:13-56:20] Neural network attention mechanism
- [50:13-50:43] Understanding the concept of search and its role in extracting important information from a larger data set.
- [51:52-55:24] Neural networks use self-attention to extract relevant information, as in the example of identifying a relevant video on deep learning, by computing similarity scores between queries and keys.
- [53:32-53:54] A neural network encodes positional information to process all time steps at once (sketched after this outline).
- [55:32-55:57] Comparing vectors using the dot product to measure similarity.
[56:20-01:02:47] Self-attention mechanism in NLP
- [56:20-57:14] Computing attention scores to define relationships in sequential data.
- [59:11-59:39] Self-attention heads extract high-attention features, forming larger network architectures.
- [01:00:32-01:00:56] Self-attention is a key operation in powerful neural networks like GPT-3.
Offered by Coursnap
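On the positional-encoding bullet above: because attention looks at all time steps at once, order has to be injected into the embeddings separately. A hedged sketch of the standard sinusoidal recipe from "Attention Is All You Need" (the lecture may present positional encoding differently):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding: each position gets a unique
    pattern of sines and cosines that is added to the token embeddings,
    letting the model recover order despite processing steps in parallel."""
    pos = np.arange(seq_len)[:, None]           # positions 0..seq_len-1
    i = np.arange(d_model // 2)[None, :]        # index of each sin/cos pair
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)                # odd dimensions: cosine
    return pe

print(positional_encoding(seq_len=10, d_model=16).shape)  # -> (10, 16)
```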
@dotmalec · 3 months ago
What amazing content! Thank you! ❤️
@maduresenerd5716 · 7 months ago
I just started learning about RNNs and LSTMs, especially for NLP, and found this video very helpful. It would be really exciting if you provided a video about transformers in more depth :)
@twiddlebit · a year ago
I come back every year to check these lectures and to see what innovations made it into them. Pleasantly surprised to see the name change, congrats!
@agamersdiary1622 · a year ago
What do you mean by name change?
@diamondshock4405 · 11 months ago
@@agamersdiary1622 This woman got married to one of the other lecturers (the channel owner, Alexander).
@glowish1993 · 6 months ago
Legendary lecture, thank you for sharing!
@NoppadatchSukchote · 11 months ago
Awesome course, very easy to understand+++. Thanks, all MIT instructors 😊😊😊
@elu1 · a year ago
Finally, I understand the transformer concept now. Great lecture series 👍!
@riyajunjannat7294 · 10 months ago
I worked in spatial statistics during my graduate studies. And now I think your classes will push me more and more towards machine learning. Looking forward to applying my learning in my upcoming level of study. Thanks for your efforts 💝
@user-xq3sw9fj3d · 7 months ago
What is this? I watched it, and it's incomprehensible.
@pw7225 · 9 months ago
She is fantastic at teaching. I love how easily understandable she makes it. Thank you, Prof. Amini.
@terryliu3635 · a month ago
That's the reason why people want to go to top universities such as MIT!! The explanations are so clear!!!
@nikteshy9131 · a year ago
Thank you, Ava Soleimany and MIT ☺😊🤗💜
@holderstown643 · a month ago
Thank you for the awesome lecture.
@TJ-hs1qm · a year ago
Best Friday after-work fun, thanks!
@BruWozniak · a year ago
Simply brilliant!
@tapanmahata8330 · 7 months ago
Amazing. Thank you, MIT.
@jennifergo2024 · 5 months ago
Thanks for sharing!
@andyandurkar7814 · 5 months ago
Great material and the best educator! Thank you for the fantastic video! The material was not only informative but also engaging, and the quality of the presentation was top-notch. Your depth of knowledge truly shines through, making the learning experience both enriching and enjoyable. You presented such complex material with such ease and did an exceptional job of communicating the concepts clearly. Great work! And everything is free! Great job, MIT team!!
@chukwunta · a year ago
This is some really deep learning. MIT is the height of institutional education. 👏👏 Thanks for sharing.
@vin-deep · 11 months ago
Best explanation ever!!!! Thank you!
@alexchow9629 · 2 months ago
This is shockingly good. Thank you.
@johnpaily · a month ago
It is striving to bring back our memory of interrelationship and oneness.
@forheuristiclifeksh7836 · 24 days ago
3:00 Sequential data
@johnpaily · a month ago
Salutes. Hope to come back to MIT Deep Learning. I feel you people need to look deep into life.
@Sal-imm · a year ago
Pretty straightforward lecture.
@gksr · 5 months ago
Thank you @MIT
@prishamaiti · a year ago
I've always wanted to study deep learning, but I never really knew where to start. This MIT course was my answer.
@sciencely8601 · 2 months ago
00:16 Building neural networks for handling sequential data
03:19 Sequential data introduces new problem definitions for neural networks
10:03 Recurrent neural networks link computation and information via a recurrence relation
13:37 The RNN processes temporal information and generates predictions
20:22 Key criteria for designing effective RNNs
23:33 Recurrent neural network design criteria and the need for more powerful architectures
30:08 Backpropagation through time in an RNN involves backpropagating the loss through individual time steps and handling sequential information
33:23 The vanishing gradient problem in recurrent neural networks
40:03 RNNs used for music generation and sentiment classification
43:32 RNNs have encoding bottlenecks and processing limitations
49:45 Self-attention involves identifying important parts and extracting relevant information
52:51 Transformers eliminate recurrence and capture positional order information through positional encoding and the attention mechanism
59:35 Self-attention heads extract salient features from data
1:02:49 Starting work on the labs
@theneumann7 · a year ago
Thanks for sharing such high quality content! 👌
@Roy-hk8yh · a year ago
This is amazing. Studying from Kenya, and these are absolutely quality lectures.
@johnpaily · a month ago
Great. I don't know math, but you are feeding my conceptual thoughts about life and the universe from an informational point of view.
@NoppadatchSukchote · 11 months ago
Awesome course, very easy to understand+++
@meghan______669 · 2 months ago
Really helpful! ⭐️
@Jupiter-Optimus-Maximus · 8 months ago
Awesome video!! Very well thought out lecture. Keep rockin'!!! You just solved my problem in my NNW optimization project, in just two sentences. 🤣 For 4 months, this has been driving me completely insane. 💥🤣🔫 I think I'm in love. 😀
@johnpaily · a month ago
Great lecture.
@johanliebert6206 · a month ago
Thank you so much.
@joshismyhandle · a year ago
Thanks for sharing!
@derrickxu908 · 3 months ago
She is so good!!!! 🎉🎉❤❤
@johnpaily · a month ago
The way forward is dynamic quantum computing, possible through black-hole nets.
@yongqinzhao8087 · a year ago
Would like to see the coming lectures and the interesting student projects!
@omerfarukcelebi6813 · 25 days ago
This is the best lecture on YouTube! Thank you for the clear explanation. I wish you could delve deeper into the transformer architecture, though, as it was only covered in the last 15 minutes. Nevertheless, this is the most understandable video on the topic. I've watched nearly all of them, but this one stands out as the best! It would be great if you provided a more detailed explanation of transformers.
@peetprogressngoune3806 · a year ago
I can't wait to watch.
@carloscampo9119 · a year ago
Amazingly simple. Thanks for such a clear explanation.