MIT 6.S191: Recurrent Neural Networks, Transformers, and Attention

239,637 views

Alexander Amini

1 day ago

Comments: 121
@coplain
@coplain 5 күн бұрын
Twenty years ago, during my engineering days, even getting a course textbook from my college library was a challenging task. I had to pre-book it just to get my turn. On top of that, if the professor didn’t teach the material well in class, the course and grades would be at risk. The fact that I can now learn MIT content while sitting on my couch in India makes me feel that this generation is truly blessed to have such privileges. Thank you, MIT!
@marlhex6280
@marlhex6280 7 ай бұрын
Personally, I love the way Ava articulated each word and how she mapped the problem in her head. Great job
@samiragh63
@samiragh63 8 ай бұрын
Can't be waiting for another extraordinary lecture. Thank you Alex and Ava.
@daniyalkabir6527
@daniyalkabir6527 3 ай бұрын
These lectures are extremely high quality. Thank you :) for posting them online so that we can learn from one of the best universities in the world.
@ajithdevadiga9939
@ajithdevadiga9939 4 ай бұрын
This is a great summarization of sequence models. Truly amazed at the aura of knowledge.
@shahriarahmadfahim6457
@shahriarahmadfahim6457 8 ай бұрын
Can't believe how amazingly the two lecturers squeeze so much content and explain with such clarity in an hour! Would be great if you published the lab with the preceding lecture, because the lecture ended by setting up the mood for the lab haha. But not complaining, thanks again for such amazing stuff!
@jamesgambrah58
@jamesgambrah58 8 ай бұрын
As I await the commencement of this lecture, I reflect fondly on my past experiences, which have been nothing short of excellent.
@DonG-1949
@DonG-1949 8 ай бұрын
Indeed.
@vampiresugarpapi
@vampiresugarpapi 7 ай бұрын
Indubitably
@frankhofmann5819
@frankhofmann5819 8 ай бұрын
I'm sitting here in wonderful Berlin at the beginning of May and looking at this incredibly clear presentation! Wunderbar! And thank you very much for the clarity of your logic!
@pavalep
@pavalep 8 ай бұрын
Thank you for being the pioneers in teaching Deep Learning to Common folks like me :) Thank you Alexander and Ava 👍
@kapardhikannekanti3544
@kapardhikannekanti3544 4 ай бұрын
This is one of the best and most engaging sessions I've ever attended. The entire hour was incredibly smooth, and I was captivated the entire time.
@joban223
@joban223 4 ай бұрын
Can an 11th-grade student understand this? I mean, I tried but I am not able to understand what's going on.
@DanielHinjosGarcía
@DanielHinjosGarcía 6 ай бұрын
This was an amazing class and one of the clearest introductions to Sequence Models that I have ever seen. Great work!
@dr.rafiamumtaz1712
@dr.rafiamumtaz1712 7 ай бұрын
excellent way of explaining the deep learning concepts
@wolpumba4099
@wolpumba4099 8 ай бұрын
*Abstract*
This lecture delves into the realm of sequence modeling, exploring how neural networks can effectively handle sequential data like text, audio, and time series. Beginning with the limitations of traditional feedforward models, the lecture introduces Recurrent Neural Networks (RNNs) and their ability to capture temporal dependencies through the concept of "state." The inner workings of RNNs, including their mathematical formulation and training using backpropagation through time, are explained. However, RNNs face challenges such as vanishing gradients and limited memory capacity. To address these limitations, Long Short-Term Memory (LSTM) networks with gating mechanisms are presented. The lecture further explores the powerful concept of "attention," which allows networks to focus on the most relevant parts of an input sequence. Self-attention and its role in Transformer architectures like GPT are discussed, highlighting their impact on natural language processing and other domains. The lecture concludes by emphasizing the versatility of attention mechanisms and their applications beyond text data, including biology and computer vision.

*Sequence Modeling and Recurrent Neural Networks*
- 0:01: This lecture introduces sequence modeling, a class of problems involving sequential data like audio, text, and time series.
- 1:32: Predicting the trajectory of a moving ball exemplifies the concept of sequence modeling, where past information aids in predicting future states.
- 2:42: Diverse applications of sequence modeling are discussed, spanning natural language processing, finance, and biology.

*Neurons with Recurrence*
- 5:30: The lecture delves into how neural networks can handle sequential data.
- 6:26: Building upon the concept of perceptrons, the idea of recurrent neural networks (RNNs) is introduced.
- 7:48: RNNs address the limitations of traditional feedforward models by incorporating a "state" that captures information from previous time steps, allowing the network to model temporal dependencies.
- 10:07: The concept of "state" in RNNs is elaborated upon, representing the network's memory of past inputs.
- 12:23: RNNs are presented as a foundational framework for sequence modeling tasks.

*Recurrent Neural Networks*
- 12:53: The mathematical formulation of RNNs is explained, highlighting the recurrent relation that updates the state at each time step based on the current input and previous state.
- 14:11: The process of "unrolling" an RNN is illustrated, demonstrating how the network processes a sequence step-by-step.
- 17:17: Visualizing RNNs as unrolled networks across time steps aids in understanding their operation.
- 19:55: Implementing RNNs from scratch using TensorFlow is briefly discussed, showing how the core computations translate into code.

*Design Criteria for Sequential Modeling*
- 22:45: The lecture outlines key design criteria for effective sequence modeling, emphasizing the need for handling variable sequence lengths, maintaining memory, preserving order, and learning conserved parameters.
- 24:28: The task of next-word prediction is used as a concrete example to illustrate the challenges and considerations involved in sequence modeling.
- 25:56: The concept of "embedding" is introduced, which involves transforming language into numerical representations that neural networks can process.
- 28:42: The challenge of long-term dependencies in sequence modeling is discussed, highlighting the need for networks to retain information from earlier time steps.

*Backpropagation Through Time*
- 31:51: The lecture explains how RNNs are trained using backpropagation through time (BPTT), which involves backpropagating gradients through both the network layers and time steps.
- 33:41: Potential issues with BPTT, such as exploding and vanishing gradients, are discussed, along with strategies to mitigate them.

*Long Short Term Memory (LSTM)*
- 37:21: To address the limitations of standard RNNs, Long Short-Term Memory (LSTM) networks are introduced.
- 37:35: LSTMs employ "gating" mechanisms that allow the network to selectively retain or discard information, enhancing its ability to handle long-term dependencies.

*RNN Applications*
- 40:03: Various applications of RNNs are explored, including music generation and sentiment classification.
- 40:16: The lecture showcases a musical piece generated by an RNN trained on classical music.

*Attention Fundamentals*
- 44:00: The limitations of RNNs, such as limited memory capacity and computational inefficiency, motivate the exploration of alternative architectures.
- 46:50: The concept of "attention" is introduced as a powerful mechanism for identifying and focusing on the most relevant parts of an input sequence.

*Intuition of Attention*
- 48:02: The core idea of attention is to extract the most important features from an input, similar to how humans selectively focus on specific aspects of visual scenes.
- 49:18: The relationship between attention and search is illustrated using the analogy of searching for relevant videos on YouTube.

*Learning Attention with Neural Networks*
- 51:29: Applying self-attention to sequence modeling is discussed, where the network learns to attend to relevant parts of the input sequence itself.
- 52:05: Positional encoding is explained as a way to preserve information about the order of elements in a sequence.
- 53:15: The computation of query, key, and value matrices using neural network layers is detailed, forming the basis of the attention mechanism.

*Scaling Attention and Applications*
- 57:46: The concept of attention heads is introduced, where multiple attention mechanisms can be combined to capture different aspects of the input.
- 58:38: Attention serves as the foundational building block for Transformer architectures, which have achieved remarkable success in various domains, including natural language processing with models like GPT.
- 59:13: The broad applicability of attention beyond text data is highlighted, with examples in biology and computer vision.

I summarized the transcript with Gemini 1.5 Pro.
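As a companion to the 12:53-19:55 bullets above, here is a minimal NumPy sketch of the vanilla RNN recurrence they describe. The lecture's lab uses TensorFlow; the weight names and toy sizes here are illustrative choices, not the course code.

```python
import numpy as np

# Vanilla RNN cell, assuming the update rule from the lecture:
# h_t = tanh(W_hh @ h_{t-1} + W_xh @ x_t + b_h),  y_t = W_hy @ h_t + b_y
rng = np.random.default_rng(0)
input_dim, hidden_dim, output_dim = 8, 16, 4

W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_hy = rng.normal(scale=0.1, size=(output_dim, hidden_dim))
b_h = np.zeros(hidden_dim)
b_y = np.zeros(output_dim)

def rnn_step(x_t, h_prev):
    """One time step: combine the current input with the previous state."""
    h_t = np.tanh(W_hh @ h_prev + W_xh @ x_t + b_h)
    y_t = W_hy @ h_t + b_y
    return y_t, h_t

# "Unrolling" over a sequence: the same weights are reused at every step.
sequence = rng.normal(size=(5, input_dim))   # 5 time steps of toy input
h = np.zeros(hidden_dim)                     # initial state
for x_t in sequence:
    y, h = rnn_step(x_t, h)
print(y.shape, h.shape)                      # (4,) (16,)
```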
@_KillerRobots
@_KillerRobots 7 ай бұрын
Very nice Gemini summary. Single output or chain?
@wolpumba4099
@wolpumba4099 7 ай бұрын
@@_KillerRobots I used the following single prompt: Create abstract and summarize the following video transcript as a bullet list. Prepend each bullet point with starting timestamp. Don't show the ending timestamp. Also split the summary into sections and create section titles. `````` create abstract and summary
@pw7225
@pw7225 8 ай бұрын
Ava is such a talented teacher. (And Alex, too, of course.)
@beAstudentnooneelse
@beAstudentnooneelse 7 ай бұрын
It's a great place to apply all learning strategies for jetpack classes, love it, I just can't wait for more and more in-depth knowledge.
@clivedsouza6213
@clivedsouza6213 7 ай бұрын
The intuition building was stellar, really eye opening. Thanks!
@muralidhar40
@muralidhar40 Ай бұрын
RNN intuition @ 14:20 was helpful.
@hafsausman396
@hafsausman396 Ай бұрын
Just More Than Fantastic! Thank you so much!
@karanacharya18
@karanacharya18 6 ай бұрын
Mind = Blown. Ava, you're a fantastic teacher. This is the best intuitive + technical explanation of Sequence Modeling, RNNs and Attention on the internet. Period.
@delgaldo2
@delgaldo2 7 ай бұрын
Excellent video series. Thanks for making them available online! A suggestion when explaining Q, K, V: I would start with a symmetric attention weighting matrix and go on with that at first. Then give an example which shows that attention is not symmetric, as is the case between the words "beautiful" and "painting" in the sentence "Alice noticed the beautiful painting". This motivates why we would want to train separate networks for Q and K.
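To make that suggestion concrete, here is a small NumPy sketch (toy sizes, random weights, names of my own choosing) contrasting a single shared projection, which forces symmetric scores, with separate query/key projections, which allow the asymmetry described above:

```python
import numpy as np

rng = np.random.default_rng(1)
seq_len, d_model, d_head = 5, 16, 8
X = rng.normal(size=(seq_len, d_model))           # toy token embeddings

# One shared projection: score(i, j) == score(j, i), i.e. symmetric attention
W_shared = rng.normal(scale=0.1, size=(d_model, d_head))
E = X @ W_shared
sym_scores = E @ E.T
print(np.allclose(sym_scores, sym_scores.T))      # True

# Separate query/key projections: scores are no longer symmetric, so
# "beautiful" can attend to "painting" more than the reverse.
W_q = rng.normal(scale=0.1, size=(d_model, d_head))
W_k = rng.normal(scale=0.1, size=(d_model, d_head))
Q, K = X @ W_q, X @ W_k
scores = Q @ K.T / np.sqrt(d_head)
print(np.allclose(scores, scores.T))              # False (in general)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

A = softmax(scores)
print(A.sum(axis=1))                              # each token's weights sum to 1
```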
@ObaroJohnson-q8v
@ObaroJohnson-q8v 6 ай бұрын
Very audible, and the lecture was delivered confidently and perfectly. Thanks
@wuyanfeng42
@wuyanfeng42 2 ай бұрын
Thank you so much. The explanation of self-attention is so clear.
@victortg0
@victortg0 8 ай бұрын
This was an extraordinary explanation of Transformers!
@weelianglien687
@weelianglien687 7 ай бұрын
This is not an easy topic to explain, but you explained it very well and with good presentation skills!
@workisimpossibletowork7415
@workisimpossibletowork7415 19 күн бұрын
I had a headache learning the concept of RNNs when I started from zero and had only watched the prior video from Alexander. Until now, after having systematically learned RNNs from another, more beginner-friendly course, I have started to realize that Ava actually gave a great interpretation here. I found that the difficulty I faced when first watching this video was precisely caused by the simplified schematic of the RNN at 11:20, with h_t appearing as an output of the current time step and simultaneously as an input of the current time step, which is very confusing. I now realize that this schematic should be plotted in 3D to avoid confusion. That is, the flow of h_t being rightwards-->upwards-->leftwards-->downwards-->rightwards is actually wrong; it should be rightwards-->into the screen-->leftwards-->out of the screen-->rightwards.
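Put differently, the looping arrow in the 2-D schematic is just the state being handed from one step to the next. A plain-Python sketch of that data flow (toy update, illustrative only):

```python
# h_t is computed once per step and then becomes the h_prev of the *next*
# step, so the loop in the diagram is a chain when unrolled in time.

def rnn_step(x_t, h_prev):
    """Stand-in for h_t = tanh(W_hh @ h_prev + W_xh @ x_t); weights omitted."""
    return 0.5 * h_prev + x_t          # toy update, just to show the flow

h = 0.0                                # initial state h_{-1} (typically zeros)
for t, x_t in enumerate([1.0, 2.0, 3.0]):
    h = rnn_step(x_t, h)               # the h produced here feeds step t+1
    print(f"step {t}: h_t = {h}")
```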
@a0z9
@a0z9 8 ай бұрын
I wish everyone were this competent. It's a pleasure to learn from people who have clear ideas.
@otjeutjelekgoko9253
@otjeutjelekgoko9253 3 ай бұрын
Thank you for an amazing lecture, easy to follow a complex topic.
@ERalyGainulla
@ERalyGainulla Ай бұрын
Sequence Modeling and Recurrent Neural Networks
0:01 - Introduction to sequence modeling: working with time series, text, audio. Example: predicting the trajectory of a moving ball.
2:42 - Example applications: natural language processing (NLP), finance, biology.
Neurons with Recurrence
5:30 - How neural networks can work with sequential data.
6:26 - Introduction of recurrent neural networks (RNNs): why they are used instead of traditional networks.
10:07 - The notion of state: memory of previous inputs.
Recurrent Neural Networks
12:53 - Mathematical formulation of RNNs: equations and operating principles.
14:11 - Unrolling the RNN in time.
17:17 - Visualizing and understanding the steps of sequence processing.
Design Criteria for Sequential Modeling
22:45 - Key design criteria: variable sequence length, preserving order, memory.
24:28 - Example: predicting the next word in a sentence.
Backpropagation Through Time
31:51 - How RNNs are trained: backpropagation through time (BPTT).
33:41 - Problems with BPTT: vanishing and exploding gradients.
Long Short Term Memory (LSTM)
37:21 - Introducing LSTMs to address the problems of standard RNNs.
37:35 - How the gates work (input, forget, output).
RNN Applications
40:03 - Example RNN applications: music generation, sentiment classification of text.
Attention Fundamentals
44:00 - Limitations of RNNs that motivate attention mechanisms.
46:50 - The concept of attention: selecting the key parts of a sequence.
Intuition of Attention
48:02 - Core idea: attention selects the important features, similar to human perception.
Learning Attention with Neural Networks
51:29 - The self-attention mechanism: how the network focuses on relevant parts of the sequence.
53:15 - Using Query, Key, and Value matrices to compute attention.
Scaling Attention and Applications
57:46 - Multi-head attention mechanisms (attention heads).
58:38 - Attention as the foundation of the Transformer architecture: NLP, biology, computer vision.
@ИванЛеонов-о3в
@ИванЛеонов-о3в Ай бұрын
thanks
@pavin_good
@pavin_good 8 ай бұрын
Thank you for uploading the lectures. It's helpful for students all around the globe.
@mikapeltokorpi7671
@mikapeltokorpi7671 8 ай бұрын
Very good lecture. Also perfect timing in respect of my next academic and professional steps.
@sportzarena2727
@sportzarena2727 Ай бұрын
This is Golden!! Thanks for posting
@RealityRiddles
@RealityRiddles 29 күн бұрын
Absolutely amazing. Thank you for your teaching.
@nomthandazombatha2568
@nomthandazombatha2568 8 ай бұрын
love her energy
@DrJochenLeidner
@DrJochenLeidner 3 ай бұрын
Thanks, it's a great and intense/compact DL overview, free and open from MIT. Personally, I'd introduce LSTMs a bit later (38 minutes into the 2nd lecture may leave many students behind) and say a bit more about how things happened historically (Elman, Schmidhuber, Vaswani).
@kiranbhanushali7069
@kiranbhanushali7069 7 ай бұрын
Extraordinary explanation and teaching. Thank you!!
@hopeafloats
@hopeafloats 8 ай бұрын
Amazing stuff, thanks to every one associated with #AlexanderAmini channel.
@prestoX
@prestoX 5 ай бұрын
Great work, guys. Looking forward to learning more from you in the succeeding videos.
@samyakbharsakle
@samyakbharsakle 6 күн бұрын
articulation is on point
@TheSauravKokane
@TheSauravKokane 4 ай бұрын
1. Here we are taking "h" as the previous history factor or hidden state; is it single-dimensional or multidimensional? 2. What is the behavior of "h", the hidden state, inside the NN or inside each layer of the RNN (within a single time step)? 3. How is a mismatch between the number of input features and the number of output features handled? For example, consider image captioning: here we give a fixed number of input parameters, but what determines how many words are generated as a caption? Or consider generating sentences related to a given word: here we give one word as input, but what decides the length of the output?
@baluandhavarapu
@baluandhavarapu Ай бұрын
1) Same as the number of neurons in the layer. Each neuron's value is a single number. 2) It is literally the values of the hidden layer of neurons. We take their previous values and feed them back into the layer to calculate its next value. 3) We use "encoder-decoder" architectures. Here, the encoder reads each word one by one without outputting anything (no y). Then, once we have the encoding (the final h), the decoder takes that and generates the output sequence without taking any words as input (no x).
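A minimal NumPy sketch of that encoder-decoder idea, under the simplifications in the reply above (toy shapes and weight names of my own choosing; real systems add embeddings, start/end tokens, and usually attention):

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, d_h, vocab = 8, 16, 100

def encoder_step(x_t, h):
    return np.tanh(W_enc_h @ h + W_enc_x @ x_t)   # reads input, emits no y

def decoder_step(h):
    h = np.tanh(W_dec_h @ h)                      # generates, takes no x
    logits = W_out @ h                            # one score per vocabulary entry
    return h, int(np.argmax(logits))

W_enc_x = rng.normal(scale=0.1, size=(d_h, d_in))
W_enc_h = rng.normal(scale=0.1, size=(d_h, d_h))
W_dec_h = rng.normal(scale=0.1, size=(d_h, d_h))
W_out = rng.normal(scale=0.1, size=(vocab, d_h))

# Encode a 6-step input sequence into a single context vector h ...
h = np.zeros(d_h)
for x_t in rng.normal(size=(6, d_in)):
    h = encoder_step(x_t, h)

# ... then decode an output sequence of a *different* length (here 3 tokens).
caption = []
for _ in range(3):
    h, token_id = decoder_step(h)
    caption.append(token_id)
print(caption)
```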
@elaina1002
@elaina1002 8 ай бұрын
I am currently studying deep learning and find it very encouraging. Thank you very much!
@srirajaniswarnalatha2306
@srirajaniswarnalatha2306 8 ай бұрын
Thanks for your detailed explanation
@sammyfrancisco9966
@sammyfrancisco9966 5 ай бұрын
More complex than the first but brilliantly explained
@danielberhane2559
@danielberhane2559 8 ай бұрын
Thank you for another great lecture, Alexander and Ava !!!
@TheCosmic_Chronicles
@TheCosmic_Chronicles 20 күн бұрын
Nice Abstract lecture!
@henryguy3722
@henryguy3722 7 ай бұрын
The first lecture was fairly interesting, mainly because we started with an example. I wish why RNNs are needed for sequence modeling could also be explained with a more practical example, probably like next-word prediction. I am about 20 minutes into the lecture and feeling completely lost. I think too much math can make it difficult to understand the user story / use case we are trying to solve.
@mrkshsbwiwow3734
@mrkshsbwiwow3734 8 ай бұрын
what an awesome lecture, thank you!
@shivangsingh603
@shivangsingh603 8 ай бұрын
That was explained very well! Thanks a lot Ava
@anlcanbulut3434
@anlcanbulut3434 7 ай бұрын
One of the best explanations of self attention! It was very intuitive. Thank you so much
@mailanbazhagan
@mailanbazhagan 4 ай бұрын
Simply superb!
@jessenyokabi4290
@jessenyokabi4290 8 ай бұрын
Another extraordinary lecture FULL of refreshing insights. Thank you, Alex and Ava.
@gmemon786
@gmemon786 8 ай бұрын
Great lecture, thank you! When will the labs be available?
@AleeEnt863
@AleeEnt863 8 ай бұрын
Thank you, Ava!
@Maria-yx4se
@Maria-yx4se 3 ай бұрын
been softmaxxing since this one
@anwaargh5204
@anwaargh5204 8 ай бұрын
Mistake in the slide shown at 18:38: the last layer is layer t, not layer 3 (i.e., the "..." means that there is at least one layer not shown).
@leesiheon8013
@leesiheon8013 6 ай бұрын
Thank you for your lecture!
@aierik
@aierik 5 ай бұрын
Even though I'm not a programmer, I understood her.
@gustavodelgadillo7758
@gustavodelgadillo7758 7 ай бұрын
What great content!
@ikpesuemmanuel7359
@ikpesuemmanuel7359 8 ай бұрын
When will the labs be available, and how can one have access? It was a great session that improved my knowledge of sequential modeling and introduced me to Self-attention. Thank you, Alex and Ava.
@enisten
@enisten 8 ай бұрын
How do you predict the first word? Can you only start predicting after the first word has come in? Or can you assume a zero input to predict the first word?
@turhancan97
@turhancan97 8 ай бұрын
Initially, N-gram statistical models were commonly used for language processing. This was followed by vanilla neural networks, which were popular but not enough. The popularity then shifted to RNN and its variants, despite their own limitations discussed in the video. Currently, the transformer architecture is in use and has made a significant impact. This is evident in applications such as ChatGPT, Gemini, and other Language Models. I look forward to seeing more advanced models and their applications in the future.
@yameen3448
@yameen3448 21 күн бұрын
54:20 Dot product is not cosine similarity. This is a mistake by Ava.
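For readers wondering about the distinction: cosine similarity is the dot product divided by the vector norms, so the two only coincide for unit-length vectors (and scaled dot-product attention divides by sqrt(d_k), not by the norms). A two-line check:

```python
import numpy as np

q = np.array([3.0, 0.0])
k = np.array([10.0, 0.0])

dot = q @ k                                              # 30.0, grows with magnitude
cosine = dot / (np.linalg.norm(q) * np.linalg.norm(k))   # 1.0, direction only
print(dot, cosine)
```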
@sachinknight19
@sachinknight19 7 ай бұрын
I'm a new AI student listening to you ❤❤
@dcgray2
@dcgray2 6 ай бұрын
@ 20:00 isn't h_t acting as the bias for each step in the RNN?
@zahramanafi4793
@zahramanafi4793 6 ай бұрын
Brilliant!
@ps3301
@ps3301 8 ай бұрын
Are there any similar lessons on liquid neural networks, with some real-number calculations?
@WllArjun-v7s
@WllArjun-v7s 10 күн бұрын
@@ps3301 just look for projects
@Priyanshuc2425
@Priyanshuc2425 8 ай бұрын
Hey, if possible please upload how you implement these things practically in the labs. Theory is important, but so is practical work.
@TheNewton
@TheNewton 8 ай бұрын
51:52 Position Encoding - isn't this just the same as giving everything a number/time step, but with a different name (order, sequence, time, etc.)? So we're still kind of stuck with discrete steps. If everything is coded by position in a stream of data, won't parts at the end of the stream be further and further away in that space from the beginning? So if a long sentence started with a pronoun but then ended with a noun, relating the pronoun to the noun would get harder and harder: 'it woke me early this morning, time to walk the cat'
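For reference, a minimal sketch of the sinusoidal positional encoding from the original Transformer paper (one common choice; learned and rotary encodings also exist). Each position gets a whole vector added to its token embedding rather than a single scalar index, and self-attention still relates any two positions with one direct dot product, however far apart they sit in the sequence:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    pos = np.arange(seq_len)[:, None]              # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]           # (1, d_model/2)
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = positional_encoding(seq_len=50, d_model=64)
print(pe.shape)   # (50, 64): one vector per position, added to the embeddings
```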
@19AKS58
@19AKS58 3 ай бұрын
It seems to me that the data comprising the KEY matrix introduces a large external bias on the QUERY matrix, or am I mistaken? thx
@THEAKLAKERS
@THEAKLAKERS 7 ай бұрын
This was awesome, thank you so much. Does someone know if the lab or similar exercises are available as well?
@enisten
@enisten 8 ай бұрын
How can we be sure that our predicted output vector will always correspond to a word? There are an infinite number of vectors in any vector space but only a finite number of words in the dictionary. We can always compute the training loss as long as every word is mapped to a vector, but what use is the resulting calibrated model if its predictions will not necessarily correspond to a word?
@leonegao8925
@leonegao8925 5 ай бұрын
Thanks very much
@wingsoftechnology5302
@wingsoftechnology5302 8 ай бұрын
Can you please share the lab session or code as well, to try out?
@DennisSimplifies
@DennisSimplifies 3 ай бұрын
Are they siblings? Alex and Ava?
@saimahassan9230
@saimahassan9230 6 ай бұрын
So what would the past memory be at time step 0, (x_0, h_{-1})?
@ceeyjae
@ceeyjae Ай бұрын
thank youu
@aminmahfuz5278
@aminmahfuz5278 7 ай бұрын
Is this topic harder, or does Alexander teach better?
@vishnuprasadkorada1187
@vishnuprasadkorada1187 8 ай бұрын
Where can we find the software lab materials? I am eager to implement the concepts practically 🙂 Btw, I love these lectures as an ML student... Thank you 😊
@abdelazizeabdullahelsouday8118
@abdelazizeabdullahelsouday8118 8 ай бұрын
Please, if you know, let me know. Thanks in advance.
@AkkurtHakan
@AkkurtHakan 8 ай бұрын
@@abdelazizeabdullahelsouday8118 links in the syllabus, docs.google.com/document/d/1lHCUT_zDLD71Myy_ulfg7jaciCj1A7A3FY_-TFBO5l8/
@SandeepPawar1
@SandeepPawar1 8 ай бұрын
Fantastic 🎉 thank you
@chezhian4747
@chezhian4747 8 ай бұрын
Dear Alex and Ava, thank you so much for the insightful sessions on deep learning, which are the best I've come across on YouTube. I have a query and would appreciate a response from you. If we want to translate a sentence from English to French and we use an encoder-decoder transformer architecture, then, based on the context vector generated by the encoder, the decoder predicts the translated words one by one. My question is: for the logits generated by the decoder output, does the transformer model provide a weight for every word available in French? For example, if we consider that there are N words in French, and the softmax function is applied to the logits generated by the decoder, does softmax predict the probability for all those N words?
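In the standard encoder-decoder Transformer setup, the decoder's final linear layer does produce one logit per vocabulary entry at each step, and softmax turns those into a probability for every entry (in practice the vocabulary is subword tokens rather than literally every French word). A minimal sketch with illustrative sizes:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

vocab_size, d_model = 32000, 512
rng = np.random.default_rng(3)
W_out = rng.normal(scale=0.02, size=(vocab_size, d_model))

decoder_state = rng.normal(size=d_model)   # decoder output for the current position
logits = W_out @ decoder_state             # one score per vocabulary entry
probs = softmax(logits)                    # one probability per vocabulary entry
print(probs.shape, probs.sum())            # (32000,) ~1.0
next_token = int(np.argmax(probs))         # greedy pick; beam search is also common
```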
@TheViral_fyp
@TheViral_fyp 8 ай бұрын
Wow, great 👍 job buddy. I want your book suggestion for DSA!
@giovannimurru
@giovannimurru 8 ай бұрын
Great lecture as always! Can’t wait to start the software labs. Just curious why isn’t the website served over https? Is there any particular reason?
@draganostojic6297
@draganostojic6297 3 ай бұрын
It’s very much like a partial differential equation isn’t it?
@mdidris7719
@mdidris7719 8 ай бұрын
Excellent, so great. Idris, Italy.
@SheTami-k8i
@SheTami-k8i 6 ай бұрын
very good I like
@futuretl1250
@futuretl1250 8 ай бұрын
Recurrent neural networks are easier to understand if we understand recursion😁
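In that spirit, a toy sketch (illustrative update of my own choosing) of the RNN state written as a literal recursion, with h_t defined in terms of h_{t-1}:

```python
# The state at step t is defined by the state at step t-1 plus the new input.
def h(t, xs):
    if t < 0:
        return 0.0                       # base case: the initial state
    return 0.5 * h(t - 1, xs) + xs[t]    # h_t = f(h_{t-1}, x_t), toy f

print(h(2, [1.0, 2.0, 3.0]))             # 4.25
```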
@abdelazizeabdullahelsouday8118
@abdelazizeabdullahelsouday8118 8 ай бұрын
I was waiting for this since the last one last week. Amazing! Please, I have sent you an email asking about some queries; could you let me know how I can get the answers, or if there is any channel to connect? Thanks in advance.
@Musikvidedo
@Musikvidedo Ай бұрын
45:12
@jessgeorgesaji6263
@jessgeorgesaji6263 6 ай бұрын
17:51
@lucasgandara4175
@lucasgandara4175 8 ай бұрын
Dude, How i'd love to be there sometime.
@AdamsOctavia-m2f
@AdamsOctavia-m2f 4 ай бұрын
Bode Divide
@henk_iii
@henk_iii 5 ай бұрын
Once again Ava's wearing a white shirt when talking RNNs
@HabtamuSamuel-lq8nu
@HabtamuSamuel-lq8nu 5 ай бұрын
❤❤
@aspartamexylitol
@aspartamexylitol 4 ай бұрын
Not as clear as Alexander's explanation of the technical details in the first lecture, unfortunately; the big-picture slides are good though.
@andrewign5806
@andrewign5806 4 ай бұрын
CatGPT? :D 58m:51s
@Mantra-x1d
@Mantra-x1d 5 ай бұрын
Testing
@roxymigurdia1
@roxymigurdia1 8 ай бұрын
thanks daddy
@piotrr5439
@piotrr5439 4 ай бұрын
Alex is so much better at presenting.
@SamsonBoicu
@SamsonBoicu 4 ай бұрын
Because he is a man.
@missmytime
@missmytime 2 ай бұрын
Totally disagree. They’re both excellent. This is a difficult topic to break down.
@LajuanaPudenz-w7f
@LajuanaPudenz-w7f 4 ай бұрын
Caesar Harbor
@01_abhijeet49
@01_abhijeet49 8 ай бұрын
Miss seemed stressed about whether she had made the presentation too complex.
@Parveen-g3g
@Parveen-g3g 3 ай бұрын
✋🏻