Illustrated Guide to Transformers Neural Network: A step by step explanation

995,593 views

The AI Hacker

Comments: 623
@Leon-pn6rb 4 years ago
This is great, but I would've loved it if you had taken a sample sentence as input and shown us how it transforms as it moves through the different parts of the transformer. Perhaps an idea for the next video!
@tunisitherapie3078 2 years ago
@The A.I. Hacker - Michael Phi please do!
@aysesalihasunar9563 7 months ago
The video actually led me to expect this example as well! It would be highly beneficial.
@MinhNguyen-ro6lm 4 years ago
I must say you've given the best explanation of transformers, and it's saved me lots of time studying the original paper. Please produce more vids like this; I'd recommend the BERT family and the GPT family as well 👏👍
@xtremechaos5771 4 years ago
I agree. I can't seem to find a good explanation on the BERT model
@sank_y 4 years ago
12:56 The encoder's hidden states are key-value pairs; in the decoder, the previous output is compressed into a query. The next output is produced by mapping this query against the set of keys and values.
@mrexnx 3 years ago
This is critical! I was pretty confused about this for a while until I realized he swapped the Query and Values by accident.
@Random4Logic 2 years ago
Ah, someone else realized it. This comment should be pinned ^^
@MJJ12337 2 years ago
you are correct
@leif1075 1 year ago
@@mrexnx Correct me if I'm wrong, but isn't the only reason you apply the mask, so it doesn't attend to "future" words in the sentence, the nature of the English language, since English is written left to right, unlike some other languages? Otherwise you wouldn't have that mask, because you might need to attend to words on the right or the left as well?
@fahmidhossainSakib 1 year ago
@@leif1075 I thought something like that too. That would mean that for Arabic, this direction of masking shouldn't work!
@Random4Logic 2 years ago
There is actually a small mistake at 12:56: the encoder's outputs are the VALUES and KEYS for the decoder's second attention block. So it is: value and key from the encoder are combined with the query from the decoder. From the "Attention Is All You Need" paper: "In 'encoder-decoder attention' layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder."
@curiousseeker3784 1 year ago
I know that's true, but it still doesn't make sense to me. What's the use of the masked attention scores we just computed in that masked multi-head attention layer if they're not carried forward?
@weibangzhang3994 1 year ago
yes you are right
@vineetsrivastava4906 1 year ago
I asked ChatGPT (GPT-4) the same thing; look at the response:

"Let's break down the process.

Encoders in Transformers: They take the input sequence and process it through multiple layers of self-attention and feed-forward networks. The final output from the last encoder is used as the "Keys" (K) and "Values" (V) for the subsequent decoder.

Decoders in Transformers: Decoders also have multiple layers. In the first self-attention mechanism of the decoder, the target sequence (for example, the partially generated translation) attends to itself. This is analogous to the self-attention in the encoder but with a slight difference: in the decoder's self-attention, future tokens are masked so they can't be attended to. This ensures that the prediction for a particular word doesn't depend on future words in the sequence.

In the subsequent multi-head attention mechanism, the "Queries" (Q) come from the decoder's previous layer, and they attend to the "Keys" (K) and "Values" (V) from the encoder's output. This essentially means that the decoder is using the information from the encoder to help generate the next token in the sequence.

So, your statement is correct: "Value and Key from Encoder with Query from Decoder are combined.""
@yugiblox3274 1 year ago
Unless it’s a decoder only transformer
@joachimheirbrant1559 7 months ago
Indeed, it works like this: the dot product of the keys and queries captures the relation between the input and the already-generated output. If K and Q both came from the encoder, it wouldn't capture the relation between the input and the already-generated output.
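To make the corrected wiring concrete, here is a minimal single-head NumPy sketch of the encoder-decoder ("cross") attention this thread describes, with made-up dimensions and random matrices standing in for the learned weights:

```python
import numpy as np

d_model, src_len, tgt_len = 8, 5, 3
rng = np.random.default_rng(0)

enc_out = rng.normal(size=(src_len, d_model))  # output of the encoder stack
dec_x = rng.normal(size=(tgt_len, d_model))    # output of the decoder's masked self-attention

# Learned projection matrices (random here, for shape illustration only).
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q = dec_x @ W_q    # queries come from the DECODER
K = enc_out @ W_k  # keys come from the ENCODER
V = enc_out @ W_v  # values come from the ENCODER

scores = Q @ K.T / np.sqrt(d_model)  # (tgt_len, src_len); no causal mask needed here
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ V  # (tgt_len, d_model): each target position mixes encoder values
print(out.shape)   # (3, 8)
```

The Q @ K.T score matrix is (target length x source length), which is exactly why Q must come from the decoder side: each generated position scores every input position.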
@NockyLucky 12 days ago
This is incredible! You explained transformers so well!
@lakshman587 3 months ago
Man this guy made a video on Transformers 4 years ago!! You are awesome!! Perfect explanation!!!!! Thanks a lot for the video!
@TheUmaragu 11 months ago
A complex process. I need to listen to this multiple times to fully understand it.
@vision-unscripted 6 months ago
Same here
@abail7010 1 year ago
I have been struggling with this architecture for an eternity now and this is the first time I really understood what's going on in this graphic. Thank you so much for this nice and clear explanation!
@leif1075 1 year ago
What about the architecture made you struggle, if I may ask?
@Dexter01 4 years ago
This tutorial is absolutely brilliant. I have to watch it again and read the illustrated guide; there is so much info!! Thank you!!
@nicohambauer 3 years ago
Strongly agree!
@jenishah9825 4 years ago
This video marks an end to my search for one place explanation of Transformers. Thanks a lot for putting this up! :)
@lifewhimsy 1 year ago
This is THE BEST transformer video I have encountered.
@shahzebhafeez1749 3 months ago
This 15 minute video is the best explanation of transformers I have found.
@udbhavprasad3521 4 years ago
Honestly this is the best explanation I've ever seen on transformers and attention
@yishaibasserabie5765 1 year ago
This is by far the best explanation I’ve ever seen on Transformer Networks. Very very well done
@valentinfontanger4962 2 years ago
I used multiple sources to learn about the transformer architecture. Regarding the decoder part, you really helped me understand what the input is and how the different operations are performed! Thanks a lot :)
@jaceju283 10 months ago
At 7:27, on the right, the attention weights form a 4×4 matrix while the value matrix is 3×4; a 4×3 value matrix would be more appropriate.
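A quick NumPy check of the shapes being discussed, with hypothetical dimensions chosen to mirror the comment (4 tokens, d_k = 3):

```python
import numpy as np

seq_len, d_k = 4, 3  # 4 tokens, 3-dimensional queries/keys/values
Q = np.random.randn(seq_len, d_k)
K = np.random.randn(seq_len, d_k)
V = np.random.randn(seq_len, d_k)

weights = Q @ K.T   # (4, 4): one score per query-key pair
out = weights @ V   # (4, 4) @ (4, 3) -> (4, 3)
print(weights.shape, out.shape)

# A 3x4 value matrix would not compose: (4, 4) @ (3, 4) raises a ValueError.
```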
@gudisamahesh 8 months ago
This seems to be one of the best videos on Transformers
@architkhare729 4 years ago
Wow, this was great. I have watched a number of videos on transformer models, and they have all contributed to my understanding, but this puts everything together so neatly. Amazing, please keep making more such videos.
@mitch7w 1 year ago
Best explanation I've seen so far, thanks so much! 😃
@martian.07_ 2 years ago
Best video ever on transformers, trust me, I tried others. Only positional encoding is missing, but the rest is gold. Thank you.
@akhileshm8089 3 years ago
This is the best explanation of transformers anywhere on the web.
@karthikm1858 2 years ago
In one word: excellent. No other explanation on YouTube will match this... stunned.
@cocoph 5 months ago
This is the best explanation of transformer models; please keep this channel going. There are lots of models that still need explaining!
@toddwmac 1 year ago
If you only knew how relevant this would be 2 years later. Thank you!
@alvinphantomhive3794 3 years ago
Now I have two great heroes who explain complex concepts using mind-blowing visualizations: first is 3b1b for complex math topics, then Michael Phi for complex machine learning architectures! Just wow... salute, sir! Thank you so much!
@BlueSky-b9o 1 month ago
I didn't see any other video explaining this concept this beautifully and clearly. The visual animations also helped so much. Thank you 🙏🏼
@danicarovo8818 1 year ago
Thank you for this amazing explanation. It really helped after an insufficient explanation in my DL lecture. The prof did not even mention that the final part is a classification over the vocabulary for an NLP task!
@Waterlmelon 9 months ago
Amazing explanation. Honestly, this is the first time I understand how transformers work.
@user-oq5ki8ml2r 1 month ago
The animation of the matrices actually helped a lot! Quality explanation video!
@mauriciolopes8502 1 year ago
This was not just the best transformer explanation, but the best explanation in general that I've ever seen. You know what to abstract away for a fluid and clear explanation. Congratulations. You should do more videos like this.
@AIPowered 1 year ago
Check out our video on transformers as well, and please provide feedback on our ai4u channel.
@manikantansrinivasan5261 1 year ago
This is literally the best explanation of Transformers I have ever seen!
@-long- 3 years ago
My first read from Michael Phi was "Stop Installing Tensorflow using pip for performance sake!" on the Towards Data Science blog (as I recall, you were "Michael Nguyen" at that time). My first impression was "oh, this guy is good at explanation". Then I read several of his blogs, and now here I am. I never knew you had a channel. You are one of the best educators I've ever known. Thanks so much.
@MahlerLab 2 years ago
Thank you so much for the step by step explanation. This is a good starting point for ML dummies like me to get a grasp on the transformer model architecture.
@dnaphysics 1 year ago
Good explanation. What boggles my mind is that this architecture can not only produce reasonable sentences, but there can also be some logic going on behind the sequence of sentences, as we've seen in ChatGPT. It is mind-boggling that some amount of deeper conceptualization must be represented in the embeddings too! Amazing.
@DS-nv2ni 1 year ago
No, and it's not even understandable how you got to such a conclusion.
@usamasaddique7580 1 year ago
I watched many videos, but I'm 100% sure that this video is the best among them at explaining transformers.
@anshuljain2258 2 years ago
Half of it went over my head. Just beautiful. I'll watch it many more times. That's how I know the content is goooood.
@jeremyhofmann7034 3 years ago
This transformer tutorial is more than meets the eye
@legendfpv 3 years ago
lmao
@mohitjoshi8818 2 years ago
Your videos are the BEST. I understood RNNs, LSTMs, GRUs, and transformers in less than an hour. Thank you.
@siddhanthhegde227 4 years ago
Bro, you are seriously my god 😭😭🙏🙏... thanks a lot for this video... no one, literally no one, could teach me transformers, and your video just got drilled into my mind... please make other videos like this for BERT, GPT, XLNet, XLM, etc. I'm really thankful to you.
@JulianHarris 11 months ago
Amazing. I still don't really understand how the Q, K, and V values are calculated, but I learnt a lot more about this seminal paper than other explanations provided. Thank you! 🙏
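For anyone stuck on the same question: Q, K, and V are not computed by anything exotic; each is the same input matrix passed through its own learned linear layer. A hedged NumPy sketch, with random matrices standing in for the learned weights and made-up dimensions:

```python
import numpy as np

seq_len, d_model = 4, 8
X = np.random.randn(seq_len, d_model)  # token embeddings + positional encoding

# Three separate learned weight matrices (random stand-ins here).
W_q = np.random.randn(d_model, d_model)
W_k = np.random.randn(d_model, d_model)
W_v = np.random.randn(d_model, d_model)

# Same input X, three different linear projections.
Q, K, V = X @ W_q, X @ W_k, X @ W_v
print(Q.shape, K.shape, V.shape)  # each (4, 8)
```

Training adjusts W_q, W_k, and W_v by backpropagation; the "meaning" of queries, keys, and values emerges from how the scores Q @ K.T are then used to weight V.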
@theaihacker777 4 years ago
Correction: the sine and cosine functions for the positional embedding are applied to the input embedding dimension, not the time steps! Oof! For the readers, check out the written version of the illustrated guide to transformers here: towardsdatascience.com/illustrated-guide-to-transformers-step-by-step-explanation-f74876522bc0
@salimbo4577 4 years ago
Thanks, man. I was gonna ask, have you written a paper for it like you did for LSTMs?
@Cat-vs7rc 4 years ago
Also, the positional embedding in the illustrated guide is incorrect. It's alternating between even and odd.
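For reference alongside these corrections, a small NumPy sketch of the sinusoidal positional encoding as defined in the "Attention Is All You Need" paper: sine on even embedding indices, cosine on odd ones, varying along the embedding dimension rather than the time steps. The dimensions are illustrative:

```python
import numpy as np

def positional_encoding(max_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encoding; assumes d_model is even."""
    pos = np.arange(max_len)[:, None]          # (max_len, 1) token positions
    two_i = np.arange(0, d_model, 2)[None, :]  # even embedding indices 0, 2, 4, ...
    angles = pos / np.power(10000.0, two_i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # sine on even embedding indices
    pe[:, 1::2] = np.cos(angles)  # cosine on odd embedding indices
    return pe

pe = positional_encoding(max_len=50, d_model=8)
print(pe.shape)  # (50, 8): added element-wise to the input embeddings
```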
@davidkhassias4876 4 years ago
Your video is the best explanation of Transformers I have ever seen
@ombelote8264 4 years ago
How do we actually split the queries, keys, and values into multiple heads for multi-headed attention?
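Not an official answer, but the standard mechanics: nothing is split into separate "values"; the projected Q, K, and V matrices are reshaped so the model dimension is divided among the heads, each head attends within its own smaller subspace, and the head outputs are concatenated back. A rough NumPy sketch, assuming d_model is divisible by num_heads:

```python
import numpy as np

seq_len, d_model, num_heads = 4, 8, 2
d_head = d_model // num_heads  # each head works in a d_model/num_heads subspace

Q = np.random.randn(seq_len, d_model)  # one of the projected matrices

# The "split" is just a reshape: (seq, d_model) -> (heads, seq, d_head).
Q_heads = Q.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
print(Q_heads.shape)  # (2, 4, 4): each head sees its own slice of every token

# After per-head attention, the heads are concatenated back together.
merged = Q_heads.transpose(1, 0, 2).reshape(seq_len, d_model)
print(np.allclose(merged, Q))  # True: the reshape round-trips
```

K and V are reshaped the same way, and a final learned linear layer mixes the concatenated head outputs.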
@masterswag2870 3 years ago
Haha, thanks Michael. I was like "wtf, everything I learned is wrong" at 4:18 until I scrolled down.
@mariosessa3828 2 years ago
Thanks for your explanation, very clean and well built in every argument about transformers. I was so lucky to come across this video randomly on YouTube. Good job!
@danilob2b2 4 years ago
I watched it a second time, not to better understand the video, but to appreciate it. It is very well done and pretty clear. Thank you.
@TeresaVentaja 1 year ago
I am not technically skilled in ML, and I understood at a high level how this works. I feel so grateful for this video 🙏
@SriNiVi 4 years ago
One of the clearest explanations of transformers I have ever seen.
@tabindahayat3492 5 months ago
Woah! Exquisite. It's a 15-minute video, but I spent over an hour taking notes and understanding it. You have done a great job; keep it up. Thank you so much! Such explanations are rare. ;)
@janeditchfield3976 1 year ago
Your explanation of Q, K, and V is the thing that finally did it for me. I get it!
@stevey7997 2 years ago
This is by far the best explanation that I have seen.
@kolla_teja 3 years ago
This video clears up all my doubts about the transformer model. Please make some more detailed videos on other topics. Waiting for your reply, please.
@shivanandroy5339 4 years ago
That's absolute genius! You made such a mighty concept so easy to understand.
@theaihacker777 4 years ago
Thank you. Appreciate the kind words 🙏
@InturnetHaetMachine 4 years ago
Wow, by far the best explanation. Teaching really is an art, where even experts with their blind spots can be very bad at explaining (even if they know the material well themselves). Well done. It's really too bad your previous video was a year ago. I really hope you make more videos explaining other deep learning papers, such as Faster R-CNN etc. Thanks for posting this.
@akarshrastogi3682 2 years ago
Yes, it was through this video that I finally got my head around transformers. Now I can read through more resources and the paper itself.
@prathmeshdeval8773 2 months ago
"In 'encoder-decoder attention' layers, the queries come from the previous decoder layer, and the keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [38, 2, 9]." -- Attention Is All You Need
@frederickyan 2 years ago
That's the clearest and briefest explanation of the key idea of the 'transformer', especially of how it works and why self-attention is called self-attention.
@garymail4393 1 year ago
This is the best video I have seen about transformers. Very articulate and concise. Great job
@ROBOT_Kding 1 year ago
Thanks! Thank you so much. I've looked up all the resources on the Internet but was still confused by the mechanism. This is a really clear and detailed explanation.
@helloWorld01010 1 year ago
You did an amazing job explaining the workflow... I looked for more similar stuff... please continue... I hope you will be back to help people like me.
@Auditor1337 1 year ago
While I still have some questions, this is a pretty good explanation. I mean, I actually have an idea of how this works! Gonna watch it like 2 more times.
@StratosFair 1 year ago
Best video on transformers on YouTube, thank you so much.
@atirrup3470 2 years ago
12:55 A small mistake: K and V should be the encoder stack's output, and Q is the first masked multi-head attention sublayer's output. Still, this guide is really awesome! Thanks for your effort, bro!
@davefar2964 1 year ago
Thanks, I particularly liked that you went into as much detail for the decoder as for the encoder.
@MaptaGss 3 years ago
Hands down the best explanation of transformer models!
@DrMarcoArmenta 1 year ago
You literally solved ALL my doubts about transformers
@DiariesOfVishal 1 year ago
Thanks for the effort you put into making the animation on the slide.
@7justfun 3 years ago
Quick Q: at 5:58, on the left side of the slide, Q, K, and V are inputs to the linear layers, but on the right it looks like the positional encodings are passed to the linear layers and Q, K, V are the outputs of those layers. I guess the left is just a typo and the right side is the correct representation. Can you please help confirm? By the way, this is the best explanation I have found; you are just amazing. Thank you so much, brother.
@busuyiadebayo851 1 year ago
You are correct. That was a typo. The positionally encoded embeddings are passed to each of the three linear layers to produce Q, K, and V respectively.
@AISlopForHumans 1 year ago
Bro, you need to post more nowadays. We need you!!
@Hooowwful 3 years ago
Favourite video on the topic! I'm reasonably knowledgeable about ML, but the other 5-10 videos I've tried so far all resulted in increased confusion. This is clear. Nice one 👍🏿
@lone0017 4 years ago
Brilliant explanation with visually intuitive animations! I rarely comment on or subscribe to anything, but this time I instantly did both after watching the video. And how coincidental that this was uploaded on my birthday. Hope to see more videos from you.
@daesoolee1083 3 years ago
The best video ever for Transformer.
@vivekmankar5823 2 years ago
This is the best explanation on transformers. Thank you so much for the video.
@mrowkenesser 2 years ago
Man, thanks for this video. Reading a paper as a newbie is super difficult, but explanations like the ones you've posted for key, value, and query, as well as the reasoning for masking, are very, very helpful. I subscribed to your channel and am looking forward to new stuff.
@Scranny 4 years ago
Wow Michael, this is a superb explanation of the transformer architecture. You even went into detail about the meaning of the Q, K, V vectors and the masking concepts, which were hard for me to grasp. I bounced around through 3-4 videos about the transformer architecture, and for each one I claimed it was the best explanation on the topic. But your video takes the cake and explains it in half the time of the others. Thank you for sharing! Also, great job on the visuals, which are on par with 3blue1brown's animations.
@theaihacker777 4 years ago
Thank you 😃
@Controllerhead 1 year ago
Incredible video! I hope you are doing well and find the time to make more, especially with the recent popularity explosion of AI.
@piyalikarmakar5979 3 years ago
I must say this is the best explanation I have ever seen... no confusion, no doubt left in my mind. Thanks a lot, sir. It would be helpful if you kindly did a video on language models like GPT and BERT.
@MrMIB983 4 years ago
The best transformer video, nice summary. Just some of the matrix multiplication outcomes have wrong dimensions :/
@shaktisd 1 year ago
One of the best explanations of the attention mechanism.
@Lolzor87a 3 years ago
Wow. This is a really good explanation! I don't have much NLP background except RNN/LSTM and things before DL (n-grams), but I wanted to know more about the attention mechanism for robotics applications (my field). Most other explanations either skimmed over the mathematics or used NLP-specific nomenclature/concepts that made them hard to understand for non-NLP people. This was some good stuff! Much appreciated, and keep up the good work!
@iskhwa 2 years ago
I keep coming back to this video. It's great.
@weibangzhang3994 1 year ago
In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder.
@mostafaibrahim2911 4 years ago
Best transformers explanation I have seen, thank you very much. Liked the video and subscribed!! Keep it up :))
@tingyizhou8736 2 years ago
This is one of the best introductory videos I've seen on this subject. Thank you!
@federicaf 4 years ago
Amazing! Thank you so much. Great quality of the video and content.
@yangzou5124 2 years ago
Thanks a lot for the intuitive introduction. But at 6:29, shouldn't we transpose the Q, K matrices here? In the video, multiplying a 3x4 matrix by a 4x3 matrix leads to a 3x3 score matrix instead of a 4x4 one.
@gkirangk4946 3 years ago
Wow..one of the best videos I have watched on transformers...so simple to grasp. Please make more videos.
@manugond6280 4 years ago
wow...really great explanation, now reading the original paper will be much easier. Thank you.
@sloanNYC 1 year ago
Incredibly interesting. It is amazing how much processing and storage is required to achieve this.
@varunsingh2103 2 years ago
Best explanation, brilliant!
@sourabhyadav5716 1 year ago
Best illustration of the transformer. Subscribed for many more to come.
@vitsujiaura6359 1 year ago
This is the best explanation of a transformer, or of any architecture, I have ever watched. Highly impressive how such a complex topic is dismantled into visually appealing and understandable explanations.
@eshneto 1 year ago
Feels like I can implement it now, and I'm a beginner in DL.
@colinmaharaj50 1 year ago
Trying to figure out where to start, like an academic approach. I have extensive experience in C/C++, so I want to use that as the driver to learn and apply this, and whatever I need to move ahead.
@princezuko7073 5 months ago
On the same path here. How’s it going for you?
@tanveerulmustafa9232 7 months ago
This explanation is INCREDIBLE!!!
@TimothyParker1 3 years ago
Great deep dive into transformers. Helped me understand this architecture.
@TheHirochima92 3 years ago
A really good video that helped me understand the transformer quicker, thanks! After reading a little more, I think there is a small mistake about the roles of the encoder outputs and the decoder's first multi-headed attention outputs. The encoder output is supposed to be the Key and the Value, and the decoder's first multi-headed attention output the Query (you say that the encoder output is Q and K and the decoder's first multi-headed attention output is V). A small error that got me thinking a little too long about how the dimensions could match at the end of the multi-headed attention ^^ Thanks again for the video, it was clear as crystal!
@andrewblair2025 2 years ago
I agree. This is confirmed in section 3.2.3, bullet point 1, of (Vaswani et al., "Attention Is All You Need", 2017).
@鸩42斑 3 years ago
This is incredible. I've been watching videos and reading papers about transformers and attention for days; this is the best material so far.
@flwi 1 year ago
That's a very good explanation imo! Thanks for taking the time to produce such a gem.
@luke-hk6st 2 years ago
Explains look-ahead masking very well.
@revenantnox 2 years ago
This was super helpful, thank you. I read the original paper and absorbed like 70% of it, but this clarified several things.
@learnwithfaisu5866 1 year ago
Sir, you are the best teacher at explanations ever.
@alexanderblumin6659 3 years ago
Very deep explanation; a brilliant talent for giving somebody intuition.
@nasheennur3210 26 days ago
Best explanation 😊