Self Attention in Transformer Neural Networks (with Code!)

75,293 views

CodeEmporium

1 day ago

Let's understand the intuition, math, and code of Self Attention in Transformer Neural Networks (a minimal code sketch follows the timestamps below)
ABOUT ME
⭕ Subscribe: kzbin.info...
📚 Medium Blog: / dataemporium
💻 Github: github.com/ajhalthor
👔 LinkedIn: / ajay-halthor-477974bb
RESOURCES
[ 1🔎] Code for video: github.com/ajhalthor/Transfor...
[2 🔎] Transformer Main Paper: arxiv.org/abs/1706.03762
[3 🔎] Bidirectional RNN Paper: deeplearning.cs.cmu.edu/F20/d...
PLAYLISTS FROM MY CHANNEL
⭕ ChatGPT Playlist of all other videos: • ChatGPT
⭕ Transformer Neural Networks: • Natural Language Proce...
⭕ Convolutional Neural Networks: • Convolution Neural Net...
⭕ The Math You Should Know : • The Math You Should Know
⭕ Probability Theory for Machine Learning: • Probability Theory for...
⭕ Coding Machine Learning: • Code Machine Learning
MATH COURSES (7 day free trial)
📕 Mathematics for Machine Learning: imp.i384100.net/MathML
📕 Calculus: imp.i384100.net/Calculus
📕 Statistics for Data Science: imp.i384100.net/AdvancedStati...
📕 Bayesian Statistics: imp.i384100.net/BayesianStati...
📕 Linear Algebra: imp.i384100.net/LinearAlgebra
📕 Probability: imp.i384100.net/Probability
OTHER RELATED COURSES (7 day free trial)
📕 ⭐ Deep Learning Specialization: imp.i384100.net/Deep-Learning
📕 Python for Everybody: imp.i384100.net/python
📕 MLOps Course: imp.i384100.net/MLOps
📕 Natural Language Processing (NLP): imp.i384100.net/NLP
📕 Machine Learning in Production: imp.i384100.net/MLProduction
📕 Data Science Specialization: imp.i384100.net/DataScience
📕 Tensorflow: imp.i384100.net/Tensorflow
TIMESTAMPS
0:00 Introduction
0:34 Recurrent Neural Networks Disadvantages
2:12 Motivating Self Attention
3:34 Transformer Overview
7:03 Self Attention in Transformers
7:32 Coding Self Attention
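As a companion to the description above, here is a minimal NumPy sketch of the scaled dot-product self-attention the video builds up to. The sequence length, the 8-dimensional q/k/v vectors, and the causal mask are illustrative assumptions, not code from the video's repository:

```python
import numpy as np

def softmax(x, axis=-1):
    # Shift by the max for numerical stability, then normalize along `axis`
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(q, k, v, mask=None):
    # q, k, v: (seq_len, d_k) arrays, one row per token
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)     # pairwise token similarities, scaled
    if mask is not None:
        scores = scores + mask          # -inf entries zero out "future" positions
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ v                  # context-aware vectors, one per token

L, d_k = 4, 8                           # e.g. the 4 tokens of "My name is Ajay"
q, k, v = (np.random.randn(L, d_k) for _ in range(3))
mask = np.triu(np.full((L, L), -np.inf), k=1)   # causal (decoder-style) mask
print(self_attention(q, k, v, mask).shape)      # (4, 8)
```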

Comments: 149
@CodeEmporium 1 year ago
If you think I deserve it, please consider liking the video and subscribing for more content like this :)
@meguellatiyounes8659 1 year ago
Do you have any idea how transformers generate new data?
@15jorada 1 year ago
You are amazing man! Of course you deserve it! You are building transformers from the ground up! That's insane!
@vipinsou3170 8 months ago
@meguellatiyounes8659 Using the decoder 😮😮😊
@rainmaker5199 11 months ago
This is great! I've been trying to learn attention but it's hard to get past the abstraction in a lot of the papers that mention it, much clearer this way!
@nikkilin4396 2 months ago
It's one of the best videos I have watched. The concepts are explained very well, especially with the code.
@marktahu2932 11 months ago
I have learnt so much between yourself, ChatGPT, and Alexander & Ava Amini at MIT 6.S191. Thank you all.
@tonywang7933 1 year ago
Thank you so much. I searched so many places; this is the first one where a nice person is finally willing to spend the time to really dig in step by step. I now value this channel as highly as Fireship.
@CodeEmporium 1 year ago
Thanks for the compliments and glad you are sticking around!
@ChrisCowherd 7 months ago
Fantastic explanation! Wow! You have a new subscriber. :) Keep up the great work
@jeffrey5602 1 year ago
What's important is that for every token generation step we always feed the whole sequence of previously generated tokens into the decoder, not just the last one. So you start with the start token and generate a new token, then feed the start token plus the new token into the decoder, basically just appending each generated token to the sequence of decoder inputs. That might not have been clear in the video. Otherwise great work. Love your channel!
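A runnable toy version of the loop described above, with a stand-in decoder; `toy_decoder`, the vocabulary size, and the token ids are hypothetical placeholders, not the video's actual API:

```python
import numpy as np

def toy_decoder(prefix):
    # Stand-in for a trained decoder: one row of logits over a 10-token
    # vocabulary for each position in the prefix (purely illustrative).
    rng = np.random.default_rng(len(prefix))
    return rng.standard_normal((len(prefix), 10))

BOS_ID, EOS_ID, MAX_LEN = 1, 2, 20
tokens = [BOS_ID]                       # start with only the start-of-sequence token
for _ in range(MAX_LEN):
    logits = toy_decoder(tokens)        # feed the ENTIRE prefix, not just the last token
    next_id = int(logits[-1].argmax())  # greedy pick of the next token
    tokens.append(next_id)              # append, then loop with the longer prefix
    if next_id == EOS_ID:
        break
print(tokens)
```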
@simonebonato5881 8 months ago
One video to understand them all! Dude, thanks! I've tried to watch like 10 other videos on transformers and attention; yours was really super clear and much more intuitive!
@CodeEmporium 8 months ago
Thanks so much for this compliment! Means a lot :)
@pocco8388 9 months ago
Best content I've ever seen. Thanks for this video.
@user-gq5rl1kb7n 1 year ago
I usually don't write comments, but this channel really deserves one! Thank you so much for such a great tutorial. I watched your first video about Transformers and the Attention mechanism, which was really informative, but this one is even more detailed and useful.
@CodeEmporium 1 year ago
Thanks so much for the compliments! This is the first in a series of videos called “Transformers from scratch”. Hope you’ll check the rest of the playlist out
@SOFTWAREMASTER 1 year ago
I was legit searching for self-attention concept vids and thinking that it sucked that you didn't cover it yet. And voila, here we are. Thank you so much for uploading!!
@CodeEmporium 1 year ago
Glad I could deliver. Will be uploading more such content shortly :)
@rajpulapakura001 5 months ago
This is exactly what I needed! Can't believe self-attention is that simple!
@kotcraftchannelukraine6118 5 months ago
I still don't understand how to perform a backward pass through self-attention.
@prashantlawhatre7007 1 year ago
Waiting for your future videos. This was amazing, especially the masked attention part.
@CodeEmporium 1 year ago
Thanks so much! Will be making more over the coming weeks
@noahcasarotto-dinning1575 5 months ago
Best video explaining this that I've seen by far
@dataflex4440 1 year ago
This has been a most wonderful series on this channel so far
@CodeEmporium 1 year ago
Thanks a ton! Super glad you enjoyed the series :D
@srijeetful 2 months ago
Extremely well explained. Kudos!!!!
@PraveenHN-zj3ny 1 month ago
Very happy to see Kannada here. Great 😍 Love from Kannadigas
@MahirDaiyan7 1 year ago
Great! This is exactly what I was looking for in all of the other videos of yours
@CodeEmporium 1 year ago
Thanks for the comment! There is more to come :)
@deepalisharma1327 8 months ago
Thank you for making this concept so easy to understand. Can’t thank you enough 😊
@CodeEmporium 8 months ago
My pleasure. Thank you for watching
@lawrencemacquarienousagi789 1 year ago
Wonderful work you've done! I really love your video and have studied it twice. Thank you so much!
@CodeEmporium 1 year ago
Thanks so much for watching! More to come :)
@ayoghes2277 1 year ago
Thanks a lot for making the video!! This deserves more views.
@CodeEmporium 1 year ago
Thanks for watching. Hope you enjoy the rest of the playlist as I code the entire transformer out !
@muskanmahajan04 10 months ago
The best explanation on the internet, thank you!
@CodeEmporium 10 months ago
Thanks so much for the comment. Glad you liked it :)
@becayebalde3820 7 months ago
This is pure gold man! Transformers are complex but this video really gives me hope.
@pratyushrao7979 4 months ago
What are the prerequisites for this video? Do we need to know about the encoder-decoder architecture beforehand? The video feels like I jumped right into the middle of something without any context. I'm confused.
@VadimChes 1 month ago
@pratyushrao7979 There are playlists for different topics.
@bradyshaffer3302 1 year ago
Thank you for this very clear and helpful demonstration!
@CodeEmporium 1 year ago
You are so welcome! And be on the lookout for more :)
@chrisogonas 1 year ago
Awesome! Well illustrated. Thanks
@shivamkaushik6637 1 year ago
With all my heart, you deserve a lot of respect. Thanks for the content. Damn, I missed my metro station because of you.
@CodeEmporium 1 year ago
Hahahaha your words are too kind! Please check the rest of the “Transformers from scratch” playlist for more (it’s fine to miss the metro for education lol)
@shailajashukla5841 2 months ago
Excellent, how well you explained. No other video on YouTube explained it like this. Really good job.
@debjanidas5786 9 days ago
search CampusX
@JBoy340a 1 year ago
Great walkthrough of the theory and then relating it to the code.
@CodeEmporium 1 year ago
Thanks so much! Will be making more of these over the coming weeks
@chessfreak8813 5 months ago
Thanks! You deserve more recognition; you're an underdog!
@amiralioghli8622 7 months ago
Thank you so much for taking the time to code and explain the transformer model in such detail. I followed your series from zero to hero. You are amazing, and if possible please do a series on how transformers can be used for time series anomaly detection and forecasting. Someone on YouTube really needs to cover it! Thanks in advance.
@sockmonkeyadam5414 11 months ago
You have saved me. Thank you.
@softwine91 1 year ago
What can I say, dude! God bless you. This is the only content on the whole of YouTube that really explains the self-attention mechanism in a brilliant way. Thank you very much. I'd like to know if the key, query, and value matrices are updated via backpropagation during the training phase.
@CodeEmporium 1 year ago
Thanks for the kind words. These matrices I mentioned in the code represent the actual data. So no. However, the 3 weight matrices that map a word vector to Q,K,V are indeed updated via backprop. Hope that lil nuance makes sense
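To make that nuance concrete, a small sketch under assumed sizes: `x` (the word vectors) is data, while the three projection matrices are the trainable parameters that backprop updates:

```python
import numpy as np

L, d_model, d_k = 4, 512, 64            # sequence length, model width, q/k/v width
x = np.random.randn(L, d_model)         # word vectors: data, not trained weights

W_q = np.random.randn(d_model, d_k)     # these three matrices ARE trainable
W_k = np.random.randn(d_model, d_k)     # parameters, updated by backpropagation
W_v = np.random.randn(d_model, d_k)

q, k, v = x @ W_q, x @ W_k, x @ W_v     # per-token query/key/value vectors
```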
@picassoofai4061 1 year ago
I definitely agree.
@shaktisd 4 months ago
Excellent video. If you can, please make a 'hello world' of self-attention, like first showing a PCA representation before self-attention and after self-attention, to show how context impacts the overall embedding.
@junior14536 1 year ago
My god, that was amazing, you have a gift my friend; Love from Brazil :D
@CodeEmporium 1 year ago
Thanks a ton :) Hope you enjoy the channel
@pulkitmehta1795 1 year ago
Simply wow..
@PaulKinlan 1 year ago
This is brilliant; I've been looking for a bit more hands-on demonstration of how the process is structured.
@CodeEmporium 1 year ago
Thanks so much! Happy you liked it :)
@maximilianschlegel3216 1 year ago
This is an incredible video, thank you!
@CodeEmporium 1 year ago
Thanks so much for watching and commenting!
@SIADSrikanthB 1 month ago
I really like how you use Kannada language examples in your explanations.
@mamo987 1 year ago
Amazing work! Very glad I subscribed
@CodeEmporium 1 year ago
Thanks so much for commenting!
@rajv4509 1 year ago
Absolutely brilliant! Thumba chennagide (Kannada: very well done) :)
@CodeEmporium 1 year ago
Thanks a ton! Super glad you like this. I hope you like the rest of this series :)
@AI-xe4fg 1 year ago
Good video, bro. I was studying Transformers this week and was still a little confused before I found your video. Thanks.
@CodeEmporium 1 year ago
Thanks for the kind words. I really appreciate it :)
@nandiniloomba 1 year ago
Thank you for teaching this.❤
@CodeEmporium 1 year ago
My pleasure! Hope you enjoy the series
@Slayer-dan 1 year ago
Huge respect ❤️
@CodeEmporium 1 year ago
Thanks so much!
@faiazahsan6774 1 year ago
Thank you for explaining in such an easy way. It would be great if you could upload some code on the GCN algorithm.
@CodeEmporium 1 year ago
I shall explore that possibility!
@paull923 1 year ago
Thx for your efforts!
@CodeEmporium 1 year ago
Super welcome :)
@sriramayeshwanth9789 8 months ago
You made me cry, brother.
@yonahcitron226 9 months ago
This is amazing!
@CodeEmporium 9 months ago
Thanks a lot!
@jamesjang8389 6 months ago
Amazing video! Thank you😊😊
@CodeEmporium 6 months ago
You are very welcome
@jazonsamillano 1 year ago
Great video. Thank you very much.
@CodeEmporium 1 year ago
Thanks so much!
@FelLoss0 10 months ago
Dear Ajay, thank you so much for your videos! I have a quick question here. Why did you transpose the values in the softmax function? Also... why did you specify axis=-1? I'm a newbie at this and I'd like to have strong and clear foundations. Have a lovely weekend :D
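One way to see what `axis=-1` does (a sketch of the idea, not the video's exact code): with an attention-score matrix of shape `(seq_len, seq_len)`, normalizing over the last axis makes each row a probability distribution over the keys:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # stability shift
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

scores = np.random.randn(4, 4)           # rows: query positions, cols: key positions
weights = softmax(scores, axis=-1)       # axis=-1 normalizes over the keys
print(weights.sum(axis=-1))              # each row sums to 1.0
```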
@yijingcui7736 5 months ago
This is very helpful.
@CodeEmporium 5 months ago
Glad! And thank you!
@varungowtham3002 1 year ago
Namaskara Ajay, I was very happy to learn that you are a Kannadiga! Your videos are turning out very well.
@CodeEmporium 1 year ago
Glad you liked this and thanks for watching! :)
@dataflex4440 1 year ago
Brilliant Mate
@CodeEmporium 1 year ago
Thanks a ton! :)
@gabrielnilo6101 11 months ago
I stop the video sometimes and roll it back some seconds to hear you explaining something again and I am like: "No way that this works, this is insane", some explanations on AI techniques are not enough and yours are truly simple and easy to understand, thank you. Do you collab with anyone when making these videos, or is it done all by yourself?
@CodeEmporium 11 months ago
Haha yea. Things aren’t actually super complicated. :) I make these videos on my own. Scripting, coding, research, editing. Fun stuff
@naziadana7885 1 year ago
Thank you very much for this great video! Can you please upload a video on Self Attention code using Graph Convolutional Network (GCN)?!
@CodeEmporium 1 year ago
I’ll look into this at some point. Thanks for the tips.
@imagiro1 8 months ago
Got it, thank you very much, but one question: What I still don't understand: We are talking about neural networks, and they are trained. So all the math you show here, how do we (know|make sure) that it actually happens inside the network? You don't train specific regions of the NN to specific tasks (like calculating a dot product), right?
@picassoofai4061 1 year ago
Mashallah, man you are a rocket.
@CodeEmporium 1 year ago
Thanks for the kind words :)
@li-pingho1441 1 year ago
You saved my life!!!!!
@CodeEmporium 1 year ago
It’s what I do best :)
@pranayrungta 10 months ago
Your videos are way better than the Stanford lecture CS224N.
@CodeEmporium 10 months ago
Words I am not worthy of. Thank you :)
@rujutaawate5412 9 months ago
Thanks, @CodeEmporium / Ajay for the great explanation! One quick question: can you please explain how the true values of Q, K, and V are actually computed? I understand that we start with random initialization, but do these get updated through something like backpropagation? If you already have a video on this then it would be great if you can state the name/redirect! Thanks once again for helping me speed up my AI journey! :)
@CodeEmporium 9 months ago
That's correct, backprop will update these weights. For exact details, you can continue watching this playlist "Transformers From Scratch" where we will build a working transformer. This video was the first in that series. Hope you enjoy it :)
@dickewurstfinger9093 4 months ago
Really great video, but why do the Q, K, V vectors have dim 8? I know it's random in this video, but what do the values in the vectors say about the word? Or is it just to "identify" a word in a certain space, like in word embeddings, and give it a certain "id"?
@CodeEmporium 4 months ago
The choice of 8 heads in multi-head attention is simply the choice of a hyperparameter in the main paper. This might be the number they experimented with that got reasonable results. That said, I am confident you shouldn't see drastic differences with small fluctuations of this number. Further, I feel like powers of 2 (such as 1, 2, 4, 8, 16, 32) are usually tried out as these hyperparameters. But as mentioned before, numbers in between may work just as well. I think it's about having enough heads to capture complexity but not too many for slow processing.
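For what it's worth, a shape-level sketch of how a head count interacts with the model width (the sizes are illustrative assumptions):

```python
import numpy as np

batch, seq_len, d_model, num_heads = 1, 4, 512, 8
d_head = d_model // num_heads           # 8 heads x 64 dims per head = 512

x = np.random.randn(batch, seq_len, d_model)
heads = x.reshape(batch, seq_len, num_heads, d_head).transpose(0, 2, 1, 3)
print(heads.shape)                      # (1, 8, 4, 64): attention runs per head
```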
@arunganesan1559 1 year ago
Thanks!
@CodeEmporium 1 year ago
Thanks for the donation! And you are very welcome!
@virtualphilosophyjourney8897 4 months ago
In which phase does the model take the pretrained info to decide the output?
@govindkatyura7485 1 year ago
I have a few doubts:
1. Do we use multiple FFNNs after the attention layer? Suppose we have 100 input words for the encoder; will 100 FFNNs get trained, one for each word? I checked the source code and they were using only one, so I'm confused how one FFNN can handle multiple embeddings, especially with a batch size.
2. In the decoder, do we also pass multiple inputs, just like the encoder layer, especially during training?
@ritviktyagi9221 1 year ago
How do we get the values of the q, k and v vectors after initializing them randomly? Great video btw. Waiting for more such videos.
@CodeEmporium 1 year ago
The weight matrices that map the original word vectors to these 3 vectors are trainable parameters. So they would be updated by backpropagation during training.
@ritviktyagi9221 1 year ago
@CodeEmporium Thanks for the clarification.
@creativeuser9086 1 year ago
How do we actually choose the dimensions of Q, K and V? Also, are they parameters that are fixed for each word in the English language, and do we get them from training the model? That part is a little confusing since you just mentioned that Q, V and K are initialized at random, so I assume they have to change during the training of the model.
@kotcraftchannelukraine6118 5 months ago
Q - query, V - value and K - key
@bhavyageethika4560 7 months ago
Why is it d_k in both Q and K in np.random.randn?
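(One relevant fact here: queries and keys must share the same dimension d_k so the dot products q·k are defined, while values may use a different size d_v; a shape sketch with assumed sizes:)

```python
import numpy as np

L, d_k, d_v = 4, 8, 6
q = np.random.randn(L, d_k)   # queries and keys share d_k so q @ k.T is defined
k = np.random.randn(L, d_k)
v = np.random.randn(L, d_v)   # values may differ; the output inherits d_v
print(((q @ k.T) @ v).shape)  # (4, 6)
```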
@klam77 1 year ago
The "query", "key", and "value" terms come from the world of databases! So how do individual words in "My name is Ajay" each map to their own query, key, and value semantically? That remains a bit foggy. I know you've shown random numbers in the example, but is there any semantic meaning to it? Is this the "embeddings" of the LLM?
@Slayer-dan 1 year ago
Ustad 🙏
@CodeEmporium 1 year ago
Too kind :)
@7_bairapraveen928 1 year ago
Why do we need to stabilise the variance of the attention scores computed from the query and key vectors?
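A quick numeric check of the standard motivation from the paper (assuming unit-variance components for q and k): the raw dot product has variance about d_k, and dividing by sqrt(d_k) brings it back to about 1, which keeps the softmax out of its saturated, vanishing-gradient region:

```python
import numpy as np

d_k = 64
q = np.random.randn(10000, d_k)        # many unit-variance query vectors
k = np.random.randn(10000, d_k)        # many unit-variance key vectors
dots = (q * k).sum(axis=-1)            # one dot product per row
print(dots.var())                      # ~64 (= d_k): scores spread too widely
print((dots / np.sqrt(d_k)).var())     # ~1: scaled scores keep softmax well-behaved
```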
@McMurchie 1 year ago
Hi, I noticed this has been added to the transformer playlist, but there are 2 unavailable tracks. Do I need them in order to get the full end-to-end grasp?
@CodeEmporium 1 year ago
You can follow the order of “transformers from scratch” playlist. This should be the first video in the series. Hope this helps and thanks for watching ! (It’s still being created so you can follow along :) )
@wishIKnewHowToLove 1 year ago
thx
@CodeEmporium 1 year ago
My pleasure :)
@philhamilton3946 1 year ago
What is the name of the textbook you are using?
@klam77 1 year ago
If you watch the video carefully, the URL shows the books are "online" free-access bibles of the field.
@anwarulislam6823 1 year ago
How could someone hack my brain wave and convoluted this by evaluate inner voice? May I know this procedure? #Thanks
@SOFTWAREMASTER 1 year ago
Haha ikr. I felt the same. Was looking for a good Self attention video.
@ayush_stha 1 year ago
In the demonstration, you generated the q, k & v vectors randomly, but in reality, what will the actual source of those values be?
@CodeEmporium 1 year ago
Each of the q, k, v vectors will be a function of each word (or byte-pair encoding) in the sentence. I say a “function” of the sentence since we add positional encoding to the word vectors and then convert them into q, k, v vectors via feed-forward layers. Some of the later videos in this “Transformers from scratch” playlist show code for exactly how it’s created, so you can check those out for more intel :)
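A rough sketch of that pipeline, assuming the sinusoidal positional encoding from the paper; the sizes are illustrative:

```python
import numpy as np

L, d_model, d_k = 4, 512, 64
word_vecs = np.random.randn(L, d_model)   # embeddings for the L tokens

# Sinusoidal positional encoding ("Attention Is All You Need")
pos = np.arange(L)[:, None]
i = np.arange(d_model // 2)[None, :]
angles = pos / 10000 ** (2 * i / d_model)
pe = np.zeros((L, d_model))
pe[:, 0::2], pe[:, 1::2] = np.sin(angles), np.cos(angles)

x = word_vecs + pe                        # inject position information
W_q = np.random.randn(d_model, d_k)       # learned projection (one of three)
q = x @ W_q                               # queries are a function of the sentence
```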
@josephpark2093 10 months ago
I watched the video around 3 times but I still don't understand. Why are these awesome videos so unknown?
@ajaytaneja111 11 months ago
Ajay, I don't think capturing the context of the words 'after' has significance in language modelling, where you are predicting only the next word. For a task like machine translation, yes. Thus I don't think bidirectional RNNs have anything better to offer for language modelling than regular (one-way) RNNs. Let me know what you think.
@jonfe 1 year ago
I still don't understand the difference between Q, K, and V. Can someone explain?
@sometimesdchordstrikes...7876 1 month ago
At 1:41 you said that you want the context of the words that will come in the future, but in the masking part of the video you said that it would be cheating to know the context of the words that will come in the future.
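There is no contradiction, just two different components: the encoder attends bidirectionally (context from both sides), while the decoder must be masked because, at generation time, future tokens do not exist yet. A small sketch of the causal mask:

```python
import numpy as np

L = 4
mask = np.triu(np.full((L, L), -np.inf), k=1)  # -inf strictly above the diagonal
print(mask)
# Row i may attend only to positions 0..i. The encoder skips this mask
# (bidirectional context); the decoder applies it so training matches
# inference, where the future is genuinely unknown.
```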
@SnehaSharma-nl9do 2 months ago
Kannada Represent!! 🖐
@CodeEmporium 2 months ago
Haha! Yes 🙌
@NK-ju6ns 1 year ago
I felt the q, k, v parameters were not explained very well. A search analogy would give a better intuition for these parameters than explaining them as 'what I can offer, what I actually offer'.
@ChethanaSomeone 11 months ago
Seriously, are you from Karnataka? Your accent is so different, dude.
@bkuls 1 year ago
Doing well, guru? I am a Kannadiga too!
@CodeEmporium 1 year ago
Doin super well ma guy. Thanks for watching and commenting! :)
@kotcraftchannelukraine6118 5 months ago
You forgot to show the most important thing: how to train self-attention with backpropagation. You forgot about the backward pass.
@CodeEmporium 5 months ago
This is the first video in a series of videos called “Transformers from scratch”. Later videos show how the entire architecture is trained. Hope you enjoy the videos.
@kotcraftchannelukraine6118 5 months ago
@CodeEmporium Thank you, I subscribed.
@thepresistence5935 1 year ago
Bro, it's 100% better than your PPT videos.
@CodeEmporium 1 year ago
Thanks so much! Just exploring different styles :)
@azursmile 1 month ago
Lots of time on the mask, but none on training the attention matrix 🤔
@venkatsahith6795 7 months ago
Bro, why don't you walk through an example while explaining?
@thechoosen4240 7 months ago
Good job bro, JESUS IS COMING BACK VERY SOON; WATCH AND PREPARE
Multi Head Attention in Transformer Neural Networks with Code!
15:59
The Attention Mechanism in Large Language Models
21:02
Serrano.Academy
73K views
Blowing up the Transformer Encoder!
20:58
CodeEmporium
15K views
Positional Encoding in Transformer Neural Networks Explained
11:54
CodeEmporium
34K views
The math behind Attention: Keys, Queries, and Values matrices
36:16
Serrano.Academy
192K views
Layer Normalization - EXPLAINED (in Transformer Neural Networks)
13:34
Transformers, explained: Understand the model behind GPT, BERT, and T5
9:11
Transformer Encoder in 100 lines of code!
49:54
CodeEmporium
14K views
What are Transformer Models and how do they work?
44:26
Serrano.Academy
91K views
Attention in Neural Networks
11:19
CodeEmporium
199K views
Transformer Neural Networks, ChatGPT's foundation, Clearly Explained!!!
36:15
StatQuest with Josh Starmer
544K views