Attention Is All You Need

665,203 views

Yannic Kilcher

A day ago

Comments: 314
@tanmayjain6791 · 10 months ago
Nobody knew this paper would change the world
@blitz1867 · 7 months ago
Did it?
@yanniammari1491 · 7 months ago
@@blitz1867 Hell yeah it did. It's cited 124k times, so it definitely did.
@JohnDoe-pq8yw · 7 months ago
@@yanniammari1491 And we're just getting started.
@AmanSingh-xk2lv · 7 months ago
@@blitz1867 yes, definitely.
@MM-by6qq · 6 months ago
very true
@finlayl2505 · 4 years ago
Friendship ended with LSTM, transformer is now my best friend.
@EvgenSuit · 2 years ago
As far as I know, there are only a few transformers for audio problems.
@electric_mind · 2 years ago
LSTMs generally perform better when it comes to short sequences, and remember, the LSTM is the revolution that led to the birth of the Transformer. I love both of them!
@klam77 · 1 year ago
LSTM sequentialization is kludged inside transformers. Pay attention.
@jamesbedwell4715 · 1 year ago
Same but with GRU
@st0a · 1 year ago
Friendship ended with Transformers, Retentive Networks are now my best friend.
@RobotProctor · 3 years ago
I've watched this maybe 5 times over 1 year, each time getting more and more from it. I think I finally intuitively understand how this works. Thanks for your work and your time!
@niedas3426 · 2 years ago
This has been my experience with ML in general: I have to re-read papers and books over and over again, and each time I understand more. It's hard, but it pays off to finally grasp such an almost mystical concept.
@StoutProper · 1 year ago
It’s a little bit more complicated than just predicting the next word based on the last, which is the take a lot of people have on it.
@electric_sand · 1 year ago
@@niedas3426 How's it going... Honestly, this is how I feel sometimes, having to go through multiple videos and blog posts just to grasp concepts.
@niedas3426 · 1 year ago
@@electric_sand Honestly, still making steady progress. I am now at a place where I am much, much further. I've mainly been preoccupied with datasets (e.g. reducing file storage size, faster reading and calculations, pytorch iterdatapipes) and realised it'd help me to go back more to the fundamentals (linear algebra, calculus, probability, pandas, numpy, data structures, builtin methods etc). It's been fun, overall. :)
@electric_sand · 1 year ago
@@niedas3426 Thanks for your response. Same here, I decided to go back to the fundamentals as well...I simply got tired of struggling through papers. Wish you the best mate.
@herp_derpingson · 5 years ago
I was searching for a channel like "Two Minute Papers" but with videos that aren't two minutes long and go in depth. I think I found it! Subbed!
@TimKaseyMythHealer · 1 year ago
Finally, someone is drawing vectors to describe what is meant by encoding with vectors, and how the vectors relate to one another. So many talk about this, but barely understand the details.
@kema8628 · 6 years ago
The explanation of querying a key-value pair is really nice
@gorgolyt · 3 years ago
I recommend looking at the paper, because they use exactly this analogy. I found their description very helpful: "An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key."
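In code, that quoted definition maps almost line for line onto a few matrix operations. A minimal NumPy sketch of scaled dot-product attention (the toy shapes and variable names are illustrative assumptions, not taken from the paper's code):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v)."""
    d_k = Q.shape[-1]
    # Compatibility function: dot product of each query with each key, scaled.
    scores = Q @ K.T / np.sqrt(d_k)        # (n_queries, n_keys)
    weights = softmax(scores)              # each row sums to 1
    # Output: weighted sum of the values.
    return weights @ V                     # (n_queries, d_v)

# Toy example: 3 queries attending over 4 key-value pairs.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 16))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 16)
```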
@dariodemattiesreyes3788 · 5 years ago
Really good explanation. You know how to provide the essence without getting lost in details. Details might be important later, but the most important thing at first is the essential nature of the strategy, and you made it crystal clear. Thanks!!!
@languagemodeler · 1 year ago
It's amazing to have this explanation of the paper that is responsible for all of the AI interest and innovation happening now, described as 'interesting' shortly after it came out. I love it.
@jugsma6676 · 7 years ago
By far the best explanation of the paper "Attention Is All You Need". Well explained. Thanks, Yannic Kilcher.
@jugsma6676 · 4 years ago
@bunch of nerds, BERT is also a transformer, but with bidirectional (forward and backward) movement over the input sentence. And GPT is a generative (autoregressive) version of the Transformer. Both are language models able to predict and understand input sentences.
@jugsma6676 · 4 years ago
@bunch of nerds, it's much better to use built-in, stable tools than to build from scratch.
@clray123 · 4 years ago
It's still like listening to a bad student struggling to explain something they don't themselves understand. And shame on the Google researchers for doing such a shit job explaining themselves, but I guess that's typical of the majority of research papers out there: these people just don't care to teach their ideas to others (except maybe a very narrow circle with whom they have already communicated via other channels).
@rednas195 · 1 year ago
@@clray123 What parts do you think are explained poorly? To me it feels like Yannic understands the paper quite well, but I'm interested in what you think he might not understand all too well.
@deleteme924 · 7 years ago
You have the best videos about machine learning I've seen, comparable only to perhaps 3Blue1Brown, though his videos aren't about topics this advanced. It would be really nice if you could make more!
@snippletrap · 4 years ago
Arxiv Insights and Henry AI are pretty good too
@ambujmittal6824 · 4 years ago
@@snippletrap Arxiv Insights sadly stopped posting a long time back, and I personally find Henry AI's discussions superficial. Try Chris McCormick and the reading groups by Rachael, though. :)
@peterhojnos6705 · 3 years ago
Who do you mean by “Rachael reading groups”?
@48956l · 3 years ago
Definitely both great channels but the comparison doesn't do justice to just how good 3b1b's animations are. Here Yannic writes on a tablet lol. Not really comparable.
@mdnayemuddin5595 · 3 years ago
I just got a clear understanding of how the positional encoder works here. Kudos to you. Great Explanation!
@owenmarkley446 · 1 year ago
This is by far the best explanation I've seen of this paper. I'm writing a review of this paper for a class and wouldn't have been able to do it without your video! Immensely grateful!
@prashanthkurella4500 · 1 year ago
Who knew this paper would change how we look at sequences forever
@akhilvenkataraju7791 · 4 years ago
Thank you so much, Yannic Kilcher. The paper seemed complex, but you "encoded" it, performed "multi-head attention", and "decoded" it in such a simple way (: An amazing job! Undoubtedly the best explanation.
@Luxcium · 10 months ago
This sounds like someone reading a paper without realizing it would become the third biggest thing to happen to humanity, after a pandemic and, from my own perspective, an invasive war in Europe: the spark of AI that came with ChatGPT and the expansion of generative imaging like Stable Diffusion and Midjourney 😅🎉🎉🎉🎉 I would love to know how many subscribers you got from back then to just before ChatGPT, and from ChatGPT up to nowadays 😅😅😅😅 You are such an amazing communicator ❤
@YtongT · 4 years ago
An amazing explanation, truly amazing. I can't say how much I appreciate you putting the dot product and softmax into intuitive and easy-to-understand words. Very grateful.
@rommeltito123 · 3 years ago
Good that you were interrupted at 17:15. I had to strain my ears and go full volume to hear you. After that it was better.
@BrettHannigan · 2 years ago
Excellent explanation of Transformers. Clear, easy to follow, and great information. Thanks!
@TijsZwinkels · 1 year ago
Yeah, I'm late to the party, but I'd say that this video is still very relevant. I've read the paper several times and watched multiple blog posts and videos, but the Q, K, V mechanism especially never really clicked until watching this. Using dot products between Q and K as a lookup mechanism: ingenious! Thanks for this video!
@shandou5276 · 5 years ago
Very well done! I agree with the other comments that this is the clearest explanation I have seen so far. Thanks for the great work!
@vijeta268 · 4 years ago
You have done an excellent job of explaining the attention method in simple words. Thanks so much!
@tassoskat8623 · 4 years ago
Great video, and very unique among machine learning videos on YouTube. Thank you!
@magnuswiklander8204 · 2 years ago
Fun to see this today after all the recent successful transformer results! (June 2022) Thanks Yannic, keep it up!!
@fahds2583 · 4 years ago
You have such a cool state of mind... it really adds to making your teaching style more interesting.
@lleger · 6 months ago
I can't believe I just learned the intuition behind softmax. Yannic, your videos are pure gold. I hope life is treating you well!
@deaths1l3nce · 4 years ago
Thank you very much! This has helped me a lot. All I could find on this specific paper was confusing and hard to understand, I think it was explained extremely well in your video! Please make more of these, I think you might help lots of people :D
@renehaas7866 · 4 years ago
I really appreciate that you are making these videos.
@patpearce8221 · 2 years ago
So:
1. The words are converted into vector embeddings, then positionally encoded using the sine and cosine functions.
2. This vector is copied, with one copy passed through the multi-head attention layer to be contextualised: each word is projected into a query, key, and value, which are learned. There are separate query, key, and value vectors for each word.
3. The queries are matrix-multiplied with the keys (after being passed through a linear layer), divided by the square root of the dimensionality of the key, passed through a softmax, and then matrix-multiplied with the values.
4. The result is added to the other copy of the positionally encoded vector.
5. It is then normalised.
6. It then passes into the feed-forward network, which consists of two linear layers with a ReLU in between.
7. In the linear layers it is matrix-multiplied with a weight matrix and then has a bias added, both of which are learnt.
8. The ReLU is applied position-wise, to each word.
9. It is added to the residual (the vector from before it passed through the FFN).
10. It is then normalised again... then voilà. What am I missing here?
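Read as code, those steps become a single encoder block. A rough single-head NumPy sketch with untrained random weights (the parameter names and toy sizes are assumptions for illustration, and multi-head attention is collapsed to one head):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    mu, var = x.mean(-1, keepdims=True), x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def encoder_block(x, W_q, W_k, W_v, W1, b1, W2, b2):
    # Steps 2-3: project to queries/keys/values, score, softmax, weight the values.
    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)      # row-wise softmax
    attended = weights @ V
    # Steps 4-5: residual connection, then layer norm ("Add & Norm").
    x = layer_norm(x + attended)
    # Steps 6-8: position-wise feed-forward network, ReLU between two linear layers.
    ff = np.maximum(0, x @ W1 + b1) @ W2 + b2
    # Steps 9-10: second residual + norm.
    return layer_norm(x + ff)

d, d_ff, n = 16, 64, 5                             # toy sizes
rng = np.random.default_rng(0)
shapes = [(d, d), (d, d), (d, d), (d, d_ff), (d_ff,), (d_ff, d), (d,)]
params = [rng.normal(size=s) * 0.1 for s in shapes]
x = rng.normal(size=(n, d))                        # n "words", already embedded
print(encoder_block(x, *params).shape)             # (5, 16)
```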
@Don-gk9ss · 6 years ago
The best transformer video I have watched. Well explained.
@aidangomez6004 · 4 years ago
This is a great summary, thanks for making this!!
@spinner4 · 1 year ago
Why couldn't there be such a YouTube explanation from the authors of the paper? It would be very helpful for humanity right now. But this is quite helpful.
@kevinelezi7089 · 2 months ago
Because publishing a paper doesn't directly prove the skill of being able to explain it this way.
@RobotProctor · 4 years ago
Question: if there is a finite max length of an input/output sequence, why do you need a positional encoding? Wouldn't the network have a static place for the 1st word, 2nd word, ..., nth word in its inputs? I'm struggling to understand the need for the positional encoder.
@RobotProctor · 4 years ago
Never mind, I think I understand. Since the multi-head attention mechanism uses the same scaled dot-product computation for each word in the sequence (and not different parts of a NN, for example), the positional encoding is necessary in order to get different answers for the same word at different locations in the sequence.
@YannicKilcher · 4 years ago
you got it :)
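For reference, the paper's sinusoidal encoding that resolves this is only a few lines: PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)). A short NumPy sketch (the sizes are arbitrary) showing how the same word at two positions receives different inputs:

```python
import numpy as np

def positional_encoding(max_len, d_model):
    pos = np.arange(max_len)[:, None]             # (max_len, 1)
    two_i = np.arange(0, d_model, 2)[None, :]     # even dimension indices
    angle = pos / np.power(10000.0, two_i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angle)                   # even dims get sine
    pe[:, 1::2] = np.cos(angle)                   # odd dims get cosine
    return pe

pe = positional_encoding(max_len=50, d_model=16)
# The same word embedding added to pe[0] vs. pe[3] yields different vectors:
print(pe[0, :4])
print(pe[3, :4])
```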
@RealStonedApe · 1 year ago
Love how 'All you need is attention' also applies to me in terms of understanding this video. Time to chug down some Adderall and take notes!! Also, probably not a good start when I have no idea what a vector even is... Take it in bit by bit. Robert Pirsig reading Thoreau style, ya dig?! Anyways, plz pray for me. Any God, Joseph Smith, Allah, Bill Murray, etc. And he will do. Pray for my attention, pray for my soul, pray for Bill Murray. Love❤🎉
@sophiaxia3240 · 3 years ago
by far the most intuitive explanation. Thanks!
@carlkenner4581 · 1 year ago
This will never catch on. (Kidding)
@PaulFidika · 1 year ago
I'm watching the history of AGI being built right here
@chandlerclement1365 · 6 years ago
Excellent video, thank you so much for illustrating these concepts so clearly.
@Julian-tf8nj · 5 years ago
VERY helpful, thanks! I'd love to see a "part 2" ...
@qidichen1756 · 4 years ago
One of the best explanations!!!
@astrobearmusic1977 · 3 years ago
I had to revisit this video several times, but I think transformers finally clicked for me. Thank you!
@alexandrostsagkaropoulos · 1 year ago
Just exceptional explanation. You clear things up so much!
@MrChristian331 · 3 years ago
What does "Add & Norm" mean at each step of the network in the architecture?
@lsqshr · 7 years ago
Really awesome job! I was puzzling over what the key-value pairs are. Thanks a lot!
@revanthvejju3727 · 1 year ago
the paper that changed everything
@Darthvanger · 3 years ago
21:30 - thanks for the great softmax explanation! I've had the "aha" moment :)
@teddy5474 · 2 years ago
Best explanation I've seen on this topic!
@anisakhlyan8581 · 4 years ago
Thank you! This is a very good explanation, which I actually used in presenting this paper. Cheers, man!
@ostensibly531 · 2 years ago
Love the relaxing voice. Way better than reading the paper myself. Now I can be on the elliptical and still ingest the gist of papers. Thank you for making this!
@jsphyan · 1 year ago
This is beautiful, I really appreciate your work! Thank you
@VinBhaskara_ · 7 years ago
Great explanation. Please keep posting such summaries of great papers. Thanks!
@MadhavanSureshRobos · 1 year ago
How far we've come in 5 years!
@arslanali900 · 10 months ago
You are amazing!
@WaylonFlinn · 2 years ago
You need to remake this video. You've gotten so much better at doing this since you made this video and this topic is so foundational.
@julinamaharjan6987 · 4 years ago
Very intuitive explanation. Thank you!
@thearianrobben · 4 years ago
So: a representation in one natural language, into the universal language of math, into another natural language.
@nchahine · 3 years ago
I always thought about doing a YouTube channel like this, but I guess I don't need to because you are so good at this. Thanks!
@starlite5097 · 1 year ago
Thanks, nice video. You've come a long way since then, I'm sure, especially with the Open Assistant stuff.
@olegshpynov · 5 years ago
Great explanation of the transformer model. Thanks a lot!
@simons6512 · 3 years ago
Super explanation, one of the best I've found so far. I think it's cool to see someone from Switzerland engaging on this platform like that. Keep it up!
@ziku8910 · 1 year ago
Very intuitive explanation here, thank you!
@ankitbhardwaj1956 · 5 years ago
Thanks a lot for this explanation video!!
@10x_discovery · 9 months ago
Man. I just found your channel. All the best insh'Allah
@bayesianlee6447 · 1 year ago
These 6 years since "Attention Is All You Need" have just been crazy rapid growth.
@fisherh9111 · 1 year ago
This is excellent. Thank you so much for sharing!
@prasitamukherjee5864 · 3 years ago
Thank you for the super neat explanation; it cleared up a lot of stuff.
@이효건-o4o · 6 years ago
Thank you so much. Your videos are so helpful.
@michaelmuller136 · 5 years ago
Thank you, that was very informative and explained well!
@hamedgholami261 · 2 years ago
I really understood the subject; thanks for your clear explanation.
@xingyubian5654 · 2 years ago
Always wondered what keys, values, and queries are. Thank you for the clear explanation!
@vikaskumarjha9 · 5 years ago
Thank you so much. You explain it so well in very simple terms.
@arunantony3207 · 4 years ago
Great explanation!
@kevind.shabahang · 4 years ago
Thank you. Very clear.
@goelnikhils · 1 year ago
Such a clear explanation of attention. I was struggling to understand attention; I must have watched over 20 videos on it and got no clarity.
@bpolat · 8 months ago
This paper changed the world. The AI revolution started after this paper.
@suyashshrivastava8317 · 3 years ago
Thank you so much for this. Excellent explanation
@sebchap24 · 1 year ago
Quite an amazing explanation! Thanks a lot.
@sahanagk4011 · 3 years ago
This explanation is amazing!! Thank you for this
@rupjitchakraborty8012 · 4 years ago
This is a great video. Please make a video on hierarchical neural networks.
@danecchio6621 · 3 years ago
Thank you so much for the explanation.
@aminzaiwardak6750 · 5 years ago
Thanks a lot, you explained it very well.
@dailygrowth7967 · 3 years ago
Thanks, I really enjoy your content!
@protikkumarbiswas324 · 5 years ago
What is the PDF reader you are using? BTW, thanks for the explanation. I was really confused about what these keys are, and you made that very clear to me. Thanks, Yannic Kilcher.
@YannicKilcher · 5 years ago
Thanks for the feedback. I'm using OneNote.
@protikkumarbiswas324 · 5 years ago
@@YannicKilcher Hello, can you help me with a project I am currently working on? It's about distractor generation without context. All I have is a train set with questions, answers, and distractors (three each). The task is to predict the distractors for a given question-and-answer pair. Please give me some suggestions on where to start, at least.
@YannicKilcher · 5 years ago
I'd look into seq2seq models, for example current NMT systems, and then look at it as a "translation" task. That assumes you have enough data.
@protikkumarbiswas324 · 5 years ago
@@YannicKilcher Well, I thought about that and went through some NMT papers, but my test data (in which distractors are not provided) has many important words that are out of the scope of the train-set vocab. Is there a way to use pre-trained embeddings (like GloVe) here? (BERT and other Q/A techniques will also not help here, as we don't have a context.) And most translation is done with a single sentence: would you advise me to concatenate both sentences, or do you have another solution? My findings: I was thinking of training two seq2seq models for Q and A, concatenating them, then using another seq2seq on top of that to get a distractor (there would be 3 such seq2seq models on top of the concatenated layer). And as I was thinking of training on GloVe embeddings, I would set the batch size to the length of the sentence and input one GloVe-embedded vector at a time. Please tell me if this is possible, as soon as possible, as I am running out of time. (Please excuse me if I wrote something rubbish, as I am a noob in deep learning :) )
@YannicKilcher · 5 years ago
I don't think your solution is going to run. I would concatenate Q and A, then input that into a single seq2seq model (the initial layer can be pre-trained GloVe embeddings or something like that). The output of that model will again be a sequence. Train that sequence to match one of the distractors (choose one at random during training). As for batch size, that usually refers to how many of those sequences you train in parallel (one batch), not how long the sequences are. (For some implementations, you'll have to pad all sequences in a batch to the same length.)
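A tiny sketch of that data framing: concatenate question and answer into one source sequence and pair it with one randomly chosen distractor as the target. The separator token and the toy example below are assumptions for illustration, not from any real dataset:

```python
import random

SEP = "<sep>"  # assumed separator token between question and answer

def make_pair(example):
    """example: {"question": str, "answer": str, "distractors": [str, ...]}"""
    src = f'{example["question"]} {SEP} {example["answer"]}'
    tgt = random.choice(example["distractors"])  # resampled each training pass
    return src, tgt

toy = {"question": "What gas do plants absorb?",
       "answer": "carbon dioxide",
       "distractors": ["oxygen", "nitrogen", "hydrogen"]}
print(make_pair(toy))  # e.g. ('What gas do plants absorb? <sep> carbon dioxide', 'oxygen')
```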
@sarahpanda1167 · 5 years ago
Very helpful!! Thank you!
@AlimamiHD · 6 months ago
Bro reminded me of that one dude who posted a video in 2011 begging people to buy $1 of Bitcoin.
@nguyentrung1452 · 3 years ago
Great explanation. Thank you, master.
@jackiekuen · 5 months ago
Thank you, sir, very clear explanation.
@halehdamirchi146 · 4 years ago
This was really helpful, thank you!
@bernhardvoggenberger9850 · 3 years ago
As a student, I find your videos very helpful!
@goatpepperherbaltea7895 · 1 year ago
5 years later, I'm out here like, damn.
@marjansherafati6913 · 4 years ago
Thank you very much, amazing explanation! 🙏🏼🙏🏼
@jabusch24 · 5 years ago
Very good intro. Many videos don't focus on visual explanation, which you definitely cover. I'd be thrilled to see a video that goes more into depth: how exactly the decoding is done once it's trained, and how embeddings could be obtained for other tasks. But other than that, very, very well done!
@YtongT · 4 years ago
"that, that looks ugly" 23:53 with such an innocent voice, that software did you dirty
@tyfoodsforthought · 4 years ago
This was wonderful. Thank you!!!
@LightFykki · 6 years ago
Great explanation, thanks!
@rupaksarkar3715 · 5 years ago
Can you do one for the paper 'Residual Attention Network for Image Classification'? There they've tried to use this concept for CNNs.
@ЗакировМарат-в5щ · 4 years ago
I do not understand how the output probabilities go to the output embedding at 12:05.
@alexanderyaroshevich7509 · 4 years ago
It seems like the network's outputs are the probabilities of the next word, and those then become the embeddings.
@ЗакировМарат-в5щ · 4 years ago
@@alexanderyaroshevich7509 So actually it is used like a regular RNN with a sliding window?
@alexanderyaroshevich7509 · 4 years ago
@@ЗакировМарат-в5щ Not really: there is neither state nor recurrence. I find this article much more illuminating: jalammar.github.io/illustrated-transformer/
@ЗакировМарат-в5щ · 4 years ago
No recurrence?!!
@shrikanthsingh8243 · 5 years ago
Thank you so much, it was a very good explanation.
@garrettosborne4364 · 3 years ago
Thanks Yannic, great videos.
@alfred17686 · 5 years ago
This was such a good explanation. I've been trying to really understand these, but until now I haven't found a good resource. Cheers man!
@eaglesofmai · 3 years ago
It's very well explained; at 9 minutes in I already got the answer about attention vs. LSTMs. I was searching for it on Google for a long time.