What are Transformer Models and how do they work?

  132,374 views

Serrano.Academy

A day ago

Comments: 170
@zafersahinoglu5913 1 year ago
Luis Serrano, this set of 3 videos explaining how LLMs and transformers work is truly the best explanation available. Appreciate your contribution to the literature.
@anipacify1163 10 months ago
Best playlist on transformers and attention. Period. There is nothing better on KZbin. Goated playlist. Thank you so much!
@tonym4953 7 months ago
This is what you get when the teacher is concerned with actually imparting knowledge and learning as opposed to showing off how sophisticated their own knowledge is. I am incredibly appreciative of his humility and passion. An example to follow in my own life.
@RuliManurung 8 months ago
What an awesome series of lectures. I spent 10 years teaching undergraduate-level Artificial Intelligence and NLP courses, so I can really appreciate the skill in breaking down and demystifying these concepts. Great job! I would say the only thing missing from these videos is that you don't really cover how the learning/training process works in detail, but presumably that would detract from the focus of these videos, and you cover it elsewhere.
@BunnyLaden 14 days ago
Thank you Luis! This series is THE BEST explanation I've seen on attention and transformers. Your talent for clear explanation is truly a gift to those of us seeking knowledge.
@anupamjain345 8 months ago
Thanks! Never came across anyone explaining anything in such great detail, you are amazing!!!
@SerranoAcademy 8 months ago
@anupamjain345, thank you so much for your really kind contribution, and for your nice words!
@YannickBurky 10 months ago
This video is incredible. I've been looking for material to help me understand this mess for quite some time now, but everything about this video is perfect: the tone, the speed of speech, the explanations, the hierarchy of knowledge... I'm screaming with joy!
@lightninghell4 1 year ago
I've seen many videos on transformers, but this series is the first where I understood the topic at a deep enough level to appreciate it.
@rudyfigaro1861 7 months ago
Luis has a talent for breaking down complex problems into simple steps and then building the whole thing back up so that ordinary people can understand.
@Aleks-ng3pp 11 months ago
In the previous video you said you would explain how to compute the Q, K, and V matrices in this one, but I don't see it here.
@Glomly 11 months ago
This is the BEST explanation you can find on the internet. I'm serious.
@analagunapradas3840 10 months ago
Agree, definitely the BEST, as always ;) Luis Serrano
@SourjyaMukherjee 9 days ago
Hard to believe that someone came up with better content on transformers than 3B1B
@matin2021 7 months ago
All my friends and I have watched every single one of your videos. Great content, well-phrased explanations and... really great. Keep making great videos 🙌
@nc2581 7 months ago
Thank you for this fantastic video series on Transformers! The first two videos were particularly enlightening. I'm fascinated by how the query, key, and value vectors evolve before each attention module. It would be wonderful to gain a deeper understanding of the encoder-decoder architecture, particularly why the first attention belongs to the encoder while the subsequent ones are part of the decoder. Also, I'm intrigued by the visualization of linear transformations at each step during training, especially when outputs are recycled back into the decoder. Eagerly awaiting more insights!
@jazznomad 1 year ago
Very clear. You are a natural teacher
@sandhiyar8763 1 year ago
Absolutely worth the watch! The clarity in Luis's explanation truly reflects his solid grasp on the content.👍
@MohamedHassan-pv1xl 1 year ago
I feel super lucky to have come across your videos. Normally I don't comment, but I saw the 3 videos of the series and I'm amazed at how you explain complicated topics. Your efforts are highly appreciated.
@zihaoli962 5 months ago
Such a wonderful video! It lays quite a good foundation, and while watching the 3 videos a lot of questions came to mind, e.g.: 1) number of heads vs. accuracy/computation/space, 2) what the trainable sizes of Q, K, and V are, 3) what the data shapes of all the intermediate steps are for the whole framework, and how back-propagation is performed, and so on. After watching the videos you can proactively go search and find those answers, and your level of understanding gets boosted further. That's the good thing about this video: it takes a decent amount of effort to understand, but it doesn't act like a show-stopper (as, I'd say, the paper "Attention Is All You Need" does). It starts you off, and you can pick up more along the learning journey. A big thanks to Serrano!
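For questions 2) and 3) above, here is a rough single-head sketch of the shapes involved. The sizes are assumptions for illustration only, not the dimensions used in the video:

```python
import numpy as np

seq_len, d_model, d_head = 4, 8, 8            # toy sizes (assumptions)

X = np.random.randn(seq_len, d_model)          # token embeddings, one row per word

# Trainable parameters: three projection matrices (plus biases in practice)
W_q = np.random.randn(d_model, d_head)
W_k = np.random.randn(d_model, d_head)
W_v = np.random.randn(d_model, d_head)

Q, K, V = X @ W_q, X @ W_k, X @ W_v            # each (seq_len, d_head)

scores = Q @ K.T / np.sqrt(d_head)              # (seq_len, seq_len) word-to-word similarities
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row

output = weights @ V                            # (seq_len, d_head) context-aware vectors
```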
@narasimmanramiah3062 5 months ago
The set of 3 videos is excellent. I could understand the transformer architecture very well with the visuals. Kudos to Luis Serrano!!
@alidanish6303 1 year ago
Finally the 3rd video of the series, and as usual with the same clarity of concepts as expected from Serrano. The way you have presented these esoteric concepts has produced pure gold. I am following you and Jay from Udacity, and you guys have made a real contribution in explaining a lot of black magic. Any plans to update the grokking series...?
@jaeen7665 4 days ago
This is a fantastic explanation of transformers!!!
@SkyRiderJavelin 1 year ago
What an excellent series on Transformers, it really did the trick!!! The penny has finally dropped. Thanks very much for posting; this is very useful content. I wish I had come across this channel before spending 8 hours doing a course and still not understanding what happens under the hood.
@luizabarguil5214 8 months ago
A clear, concise and conclusive way of explaining Transformers! Congrats, and thank you so much for sharing it!
@shaktisd 1 year ago
Amazing series on Transformers. I never imagined the true rationale behind Q, K, V... it is actually clear after watching your video. Thanks a lot.
@patriciasilvaoliveira6130 8 months ago
Amazingly clear and encouraging to learn more. Thanks, maestro!
@vishalmishra3046 4 months ago
25:17 Hello Luis, *Positional Encoding* enables a Transformer to handle multiple words in parallel, since the ordering of words is built in thanks to PE. Before transformers, recurrent models using Long Short-Term Memory (LSTM) were slow because words had to be processed serially, since order and sequence matter. In addition to attention, PE was a big innovation in Transformer models, making them highly performant and so successful today.
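As a companion to this comment, here is a minimal sketch of the sinusoidal positional encoding from the original "Attention Is All You Need" paper. The video may illustrate positions differently, so treat this as one common formulation rather than the video's method:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Return a (seq_len, d_model) matrix of sinusoidal position encodings."""
    positions = np.arange(seq_len)[:, None]          # (seq_len, 1)
    dims = np.arange(d_model)[None, :]               # (1, d_model)
    # Each pair of dimensions uses a different wavelength.
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])             # even dimensions: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])             # odd dimensions: cosine
    return pe

# The encodings are simply added to the word embeddings, so the same word
# at different positions ends up with a different vector:
# embeddings = embeddings + sinusoidal_positional_encoding(num_tokens, d_model)
```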
@johnny1966m 1 year ago
Thank you Mr. Serrano, it was very educational and lectured in a very good way. In relation to positional encoding, the example with arrows gave me the idea that the purpose of this stage is to make sure that only the correct positions of words in the sentence cluster together, while the incorrect ones diverge, so the neural network can distinguish their positions in the sentence during training.
@GameDevCoffeeBreak 6 months ago
This is the best video I have watched about the subject; it explains the concepts in such a way that everyone can understand them! Thumbs up and subscribing right now!
@aminemharzi7222 8 months ago
The best explanation I found
@ehsanpartovi8279 4 months ago
This was such a thorough and precise explanation. And the visualizations! Great job. We would like to see more videos like this, particularly about other generative models such as those used for image and video generation, i.e. Canva.
@aljebraschool 3 months ago
Delighted to learn Transformer from my mentor today! Thanks for a very clear overview!!!
@nileshkikle8112 11 months ago
Dr. Luis - Thank you for taking all the effort to create these 3 videos. Explaining complex things in the simplest way is an art! And you have that knack! Great job! I've been following your ML videos for years now and I always enjoy them. PS - Funnily enough, while typing this comment, I'm being prompted to select the next predicted word! 🙂
@vimalshrivastava6586 1 month ago
Thanks for such a wonderful explanation of a complex model. If possible, please make videos on Vision Transformers as well.
@johnschut164 1 year ago
Your explanations are truly great! You have even understood that you sometimes have to ‘lie’ first to be able to explain things better. My sincere compliments! 👊
@JohnDeBrittoL-n6x 1 year ago
Thanks!
@SerranoAcademy 1 year ago
Thank you so much for your kindness! Very appreciated. :)
@BhuvanDwarasila-y8x 3 months ago
That's fire! One quick thought I had about fine-tuning: a good model should be able to respond to human behaviors really well, and in the way the business wants. For example, the model should not simply respond to someone sad with "oh that's great, keep crying!", but you never know what the majority of the web context may look like. At the same time, this is something to be very cautious about from a user perspective, because businesses can easily manipulate the model to push national, business, or other motives that may be unclear to us.
@KC-dr4gj 5 months ago
For word2vec, my understanding is that the embeddings are the weights from the input layer to the hidden layer, not the hidden layer itself, as you mentioned at 24:57?
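A tiny numerical sketch of the point raised in this question (made-up sizes, not the video's notation): with a one-hot input, the hidden activations equal exactly one row of the input-to-hidden weight matrix, which is why those weights are commonly read off as the embeddings.

```python
import numpy as np

vocab_size, d_embed = 5, 3
W_in = np.random.randn(vocab_size, d_embed)   # input-to-hidden weights

one_hot = np.zeros(vocab_size)
one_hot[2] = 1.0                              # token with index 2

hidden = one_hot @ W_in                        # hidden-layer activations
assert np.allclose(hidden, W_in[2])            # ... identical to row 2 of W_in
```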
@anuragdh 11 months ago
Thanks... for the first time (not that I've gone through a lot of them :)) I was able to appreciate how the different layers of a neural network fit together with their weights. Thanks for making this video with the example you used.
@karunamudliyar5625 1 year ago
The best video that I've watched on Transformers. Very clear explanation.
@Tuscani2005GT 11 months ago
Seriously some of the best videos on the topic. Thank you!
@faraazmohammed3693 11 months ago
Liked the video while watching, at 7:15. Crystal clear explanation. Good job, thank you Serrano, and I appreciate your work.
@nazmulhaque8533 1 year ago
Excellent presentation. Waiting to see more videos like this. I would like to request a series about aspect-based sentiment analysis. Best wishes...
@AmanBansil 8 months ago
Incredible - just found this channel and I am about to pore over all the videos. Thank you so much for your effort.
@incognito3k 11 months ago
As always amazing content. We need a book from Luis on GenAI!
@GiovanneAfonso 9 months ago
The best explanation available
@markuskaukonen3903 1 year ago
Very nice stuff! This is the first time somebody explained clearly what large language models are. Especially the second video was very valuable for me!
@jamesgeller9975 4 months ago
This is VERY GOOD. I also watched several other videos and this one helped the most. The one thing I don't understand is how the system decides that there are two apples. I felt that one "apple" would be pulled back and forth between "phone" and "orange".
@utkarshkapil 1 year ago
Beautifully explained. Loved how you went ahead to also teach a bit of the pre-requisites!
@AboutOliver 1 year ago
You have a skill for teaching! Thanks so much for this series.
@Nerdimo 9 months ago
8:53 Do you think it's a plain feed-forward neural network, or something like an RNN (an LSTM, to be specific)? Just a thought.
@lakshminarayanan5486 1 year ago
Hi Luis, excellent material, and you know how to deliver it to perfection. Thanks a lot. Could you please explain a bit more about positional encoding, and how the residual connections, layer normalization, and encoder-decoder components fit into the very same example?
@samirelzein1095 1 year ago
The great Luis! I am recommending you in my job posts; your content is a prerequisite before working for us.
@SerranoAcademy 1 year ago
Wow thanks Samir, what an honor! And great to hear from you! I hope all is well on your end!
@samirelzein1095 1 year ago
@SerranoAcademy The honor is mine! You are the artist of going inside it, seeing the wiring and connections, and delivering them as seen to all people. That's the job of prophets and saints. Bless you. I am doing great, plenty of text and image processing currently :) digitizing the undigitized!
@ikheiri 1 year ago
Best video I've come across that explains the concepts simply. Helped tremendously in my learning endeavor to create a mental model for neural networks (there's a joke there somewhere).
@SerranoAcademy 1 year ago
Thanks! Lol, I see what you did there! :)
@edwinma9933 1 year ago
This is amazing, it deserves 10M views.
@poussinet2 1 year ago
Thank you for these really high quality videos and explanations.
@jukebox419 1 year ago
You're the greatest teacher that ever lived in the history of mankind. Can you please do more videos regularly?
@SerranoAcademy 1 year ago
Thank you so much! Yes I'm definitely working hard at it. Some videos take me quite a while to make, but I really enjoy the process. :) If you have suggestions for topics, please let me know!
@harithummaluru3343 11 months ago
Great explanation. Perhaps one of the best videos.
@skbHerath 10 months ago
Finally, I managed to understand the concept clearly. Thanks!
@timothyjoubert8543 11 months ago
Thank you for this series - wonderfully explained. 💯
@sohamlakhote9822 8 months ago
Thanks a lot man!!! You did a fantastic job explaining these concepts 🙂
@usefbob 1 year ago
This series was great! Appreciate all the time and effort you've put into them, and laid out the concepts so clearly 🙏🙏
@dragolov 1 year ago
Deep respect, Luis Serrano! Thank you so much!
@abdelrhmanshoeeb7159 1 year ago
Finally! I have been waiting for it for a month. Thank you a lot.
@stephengibert4722 4 months ago
Emmy Noether did not "invent" abstract algebra. It is very disconcerting that when I googled that question as a test I came up with, in great big capital letters, "NOETHER..." etc. Once these models are relied upon, we are in big trouble, because such errors will be embedded in the embeddings, and so on. Of course this is really peripheral to the topic at hand, and I very much appreciate your excellent job at teaching the basic ideas of transformers. Thanks very much!
@wanggogo1979 1 year ago
Finally, I waited until this video was released.
@19AKS58 1 month ago
Clearly the best & clearest explanations. One question: after the "Write a story." prompt, when the attention block selects possible words such as "Once" or "There", does it go through the ENTIRE vocabulary to do so? Doesn't that take a very long time? Are there shortcuts to make it more efficient? Thx
@SerranoAcademy 1 month ago
Great question! Yes, transformers do go through all the words, but in a really efficient way. First of all, matrix multiplications are very optimized and parallelized, so these are done very fast. Also, sometimes shortcuts are applied, like taking the top-k max, in order to not look at the whole vector of all words each time. You may lose a word here and there, but it goes much faster.
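A minimal sketch of that top-k shortcut applied to the final scores over the vocabulary (an illustration with assumed toy sizes, not the channel's code):

```python
import numpy as np

def sample_next_token(logits, k=50, temperature=1.0):
    """Sample the next token id, looking only at the k highest-scoring tokens."""
    logits = np.asarray(logits, dtype=np.float64) / temperature
    top_ids = np.argpartition(logits, -k)[-k:]      # indices of the k largest logits
    top_logits = logits[top_ids]
    probs = np.exp(top_logits - top_logits.max())    # softmax over the top k only
    probs /= probs.sum()
    return int(np.random.choice(top_ids, p=probs))

# toy vocab-sized vector of scores produced by the model's final layer
logits = np.random.randn(50_000)
next_id = sample_next_token(logits, k=50)
```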
@chrisogonas 1 year ago
Well illustrated! Thanks for sharing.
@blueberryml 1 year ago
excellent -- clear and concise explanations
@jorovifi89 1 year ago
Great work as always, thank you keep them coming
@vasimshaikh9857 1 year ago
Finally, the 3rd video is here 😮😅 Thank you sir, I have been waiting for this video since last month; every day I checked your channel for the 3rd video. Thank you so much sir, you're doing great work 👍
@VerdonTrigance 1 year ago
24:59 - again... who defines these layers and the network that set the word vectors, and how? It all comes down to that. How do we know that cherry and apple have similar 'properties'?
@danieltiema 9 months ago
Thank you for explaining this so well.
@silvera1109 9 months ago
Great video, hugely appreciated, thank you Luis! 🙏
@leenabhandari5949 8 months ago
Your videos are gold.
@dayobanjo3870 10 months ago
Great video, speaking from Abuja, the capital of Nigeria.
@SerranoAcademy 9 months ago
Ohhhh, greetings to Abuja!!! Nigerians are the kindest people, I hope to visit sometime!
@joaomontenegro 1 year ago
These videos are great!! I would love to see one about the intuition of cross-attention in, for example, the context of translation between two languages.
@SerranoAcademy 1 year ago
Thanks, great suggestion!
@mikelCold 7 months ago
Where does context length come in? Why can some models handle longer contexts than others?
@amrapalisamanta5085 7 months ago
Very good playlist
@SatyaRao-fh4ny 1 year ago
This is a great video, clarifying a number of concepts. However, I am still not finding answers to some of my questions. E.g. in this video, when the user enters "Write a story.", these are 4 tokens. But the "model" spits out a NEW word, "Once". Where is this NEW word coming from? How does the "model" even "KNOW" about such a word? Is it saved in some database/file? Is there a dictionary of ALL the words (or tokens) that the "model" has access to? And I guess the other question is: what does "training a model" actually mean, on the ground, not just conceptually? After training, is the end result some data/words/tokens/embeddings saved in some file that the "model" reads/processes when it is used later on? What are parameters? I have watched several hours of videos but have not found answers to these questions! Thanks for any help from experts!
@SerranoAcademy 1 year ago
Thanks, great questions! Yes, there is a database of tokens, and what the model does is output a list of probabilities, one for each token. The ones with high probability are the ones that are very likely to be next in the sentence. So then one can pick a token at random based on these probabilities, and very likely you'll pick one that has a high probability (and that way, the model will not always answer a question in the exact same way, but it'll have variety). The training part is very similar to a neural network. It consists of updating the weights so that the model does a better job. So for example, if the next word in a sentence should be "apple", and the model gives "apple" a very low probability, then the backpropagation process updates the weights so that the probability of "apple" increases and all the other ones decrease. The parameters are the parameters of the neural network + the parameters of the attention matrices. If you'd like to learn more about neural networks and the training process, check out this video: kzbin.info/www/bejne/eIOcmWdtf9mkr9k
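A toy sketch of the training signal described in this reply, using plain numpy and a hypothetical five-word vocabulary (real models use automatic differentiation over far larger vocabularies): the model scores every token, and the cross-entropy loss pushes the probability of the true next word, here "apple", up while the others go down.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

vocab = ["orange", "apple", "phone", "story", "once"]   # toy vocabulary (assumption)
target = vocab.index("apple")                            # true next word

logits = np.array([2.0, -1.0, 0.5, 0.3, 0.1])            # model's raw scores per token
probs = softmax(logits)
loss = -np.log(probs[target])                            # high when "apple" is unlikely

# Gradient of the loss w.r.t. the logits: probs - one_hot(target).
# Backpropagation uses this to nudge the weights so that the probability
# of "apple" goes up and all the other probabilities go down.
grad_logits = probs.copy()
grad_logits[target] -= 1.0
```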
@khaledbouzaiene3959 1 year ago
Nice, wow! But please, I still have a question: you didn't mention how words with similarities are placed close together in the embedding. I know that afterwards we assign the attention scores, but I don't get whether the embedding is a separate neural network, as in the video.
@tantzer6113 1 year ago
That closeness is achieved automatically in the end result because it’s more efficient. It isn’t something that the human designer plans for.
@SerranoAcademy 1 year ago
Yes, great question! The idea is to train a neural network to learn the neighboring words to a particular word. So in principle, words with similar neighbors will be close in the embedding, because the neural network sees them similarly. Then the embedding comes from looking at the penultimate layer in the neural network, which has a pretty good description of the words. So for example, the word 'apple' and the word 'pear' have similar neighboring words, so the neural network would output similar things. Therefore, at the penultimate layer, we'd imagine that the neural network must be carrying similar numbers for each of the words. The embeddings come out of here, so that's why the embeddings for 'apple' and 'pear' would be similar.
@khaledbouzaiene3959 1 year ago
@SerranoAcademy Thanks for clarifying. I got confused because I had thought of this as one huge neural network composed of multiple layers of smaller neural networks, where the first one is the embedding layer, not a separate one. But generally everything makes sense now, no matter the design.
@MohamedKeddache-r1o 7 months ago
What are the actual values of the V, K, and Q matrices?
@alexanderzikal7244 8 months ago
Thank you very much for all your videos! What software do you use for your presentations? Everything looks really nice, all the pictures…
@panoskolyvakis4075 9 months ago
You're an absolute legend.
@sathyanukala3409 10 months ago
Excellent explanation. Thank you.
@nafisanawrin2901 9 months ago
While training the model, if it gives a wrong answer, how is it corrected?
@karstenhannes9628 9 months ago
Thanks for the video! I particularly liked the previous video about attention, super nice explanation! However, I thought most transformers simply use a linear layer that is also trained to create the embedding instead of using a pre-trained network like word2vec.
@vankram1552 1 year ago
This is a fantastic video, by far the best on YouTube. My only feedback would be that the guitar music you use between chapters is a little abrasive and can take you out of the learning process. Maybe some calmer, more thought-provoking music along with more interesting title cards would be better.
@jamesgeller9975 4 months ago
Also, the third video shows the attention blocks in the neural network, but I don't see how that layer implements K, Q, and V.
@techchanx 1 year ago
Excellent video; it would be good to indicate what the encoder-decoder model is in transformers. Couldn't figure that out here.
@SerranoAcademy 1 year ago
Thanks! Yes that's something I'm trying to make sense of, perhaps in a future video. In the meantime, this blog post is the best place to go for that: jalammar.github.io/illustrated-transformer/
@TemporaryForstudy 1 year ago
Your videos are rocking as always. Hey, do you have any remote internship opportunities in your team or in your organisation? I would love to learn and work with you guys.
@SerranoAcademy 1 year ago
Thank you so much! Yes we have internships, check them out here! jobs.lever.co/cohere
@samarthseksaria2587 3 months ago
Could you describe the feedforward step more, with respect to the variable input length? Is it an RNN?
@william_8844 1 year ago
I like the attention explanation
@k.i.a7240 1 day ago
Again, I am speechless at the awesomeness of this video. There are many who understand these concepts, but there are actually very few who can convey them in such a clear and precise way. With that said, I have another question (:D): in the Word2Vec method, this neural network which helps us find the similarity and put the words which are more similar at closer coordinates, how is it trained? I understood that it is pre-trained, but at some point, to calculate the error, it should have had a true Y label to compare the result with, right? So they need to have reference Y labels for pre-training this neural network?
@SerranoAcademy 22 hours ago
@k.i.a7240 Thank you! Great question! There are many ways to train word2vec, but in a nutshell, both the features and the label come from the text. For example, if the sentence is "hello how are you", you could use as features "hello how are", and as label "you", since the NN is meant to find the next word in a sentence. (You could also reuse this sentence with features "hello how" and label "are", and continue the rolling window.) Another way to train word2vec is to remove a word from the sentence and guess it using the other ones. So for example in the sentence "hello how are you", you have four possible words to remove; say, you can use as features "hello, are, you", and make the NN predict the label "how". Or even vice versa: from the feature "how", you can train the network to predict the labels "hello, are, you". This way, the neural network gets used to words that are similar (since, for example, guessing the neighbors of "apple" is similar to guessing the neighbors of "pear"); thus, the neural network treats them similarly, implying that the penultimate layer of the NN would fire similar numbers for "apple" and for "pear". Hope that helped, lemme know if there are any doubts! :)
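A small sketch of building those (features, label) pairs straight from text with the rolling-window variant described above (a hypothetical helper, not the video's code):

```python
def next_word_pairs(text, window=3):
    """Build (features, label) pairs where the label is the word after each window."""
    words = text.lower().split()
    pairs = []
    for i in range(len(words) - window):
        features = words[i:i + window]      # e.g. ["hello", "how", "are"]
        label = words[i + window]           # e.g. "you"
        pairs.append((features, label))
    return pairs

print(next_word_pairs("hello how are you doing today"))
# [(['hello', 'how', 'are'], 'you'),
#  (['how', 'are', 'you'], 'doing'),
#  (['are', 'you', 'doing'], 'today')]
```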
@k.i.a7240 15 hours ago
@SerranoAcademy Thank you again. It is crystal clear now, and I appreciate the time you put into reading and answering our questions. Many thanks 🙏🙏
@ksprdk 23 days ago
Great video, thanks
@htchtc203 11 months ago
Sir, thank you for a very clear and informative series of presentations. Excellent job! May I ask something about embeddings, or word2vec? How is a NN trained on words in order to cluster them into some kind of similarity groups in a multidimensional vector space? Is this training process guided, or is it like a self-organizing map or process?
@maethu 1 year ago
I am happy like a zygote about this video! Great work, thanks a lot!
@maneeshajainz 1 year ago
I like your videos. Can you post a quiz after each of your videos?
@sriharsha580 1 year ago
Thanks for the wonderful presentation. In the previous video of the series, while discussing the relationship between query and key (building the context relationship between words), it was mentioned that the relationship between QK and V (predicting the next word) would be covered in this video. May I know whether it will be covered in another video?
@rasmusnordstrom9947 1 year ago
Is there any good explanation out there for the "second" input into the transformer structure: Outputs (shifted right) in the original paper?
@SerranoAcademy 1 year ago
Thanks for the question! Not sure exactly which second input you mean. The one coming out of the attention mechanism and into the transformer? I would say that that's an 'enhanced' vector for the input text, namely one that carries context. Let me know if that's what you meant, or if it was a different one.
@rasmusnordstrom9947 1 year ago
@SerranoAcademy If I understand correctly, when looking at Figure 1 in the original paper (Attention Is All You Need), the initial prompt is first fed into the encoder and then inserted halfway into the decoder, which finally yields the first token. As far as I understand, to generate the next token we don't simply append the first token to the initial prompt and run it through both the encoder and decoder. Instead, we insert the newly generated token directly into the decoder (or run it through a different encoder?). I'm somewhat confused about this part.
@ColinTimmins 1 year ago
Very nice! Love your work and visuals. =]
@atriplehero 1 year ago
Does the whole "Once upon a time" built so far get fed into the whole process again as input, in order to get its next word/token attached? In other words, is it cycling again and again until a "seemingly complete" answer is generated? If this is the case, it would be a whole lot of inefficiency and would explain why so much electricity is consumed!! Please answer this crucial detail.