Attention is all you need || Transformers Explained || Quick Explained

21,151 views

Developers Hutt

A day ago

Comments: 43
@LoneXeaglE a year ago
Every time I need to remember key concepts in transformers, I go back to this video. God bless you, bro.
@DevelopersHutt a year ago
Thanks bro ❤️
@akankshaaggarwal394 a month ago
A very good and simplified video. Thank you.
@Gank-SquadUK a month ago
This is really good, man. Nice video, and you've done a fantastic job explaining some very complex ideas.
@jessicahoffmann6278 2 years ago
Impressively clear and easy to follow while still being precise!
@vil9386 8 months ago
This is very clear. I understood a lot about transformers through this video. Thank you.
@DevelopersHutt 8 months ago
Glad you liked it!
@nicklam3594 a year ago
I cannot like this enough!!!!!! This channel is SOOO underrated
@DevelopersHutt a year ago
I'm glad you liked it ❤️
@shravanshravan4402 a month ago
Beautiful explanation, brother!
@adokoka a year ago
@8:47 should be: the query (Q) comes from the decoder. The key (K) and value (V) come from the encoder.
@ivant_true 11 months ago
Yep
@adscript4713 5 months ago
So at 7:47, the encoder output should actually read Ke and Ve, right?
@adokoka 5 months ago
@@adscript4713, that is correct.
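To make the data flow this thread describes concrete, here is a minimal NumPy sketch of encoder-decoder (cross-) attention, with Q projected from the decoder state and K, V from the encoder output. All shapes, names, and weight matrices here are illustrative assumptions, not the video's exact values:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

d_model = 8
rng = np.random.default_rng(42)

encoder_out = rng.standard_normal((6, d_model))    # 6 source tokens (encoder output)
decoder_state = rng.standard_normal((3, d_model))  # 3 target tokens (decoder side)

W_Q = rng.standard_normal((d_model, d_model))
W_K = rng.standard_normal((d_model, d_model))
W_V = rng.standard_normal((d_model, d_model))

Q = decoder_state @ W_Q   # queries come from the DECODER
K = encoder_out @ W_K     # keys come from the ENCODER
V = encoder_out @ W_V     # values come from the ENCODER

scores = Q @ K.T / np.sqrt(d_model)   # (3, 6) attention scores over source tokens
out = softmax(scores, axis=-1) @ V    # (3, 8) context vector per target token
```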
@ihsanhariadi1056 a year ago
Thank you, very clear visualization of the processes involved in the AIAYN (Attention Is All You Need) concept...
@natureClipsHimalayan a year ago
Best explanation of transformers ever, thank you 👍
@DevelopersHutt a year ago
Glad you liked it!
@fazelamirvahedi9911 11 months ago
Thank you for the informative video. I was wondering where I can find the image captioning video that you mentioned at the end of your video?
@DevelopersHutt 11 months ago
kzbin.info/www/bejne/aZaXmaZ-gal7kKMsi=9zsaiRaCKkmRLJEF
@AliHamza-le1of 2 years ago
Very nice explanation. Absolutely excellent!!! Thanks.
@23232323rdurian a year ago
? how about: ? or maybe ?
@ibrahimahmethan586 a year ago
You saved my day! Thank you so much!
@hansolo7516 2 years ago
Very easy to understand, nice job!
@edelweiss7680 a year ago
Hi there! Excellent work, thanks! But at 5:06, should the PE vectors be APPENDED to the word embeddings rather than ADDED?
@DevelopersHutt a year ago
Yes, correct. I will put a note in the description. Thanks ❤
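For reference, in the original "Attention Is All You Need" paper the positional encodings have the same dimension as the embeddings and are summed with them element-wise; concatenation (appending) is a variant some implementations use instead. A minimal NumPy sketch of the paper's sinusoidal encoding, with illustrative sizes:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Sinusoidal PE from Vaswani et al. (2017); d_model assumed even."""
    positions = np.arange(seq_len)[:, np.newaxis]            # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]           # (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)   # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions
    pe[:, 1::2] = np.cos(angles)   # odd dimensions
    return pe

# In the paper the encoding is ADDED to the embeddings (same shape),
# not concatenated. Dummy embeddings stand in for a learned layer here.
embeddings = np.random.randn(10, 512)                  # (seq_len, d_model)
x = embeddings + sinusoidal_positional_encoding(10, 512)
```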
@ummara2020 9 months ago
What tool do you use for animations?
@DevelopersHutt 9 months ago
After Effects
@drzobekenobe a year ago
Brilliant 👏
@JAINHEMENDRAH123 2 years ago
Very well explained
@junaidiqbal5018 2 years ago
Wow, well explained!
@AlirezaGolabvand a year ago
Very nice job. Can you please tell me what software you use to make such amazing animated presentations? It would be very helpful.
@DevelopersHutt a year ago
After Effects
@LoneXeaglE a year ago
Such amazing work! Thank you, sir!
@karthikeyasharmaparunandi 10 months ago
Thanks!
@DevelopersHutt 10 months ago
Thank you so much 🙏
@karthikeyasharmaparunandi 10 months ago
@@DevelopersHutt Keep up the good work. Your short and to-the-point videos are unique and amazing!
@FreeMind-00x a year ago
In the attention filter example, the dot product between the query "WHY" and the key "WHY" is 0. That doesn't seem possible unless the embedding of "WHY" is the zero vector.
@DevelopersHutt a year ago
Let me check
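The commenter's point holds if a vector is dotted directly with itself: that gives the squared norm, which is zero only for the zero vector. In practice, though, the query and key come from different learned projections of the same embedding, so a token's score against itself is not forced to be positive. A tiny sketch with made-up numbers:

```python
import numpy as np

# A vector dotted with itself equals its squared norm: zero only
# for the all-zero vector.
emb = np.array([0.3, -1.2, 0.7])   # toy embedding for "WHY"
print(emb @ emb)                   # 2.02 (= ||emb||^2, strictly positive)

# With separate learned projections W_Q and W_K (random here for
# illustration), the self-score can take any sign, including ~0.
rng = np.random.default_rng(0)
W_Q = rng.standard_normal((3, 3))
W_K = rng.standard_normal((3, 3))
print((emb @ W_Q) @ (emb @ W_K))   # not constrained to be positive
```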
@sudhansubaladas2322 2 years ago
Nice explanation... But how does the embedding layer convert tokens into vector representations? If you could take an example and explain this, it would be helpful for those watching the video...
@DevelopersHutt 2 years ago
The embedding layer learns relationships between words. For more info, read about the word2vec model. I'll definitely make videos about how it works in the near future.
@sudhansubaladas2322 2 years ago
@@DevelopersHutt Ok, waiting for your video... My suggestion: the description of the Transformer you are giving is already available in write-ups as well as videos, but viewers want to see an explanation using examples, like taking a sentence and showing how things happen step by step. But a nice start...
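To illustrate the reply above: an embedding layer is a learned lookup table mapping token IDs to dense vectors, trained by backpropagation along with the rest of the network. A minimal PyTorch sketch; the vocabulary and sizes are made up for illustration:

```python
import torch
import torch.nn as nn

# A made-up 5-word vocabulary mapped to integer token IDs.
vocab = {"why": 0, "is": 1, "the": 2, "sky": 3, "blue": 4}

# nn.Embedding is a learnable (vocab_size x embedding_dim) lookup table;
# its rows are updated during training like any other weight.
embedding = nn.Embedding(num_embeddings=5, embedding_dim=8)

token_ids = torch.tensor([vocab[w] for w in ["why", "is", "the", "sky", "blue"]])
vectors = embedding(token_ids)   # one dense vector per token
print(vectors.shape)             # torch.Size([5, 8])
```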
@Dhirajkumar-ls1ws a year ago
👌
@pontinhonoceu611 a year ago
You are a badass