This is gold, I hope it gets the ATTENTION it deserves.
@avb_fj • 10 months ago
Thanks!! More attention will surely TRANSFORM this channel! 😂
@TP-ct7qm • 11 months ago
Awesome video! This (together with the last two videos) is one of the best explanations of Transformers I've seen. Thanks and keep it up!
@karthikdulam9830 • 13 days ago
This video needs to be more popular!
@AI_ML_DL_LLM • 11 months ago
The gist of this video is at 4:29. Great job, thanks!
@JasFox420 • 11 months ago
Dude, you are a treasure, keep it up!
@amoghjain • 11 months ago
Wowww! What a great explanation! It helps knit so many individual concepts together into one cohesive knowledge base!! Thanks a lot for making this video and all the animations!
@avb_fj • 11 months ago
Thanks!!
@ZenTheSapo • a month ago
Incredible series, helped me a lot.
@IdPreferNot1 • 6 months ago
Came here after 3Blue1Brown sparked my interest. It's clear you've got a great explanation style... plus you were earlier ;). Hope your channel's following grows to match your outstanding quality.
@avb_fj • 6 months ago
Welcome! Thanks a lot for the shoutout!
@matiasalonso6430 • 11 months ago
Congrats!! Awesome channel!
@avb_fj • 11 months ago
Thanks!
@SuebpongPruttipattanapong • 27 days ago
Thank you so much!
@GabrielAnguitaVeas • 7 months ago
Thank you!
@user-yr9sf2yr3n • a month ago
Great, great video.
@sahhaf1234 • 10 months ago
I really don't want to leave you just one like. Instead, I want to leave you one hundred likes... Unfortunately, Google limits me to one...
@avb_fj • 10 months ago
Thanks!! Super appreciated!
@user-wm8xr4bz3b • 5 months ago
At 2:33, you mentioned that self-attention is more biased, but at 2:54 you also mentioned that self-attention reduces inductive bias?? Sorry, but I'm a bit confused.
@avb_fj • 5 months ago
Self-attention indeed reduces inductive bias and adopts a more general learning framework. At 2:33, I am posing the question "Is self-attention more general or more biased?", and then I continue with "I'll argue that self-attention is not only more general than CNNs and RNNs but even more general than MLP layers".
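To make the "less inductive bias" point concrete, here is a minimal single-head self-attention sketch in NumPy (illustrative only; the layer sizes and weight names below are made up, not taken from the video). The key detail is that the token-mixing weights are computed from the input itself, instead of being fixed by the architecture as in an MLP or restricted to a local window as in a CNN.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_head). Hypothetical names/shapes."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # The attention matrix is a function of the input X, so the "connectivity"
    # between tokens is data-dependent rather than hard-wired by the architecture.
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len)
    A = softmax(scores, axis=-1)              # each row sums to 1
    return A @ V                              # (seq_len, d_head)

# Toy example with hypothetical sizes: 4 tokens, d_model=8, d_head=4.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # -> (4, 4)
```

Contrast this with an MLP, where the mixing pattern is baked into a fixed weight matrix, or a CNN, where it is constrained to a sliding local neighborhood.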
@axe863 • a month ago
Should have a hybrid structure ...... 😅
@avb_fj • a month ago
This is true. A lot of computer vision research has now moved toward adding elements of CNNs/inductive bias back into the Vision Transformer architecture, for example Swin Transformers.