Self Attention in Transformers | Deep Learning | Simple Explanation with Code!

54,760 views

CampusX

A day ago

Comments: 427
@ankitbhatia6736 10 months ago
This is a gem; I don't know if even the authors of "Attention Is All You Need" could explain this concept with such fluidity. Hats off 😊.
@ajinkyamohite3836 10 months ago
But sir, what will the initial word embedding values be?
@ShubhamSingh-iq5kj 9 months ago
@@ajinkyamohite3836 They can be computed with word2vec or GloVe. Read about those and you will understand.
@Findout1882 7 months ago
@@ajinkyamohite3836 Random initialization.
@mehulsuthar7554 6 months ago
@@ajinkyamohite3836 That will be some static embedding, like GloVe, fastText, or your own trained embedding.
@SyedAkrama-k4x 3 months ago
@@ajinkyamohite3836 When training BERT, the initial token embeddings (which map tokens to vectors) are not static but are learned and updated during the training process. These embeddings, along with positional encodings, are trained simultaneously with the self-attention mechanisms of the model. Therefore, BERT does not use pre-trained static embeddings for the initial representation; instead, it learns these embeddings as part of the overall training, which includes learning contextual relationships through the self-attention layers.
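A note to make the two answers above concrete: a word2vec/GloVe lookup hands self-attention frozen vectors, while BERT-style models learn the embedding table jointly with the attention layers. Below is a minimal PyTorch sketch under those assumptions; the sizes and token ids are illustrative, not the video's code.

```python
import torch
import torch.nn as nn

vocab_size, d_model, max_len = 30522, 768, 512  # BERT-base-like sizes (illustrative)

tok_emb = nn.Embedding(vocab_size, d_model)  # learned token-embedding table
pos_emb = nn.Embedding(max_len, d_model)     # learned positional-embedding table

token_ids = torch.tensor([[101, 2023, 2003, 102]])        # toy ids, e.g. [CLS] ... [SEP]
positions = torch.arange(token_ids.size(1)).unsqueeze(0)  # positions 0, 1, 2, 3
x = tok_emb(token_ids) + pos_emb(positions)               # input to the attention layers

# Both tables require gradients, so the same backward pass that trains the
# self-attention weights also updates these embeddings.
print(x.shape, tok_emb.weight.requires_grad)  # torch.Size([1, 4, 768]) True
```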
@rohitdahiya6697 10 months ago
The self-attention mechanism, best explained by the best teacher in the world. This makes you unique among tutors around the world ❤
@saipoojalakkoju6009 10 months ago
This lecture is an absolute masterpiece.
@S_Adhikari-oo7 10 months ago
I don't know, sir, why it took you 14 days... the topic was very simple.. ❤❤
@dnyaneshsable1512 9 months ago
Arre bhai, it has many parts and he made this one video; that's why he said that. And anyway, when sir explains, any topic feels easy.
@vidhikumar1664 9 months ago
Bhai, have some patience; if it seemed that easy to you, then learn it yourself. Let Nitish do the work. It is through Nitish sir's efforts that I'm able to understand these topics so well.
@vidhikumar1664 9 months ago
Achha, I got your point XD. Thanks! I only just saw the part where sir asks us to say that 😂😂
@MaithilGuy 4 months ago
@@dnyaneshsable1512 you definitely didn't watch the video 😂😂
@mohammadriyaz5586 3 months ago
@@MaithilGuy I think he started from 3:00
@d1pranjal 6 months ago
This video is, hands down, THE BEST video on self-attention in transformers. I have been following you and learning from you for a long time. We are all truly blessed to have a mentor like you. 🙏
@sangram7153 10 months ago
Best teacher ever! 😊
@arri5812 10 months ago
🌟 **Nitish Singh, You're a Beacon of Clarity!** 🌟 Dear Nitish Singh, I am utterly captivated by your enlightening video on self-attention in transformers! Your ability to distill complex concepts into digestible nuggets of wisdom is nothing short of magical. 🪄✨ As I watched your tutorial, I felt like I was sipping from the fountain of knowledge itself. Your lucid explanations flowed seamlessly, unraveling the intricacies of self-attention with grace. 📚🔍 The way you effortlessly wove together the threads of mathematics, intuition, and practical application left me in awe. Itna asaan tha yeh! (It was that easy!) 🤯🙌 Nitish, you're not just an educator; you're a sorcerer who conjures understanding out of thin air. Your passion for teaching radiates through the screen, igniting curiosity in every viewer. 🌟🔥 Thank you, from the depths of my neural network, for sharing your brilliance. You've made self-attention feel like a warm hug from an old friend. 🤗🧠 So here's to you, Nitish Singh, the maestro of clarity, the guru of transformers, and the architect of "itna asaan tha yeh." 🎩👏 May your tutorials continue to illuminate minds, one self-attention head at a time! 🌟🤓 With heartfelt gratitude, Your Devoted Learner 🙏
@shubham..1372 2 months ago
Bhai, you've practically written an application to the principal 😁
@preetysingh7672 42 minutes ago
No doubt, the way you explain is awesome, but what I love most is that you include the intuition of WHY (the hardest part for self-learners), which you could have skipped to only explain WHAT is done. Your superb story-telling approach keeps me hooked through your hour-long videos, and yet I remember them. Any word of gratitude would be an understatement for your service. Hats off to you, sir 🙏
@ariousvinx 10 months ago
The last videos were amazing! I have been waiting for the next one. The hard work you put in, explaining in such depth... Thank you ❤
@cool12345687 10 months ago
Wow... God bless you. I have never seen someone explain this process so well.
@upskillwithchetan 3 days ago
Amazing explanation, Nitish sir! Even Stanford University professors cannot explain this, but you did it with such a nice approach. I have been trying to understand these "Transformers" since 2021; only today did I finally get it. I have read all the blogs and watched all the videos on YouTube, but every time I got lost in the middle of the theory. A huge salute to you. Thank you so much.
@akashprabhakar6353 2 months ago
I think every teacher first watches your video to gain clarity and then makes their own videos on YT. Awesome clarity, sir.
@NabidAlam360 10 months ago
Thank you so much for the video! You teach so well that I never want your tutorial videos to end! Please do not change this teaching style! Please continue uploading the other parts! And we really appreciate the effort you put into making the tutorial!
@somdubey5436 10 months ago
A very nice way to make anyone understand what self-attention is. Your examples make the core concepts really simple and intuitive to grasp. Thanks a lot :)
@khushipatel2574 10 months ago
I am deeply grateful for the incredible deep-learning video you shared with us. It brought me immense joy and excitement to watch, and I can't thank you enough for taking the time to create and share such valuable content. The clarity and depth of your explanations have greatly enriched my understanding and passion for this subject.
@nileshgupta543 10 months ago
Outstanding explanation of such a concept!!!
@myself4024 4 months ago
🎯 Key points for quick navigation:

00:00 *🎬 Introduction to Self Attention* - Introduction to the topic of self-attention, the importance of the video in understanding generative AI, and why self-attention is central to the transformer architecture.
02:29 *📝 Recap of Previous Video* - Summary of the previous video on self-attention, word representation techniques in NLP, and the limitations of static word embeddings.
04:31 *🔍 Contextual Embeddings and Their Importance* - Introduction to contextual embeddings, the problem with static embeddings shown through examples, and the need for dynamic, contextual embeddings for accurate word representation.
05:24 *🔄 How Self-Attention Solves the Problem* - How self-attention converts static embeddings into dynamic, contextual embeddings, a step-by-step walkthrough with an example sentence, and an overview of the calculations involved.
06:27 *🧩 Overview of Contextual Embeddings* - The importance of contextual embeddings, static vs. contextual embeddings, and how contextual embeddings change with context.
07:30 *💡 Detailed Example of Contextual Embeddings* - Example sentences illustrating contextual embeddings, how the same word can have different meanings in different contexts, and breaking words into parts to understand their contextual meaning.
09:52 *🧮 Mathematical Representation of Contextual Embeddings* - Transition from word representations to embeddings, embedding vectors and their combinations, and the formulas representing the new context-based embeddings.
12:36 *🔢 Calculation and Weighting in Contextual Embeddings* - The weighting system in contextual embeddings, how similarity scores enter the calculation, and a detailed breakdown of the formula and its components.
14:05 *🧠 Concept of Similarity in Embeddings* - Similarity between word embeddings, how similarity scores (dot products) represent relationships between words, and calculating similarity with worked examples.
17:25 *🔍 Visual Representation of Embeddings* - Visualizing contextual embeddings, generating new embeddings using dot products, and the steps to represent embeddings visually and calculate weights.
19:39 *🎛️ Normalization and Softmax in Embeddings* - The importance of normalization in machine learning, the softmax function for normalizing dot products, and applying softmax to obtain normalized weights.
21:53 *📊 Generating New Contextual Embeddings* - Generating new embeddings by multiplying vectors, combining embeddings, and calculating weighted sums for the new word embeddings.
24:29 *⚙️ Reviewing and Parallelizing the Approach* - Review of the contextual-embedding method, the parallel nature of its operations, and parallel computation using linear algebra.
26:15 *🖥️ Matrix Operations for Efficiency* - Matrix representation of the embedding calculations, matrix multiplication for parallel computation, and the efficiency benefits of parallelizing.
28:47 *⏩ Advantages and Disadvantages of Parallel Operations* - The speed advantage of parallel operations, the potential loss of sequential information, and faster training despite the drawback.
29:43 *🔍 No Learning Parameters in Basic Approach* - The absence of learnable parameters in the first-principles approach, the operations (dot product and softmax) involving no parameters, and the resulting lack of task-specific learning.
31:07 *🌐 Example: Machine Translation Task* - An English-to-Hindi machine translation task, the limitations of general contextual embeddings for specific tasks, illustrated by "piece of cake" translating literally as "cake's piece" instead of "very easy."
33:36 *🧩 Importance of Task-Specific Contextual Embeddings* - Why task-specific contextual embeddings matter, issues with general embeddings for idioms, and translating "break a leg" correctly with task-specific embeddings.
37:03 *📚 Need for Learnable Parameters* - The importance of learnable parameters for improving contextual embeddings, how they let the model adapt to data, and the benefit of task-specific learning for performance.
39:14 *🔄 Summary of Discussion* - Recap of the problem and the solution approach, the role of embeddings in capturing semantic meaning, and the need for contextual embeddings to capture dynamic sentence usage.
41:01 *🎯 Task-Specific Contextual Embeddings* - Task-specific contextual embeddings over general ones, the realization that simple self-attention lacks learnable parameters, and the plan to introduce them.
42:22 *🧠 Introducing Learnable Parameters* - Identifying stages of the process where weights and biases can be introduced, focusing on logical places for learnable parameters, and making improvements without forcing them.
43:44 *🌐 Roles of Word Embeddings* - The different roles each word embedding plays, examples of how each embedding (money, bank, grows) serves multiple roles, and contextual embeddings described in terms of query, key, and value.
47:19 *📖 Analogy with Dictionary in Computer Science* - The dictionary analogy for query, key, and value, comparing embeddings with dictionary keys and values, and a Python dictionary example.
50:10 *🧐 Drawbacks and Ideal Approach* - Limitations of using the same vector for query, key, and value, the suggestion of separate vectors for each role, and the separation-of-concerns argument for the model's effectiveness.
51:27 *📈 Transforming Vectors for Better Roles* - Why one embedding vector should be transformed into three distinct vectors, why separate query, key, and value vectors improve performance, and an analogy to build intuition.
53:14 *📚 Real-Life Analogy with a Writer* - An author on a matrimonial website, the author's autobiography compared with embedding vectors, and the parallel processes of creating a profile, searching, and matching on the website.
55:29 *🔍 Drawing Parallels with Self-Attention Model* - Mapping the matrimonial-website process onto self-attention, the query, key, and value roles in the example, and why a single embedding for all roles is less effective than distinct vectors.
58:29 *🔍 Detailed Example of Using Autobiography* - Why uploading a full autobiography is impractical for searching or matching, the importance of concise, relevant information, and how too much information becomes counterproductive.
01:00:05 *📄 Optimizing Embedding Vectors* - Why a single embedding vector for all roles is inefficient, creating separate vectors optimized for profile, search, and match, and tailoring the autobiography for different uses as an analogy for optimizing embedding vectors.
01:03:06 *📊 Using Data to Optimize Profiles* - How data can refine and optimize profiles, modifying profile information based on feedback and analysis, and continuous improvement using data-driven decisions.
01:04:57 *📊 Data-Driven Profile Optimization* - Using data to refine the profile, adjusting profile details based on the responses received, and continuous learning from data feedback.
01:07:11 *🔄 Deriving Vectors from Embeddings* - Transforming word embeddings into three vectors (query, key, and value), creating contextual embeddings from these vectors, and how each word in a sentence yields its own query, key, and value vectors.
01:10:20 *🧮 Mathematical Operations for Vector Creation* - Creating new vectors from a given embedding vector, the mathematical operations involved, and the linear-algebra methods used to derive new vectors from an existing one.
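The chapter list above walks from a parameter-free attention (dot products plus softmax) to learnable query/key/value projections. Here is a minimal NumPy sketch of both stages, assuming toy 4-dimensional embeddings for the sentence "money bank grows"; all numbers and names are illustrative, not the video's code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=axis, keepdims=True)

# Toy static embeddings: 3 words ("money", "bank", "grows"), 4 dimensions each.
E = np.array([[1.0, 0.2, 0.0, 0.5],
              [0.8, 0.1, 0.3, 0.4],
              [0.0, 0.9, 0.7, 0.1]])

# Stage 1 (first principles, no learnable parameters):
scores = E @ E.T           # pairwise dot-product similarities (3x3)
weights = softmax(scores)  # each row sums to 1
contextual = weights @ E   # each row is a weighted sum of all word embeddings

# Stage 2 (learnable parameters): project E into query/key/value spaces.
# W_q, W_k, W_v are random here; in a real model they are learned from data.
d_model, d_k = 4, 4
rng = np.random.default_rng(0)
W_q = rng.normal(size=(d_model, d_k))
W_k = rng.normal(size=(d_model, d_k))
W_v = rng.normal(size=(d_model, d_k))
Q, K, V = E @ W_q, E @ W_k, E @ W_v
attn = softmax(Q @ K.T / np.sqrt(d_k))  # scaled dot-product attention
contextual_learned = attn @ V           # task-adaptable contextual embeddings

print(contextual.shape, contextual_learned.shape)  # (3, 4) (3, 4)
```

Stage 1 has nothing to train, which is exactly the limitation the 29:43 and 37:03 chapters discuss; the W_q, W_k, W_v matrices in stage 2 are the learnable parameters that make the embeddings task-specific.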
@pavangoyal6840 9 months ago
You are an amazing teacher who takes ownership of teaching his students. The world needs more teachers like you :)
@spandanmaity8352 1 month ago
That Jeevansathi example was out of this world!! At first, I thought it was a brand collaboration. BTW, I cannot thank you enough for these videos.
@sudhanvasavyasachi2525 1 month ago
This is, and will always be, the best lecture on self-attention. I am sure no one else can give such an intuitive and lucid explanation of this topic. Thank you, sir.
@awe-nir-baan 2 months ago
I'm honestly at a loss for words. How is it even possible to simplify concepts like this? You're a genius! Kudos!
@rahulgaikwad5058 10 months ago
It was an amazing video; I learnt a lot from it. Keep up the good work, God bless you. Thank you.
@sowmyaraoch 4 months ago
Nitish sir, I cannot explain how grateful I am for your videos. The amount of effort you put into maintaining the quality of the information you provide is commendable. Huge respect for everything you do. I come from a non-CS background and had just started learning Python. Then I chose to do my masters in Data Science and came across your ML playlist 2 years ago. That playlist has the best intuition and foundations a person needs to get into the field of Data Science. I've learned more from your channel than from my masters here in the US. I have now finished your deep learning playlist twice. I constantly go through your videos. I even enrolled in your computer vision course from your website. I'm extremely grateful to have a person like you explaining everything in depth and making it so easy. You set the foundation so right that we never had to refer to other content. Please keep sharing your knowledge. Your level of knowledge and understanding is what every student in the data science community needs. If I got interested in the field of Data Science, it is only because of you. And a small request! You've mentioned you'll release an NLP playlist soon and that it will be taught by you. I have really been waiting for that playlist for a while now. Please release it. Lots of respect!!!
@mohittiwari7379 12 days ago
With every video I watch, I don't have the words to say thanks. Thank you, sir, for such videos and playlists.
@harshdewangan725 17 hours ago
The best explanation I have seen, on the internet or offline. Thanks a lot.
@Tb-e4u 9 months ago
Bhai, it took me 2-3 years to understand this... finally, finally, that eureka moment... thanks a lot, bro... keep going...
@Nithya-r8l 1 month ago
Hi, is this an algorithm? Can you tell me what the mechanism is? We have to put an algorithm in our report for an LLM; can you tell me what I should put in the algorithm section?
@mentalgaming2739 9 months ago
Your explanations of the self-attention mechanism are truly remarkable. I came across your channel randomly when I started my data science journey in August 2023, and since then I have been learning from your content. Your ability to articulate complex topics is exceptional, and your explanations are incredibly insightful. I greatly appreciate the depth of context you provide in your tutorials, which sets your content apart from others. Your dedication to helping learners understand difficult concepts is admirable. Much love and gratitude from Pakistan. You are a true gem in the field of data science education. Whenever someone needs help with any topic in data science, I always recommend your channel for the best intuition and explanations. Also, one important thing: "I don't know, sir, why it took you 14 days... the topic was so simple."
@shivombhargava2166 10 months ago
No words for this video. This content is unmatched; no one even comes close to this quality. Keep it up, sir!!
@kishanpandey2912 4 months ago
This is some crazy level of simplification. First-principles thinking at its peak. It is very hard to find someone who simplifies concepts and builds understanding piece by piece like you always do in your videos. I request you to please do a masterclass on how you learn any new concept and how we can modify our learning methodology, inspired by your approach.
@umangchaudhary9901 1 month ago
Sir, I think you are underrated; people should watch your videos to understand the concepts from scratch. Really amazing. Hats off to you... Thanks for an amazing video with an excellent explanation.
@AbhijeetKumar-cj4fe 10 months ago
I am at the 2:29 timestamp and I am already sure that you have given us the best explanation. That's how much faith we have in you, sir.
@SipahEScience 10 months ago
Hi Nitish, I am from Pakistan, studying in the US, and I must say that you are THE BEST teacher in the world. Thanks a lot, brother. God bless you.
@ankanmazumdar5000 10 months ago
Where are you? I'm in Chicago.
@pushkarmandot4426 26 days ago
This is truly exceptional. I studied in Kota for IIT prep back in the days when Kota was the only city known for IIT prep. Watching these videos, I am reminded of the many teachers there who taught with such strong dedication and simple explanations. I would love to partner with you on one of the big tech projects I am working on.
@harsh07g-h3m 10 months ago
Sir ji, you are great ❤❤❤ glory to you!
@VASUNDHARABHATI-r2l 11 days ago
Best Teacher for Deep Learning
@AsfarDataScientist 3 months ago
Oh, I just remembered: sir had said that if you understood it well, you should say this: "I don't know, sir, why it took you 14 days... the topic was very simple." Thanks, sir ji, for providing this much detail free of cost. Love from Pakistan.
@sagarbhagwani7193 10 months ago
Appreciate all your efforts, sir ❤
@anshkapoor4990 5 months ago
14 days, damn!! Seriously, thank you, sir; I finished this playlist in just about a month... thank you so, so much for teaching it so well 🙏🏻 :)
@T3NS0R 10 months ago
I knew enough about transformers to handle the things I needed, but I never understood self-attention and the rest in this much detail... such a good video 🛐
@abhishek171278 2 months ago
Sir, your effort shows in your videos; you explain such tough concepts with such ease... Millions of likes from my side.
@oo_wais 8 months ago
I have been trying to clarify the attention concept for the last few days, and I have to admit this is one of the most valuable and informative videos I have found on the topic on YouTube.
@haseebmohammed3728 2 months ago
This is next level; it's like explaining the theory of relativity to 5th-class students with utmost clarity, and in the end the students actually understand it. People will appreciate this once they read the "Attention Is All You Need" paper. I am out of words.
@DeyozRayamajhi 7 months ago
Bhaisaab, you are one of the greatest data science teachers.
@nikunjdeeep 5 months ago
I have just started this video, but I trust my mentor; this will be the best thing on transformers...
@nagendraharish6026 9 months ago
Hidden gem. Nitish sir, you have no idea how many students have benefited from your videos. Hats off; I'd love to subscribe if there are any courses related to deep learning or NLP.
@ankitgupta1806 9 months ago
To be honest, I have studied the same material in other videos, but only here did I get a proper understanding. Hats off, Nitish.
@shamshadhussainsaifi6476 9 days ago
You are a gem, sir; I have never come across a better explanation than this anywhere. Thanks for your effort, sir 🤘👏
@ansh-t8e 3 months ago
Hello Nitish sir. I was, and still am, going through a tough phase in my life. But after completing the ML playlist, and now being on the DL playlist, I can say that your content is GOLD. In another 3-4 years, this content will be the bible of ML and DL for students in INDIA. Thank you, sir. You teach really well. You are like an elder brother to us. Thank you so much.
@harshitasingh902 10 months ago
I really can't believe such a complex topic can be explained and taught so easily... Sir is not only a good data scientist but also an excellent teacher. Lots of respect for you, sir.
@WIN_1306 5 months ago
Are you single?
@harshitasingh902 5 months ago
@@WIN_1306 Damn, single... that's why I am going through this course 😂
@ali75988 9 months ago
Your words regarding the demand for GANs are so true. My background is in mechanical engineering, and even at this stage I have seen 4-5 graduates working on MS and PhD theses on GANs at one of the top-ranked universities in Korea.
@i-FaizanulHaq 2 days ago
Sir, you are a king; today, for the first time, I truly understood what self-attention is.
@bharatarora2006 4 months ago
In the past, I had gone through this topic multiple times but never got insight into the "why" part... but after this video I understand it conceptually, and now I can say it's easy. Superb lecture... Thanks a ton, CampusX 👍
@sagemaker 10 months ago
I am paying full Attention to your video. Very well made :)
@rutvikkapuriya2033 9 months ago
Hats off for this kind of explanation 😍😍😍
@zeronpz 20 days ago
Great explanation of query, key, and value. I never understood it with this much clarity. Kudos 🙌🙌
@haz5248 10 months ago
Hands down the simplest and most comprehensible explanation.
@ravindrasinghkushwaha1803 3 months ago
I had been struggling to understand this for the last year; now I really, really understand the self-attention mechanism completely. Thank you 🙏 😊
@adityanagodra 10 months ago
It was the simplest and greatest explanation, better than Jay Alammar's blog and other YouTube videos... you should write a book, as there is none as simple as this.
@nikhileshnarkhede7784 6 months ago
The way you explain the topic is epic. Keep it up, and never stop, because the way you explain really helps in building intuition, not only for me but for others too, and that will definitely bring change. Thank you 🙏
@swarnpriyaswarn 8 months ago
Goodness!!!! Until now I had not seen such a good explanation of this... I have no words to express my gratitude. Thanks a ton.
@bimalpatrapatra7742 10 months ago
Super mind-blowing class, sir; it's really heart-touching 😮
@AlAmin-xy5ff 10 months ago
The best video on the self-attention mechanism that I've ever seen 💌💌💌💌💌
@ShubhamGupta-zt6me 5 months ago
Just one word: WOW WOW WOW!!! So intuitive, and the explanation is top-notch. Hats off to you, Nitish.
@akshitnaranje7426 10 months ago
Very good lecture; waiting for the other parts... please release them as soon as possible.
@amitabhraj6283 2 months ago
Awesome explanation! Clear, concise and super helpful. Thanks for breaking it down so well!
@ravikanur 6 months ago
One of the best explanations by Nitish sir. We are lucky to have teachers like you, sir.
@electricalengineer5540 2 months ago
What a lecture; I am truly mesmerized.
@basabsaha6370 8 months ago
I'm literally spellbound after this explanation... I have watched hundreds of foreign YouTube channels for this attention mechanism... they are nowhere close to you... Sir, you are really amazing... please, please, please continue this series... and teach everything up to LLMs, GPT, Llama 2 😊... I'm very, very eager to learn from you.
@sachink9102 6 months ago
OMG... THE BEST video on self-attention in transformers.
@tejassahu9859 8 months ago
Arre bhai bhai bhai, what a video you've made, wow, what fun. I am absolutely overjoyed. The best YouTube video I have ever seen. What a man you are. No, you have gone beyond being a man. WAAAHHHHHHH
@vimalshrivastava6586 10 months ago
This lecture is a demonstration of how a great teacher can make a very complicated topic so easy. Thank you so much..!!!
@kayyalapavankumar2711 10 months ago
Sir, please keep uploading the further videos as soon as possible.
@mallemoinagurudarpanyadav4937 10 months ago
Great lecture 🎉
@darshanv1748 9 months ago
You are truly one of the most amazing teachers/data scientists. You truly have an amazing grasp of the concepts.
@UnbeaX 7 months ago
What a smooth learning experience!! I got a lot of value from this... One suggestion: when I searched to learn this, I searched "transformer", and this video did not rank. So, sir, a suggestion: see which keywords are used most and add them to the title... an easy approach is just to see what your competitors are using as titles.
@mohitrathod2067 2 months ago
Thanks for this amazing explanation! You made self-attention so easy to understand in just one watch. Really appreciate your clarity and effort!🙌🙌♥
@dipankarroy3524 2 months ago
I tried to understand the self-attention mechanism from other resources but didn't get it properly. After watching this video, it's now clear; it's just a piece of cake. Thanks, sir 🙏
@ankitkumargupta-iitb 9 months ago
Some of the best content on self-attention so far.
@user-hv2uu8vp9m 6 months ago
Great video, and what a great example at 52:24 👏👏
@rajsuds 7 months ago
You deserve a national award... I am truly spellbound! 🙏🙏
@aagamshah3773 2 months ago
Sir, you are more than a gem... Thank you for explaining it to us in such a simple manner 🙏😃
@mimansamaheshwari4664 7 months ago
Lectures I can never get enough of. The best content I have discovered so far. Kudos to you, Nitish sir. Life is easy when you are the mentor ♥♥♥
@aienthu2071 8 months ago
I had a huge doubt about the Q, K, V part; now it is clarified. Please continue with the transformer architecture, especially the mathematical parts, e.g., positional embeddings, multi-headed attention, etc. You are the beacon of education in data science.
@akashdutta6211 10 months ago
You explain such a difficult concept so simply. Even after watching NPTEL and other resources here and there, I had so many doubts; you have cleared them all in one video.
@subhankarghosh1233 8 months ago
Sir, from the heart... you are a blessing to the community. I really respect your hard work and dedication. You have the aura to change someone's life journey and perspective. Pranam to you and your whole team.
@lfjv 13 days ago
I could 100% relate to the matrimonial-site vectors 🤣🤣🤣. You are an excellent teacher.
@planetforengineers7176 1 month ago
Amazing, sir, thank you so much ❣❤❤❤
@himanshukale9684 10 months ago
Best explanation to date ❤
@advaitdanade7538 4 months ago
Right at the start of the video, I can already say that I will understand it, because Nitish sir is teaching it 🤩
@ggggdyeye 10 months ago
You made the self-attention model very simple for all of us to learn. Thank you, sir. I appreciate your work.
@drrabiairfan993 4 months ago
The best explanation of self-attention anyone has ever given... I must salute your dedication.
@Deepak-ip1se 6 months ago
One of the best examples to help us understand such a hard concept!! Hats off to you, sir 👏👏
@apnaadda73 9 months ago
I don't know whether you will acknowledge this or not, but East or West, you are the BEST ❤❤❤
@neilansh 5 months ago
I have watched a lot of videos on this topic, but this one is the best, sir 💯 Thanks a lot, sir.
@MALIKXGAMING804 10 months ago
Perfect. What a masterpiece: a high level of understanding and a wonderful explanation.
@chandank5266 10 months ago
I knew that you would make it simple... but at this level... man, I wasn't expecting that... outstanding explanation, sir 🙇‍♂
@fact_ful 10 months ago
Teaching at its peak.
@izainonline 10 months ago
Much-awaited video, thumbs up for your work. I want to clear up my concepts: 1. How are the probabilities assigned for context, i.e., why 0.7 and not 0.6? 2. With dynamic context, can it check a database so that "River Bank" is used as a single unit, not "River" and "Bank" as separate words?
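On question 1 above: weights like 0.7 are not assigned by hand; they fall out of a softmax over dot-product similarity scores, so changing any one score shifts all the weights. A tiny illustrative sketch (toy numbers, not the video's code) follows. On question 2: there is no database lookup; "Bank" keeps its own token, and the attention weights are what let it absorb the "River" context.

```python
import numpy as np

# Toy dot-product scores of one query word against the 3 words in a sentence.
scores = np.array([2.0, 1.2, 0.4])
weights = np.exp(scores) / np.exp(scores).sum()  # softmax
print(weights.round(2))  # [0.61 0.27 0.12]; nudge any score and all weights shift
```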
@samirchauhan6219 10 months ago
This is the best video I have ever watched 🎉🎉🎉
@FOR_YOU-19-b7g 4 months ago
Teaching is an art and Nitish sir is an artist.
@AbdulMajeeth2502 7 months ago
Thanks a lot