Scaled Dot Product Attention | Why do we scale Self Attention?

32,882 views

CampusX

1 day ago

Comments: 303
@PavanK1234 11 months ago
In the future, this playlist will be the most viewed playlist for deep learning.
@mentalgaming2739 10 months ago
Yes
@ashutoshpatidar3288 9 months ago
INDEED
@Amanullah-wy3ur 7 months ago
yes
@BJCR4HVJR4-sc8xj 5 months ago
Easily; waiting for that time.
@shubhamgattani5357 3 months ago
No doubt
@KumR 11 months ago
No one else can explain this concept this way. And if anyone does... he/she follows you. Please don't shorten the content. We need this level.
@SamyukthaKaditham 4 months ago
You are like a research book, covering every single detail with a lot of patience. I always end up at your videos.. I feel like each of your videos is a combination of many books.. thank you so much for sharing your knowledge, sir..
@mayank7275 4 months ago
As I keep watching your videos, blessings just keep pouring from my heart. Sir, I never comment, but on your videos I can't hold back. This is hard work.
@gopeshsahu 11 months ago
Nitish!!! .. truly awesome!! .. outstanding!! ... remarkable!! ... It is rare to find a gem like you who not only illuminates the intricate world of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) but also makes it accessible to novices and professionals alike. Your explanation of transformers with self-attention mechanisms is a standout. It's a concept that lies at the heart of many modern AI breakthroughs. This video is an out-of-this-world explanation of why to divide QK^T by the root of d_k .. no one else explains at this level of detail .. truly, truly, truly hats-off ... outstanding beyond expectation ... Keep going, Nitish, and keep up this momentum!!
@prabhutvakakkar3272 1 day ago
Sir, please keep teaching just like this. When you teach in depth, I understand every single thing; if you don't teach in depth, then something or other gets left out that doesn't make sense. Thank you very much, sir. Because of you, the data science journey has become quite enjoyable and easy.
@aj_ai 11 months ago
At our NIT, everyone follows your channel for Data Science content, even the professors. Thanks, sir 🖤
@ashutoshpatidar3288 9 months ago
Which NIT are you from, btw?
@linuxbee 1 month ago
All Indians are blessed and fortunate to have a teacher like you. Your teaching in Hindi, especially, is a saviour for those who get stuck on the jargon used in the field of AI/ML.
@LMessii10 6 months ago
By far the best explanation anywhere. I can't believe how great you are as a teacher; teaching things from such a fundamental level with this astonishing clarity is a God-given gift. And Nitish sir, you are God's gift to people like us. I am utterly in awe of your acumen, and more so of your teaching skills. Not every great mind is a good teacher; you are a great mind AND a great teacher. Thank you for everything 🙏
@_AshutoshRanjan 3 months ago
So in-depth, I almost feel no need for any other books to gain deeper knowledge. Thank you so much for this content.
@pasunurukumar 1 month ago
I'm studying a master's in AI in the USA; your way of explaining is far better than any university lecture.
@fainted_world 11 months ago
I always wanted a teacher who could explain hard concepts simply and in detail, and in you I have that kind of teacher. Thank you, sir, for this awesome and detailed video.
@ankushmaheshwari6915 2 months ago
Thank you so much for this playlist and for making it free!! Even my professor in my master's doesn't teach this well. The amount of research you do and the time you take out are really commendable.
@Sam-nn3en 3 months ago
Every single thing needs to be visited in this much detail; it becomes so simple to understand. Please keep on making very detailed videos where we learn everything from first principles. I could watch these videos for hours and hours.
@myself4024 6 months ago
🎯 Key points for quick navigation:

00:00 *🎥 Introduction and Overview*
- Introduction to the video series and continuation of the self-attention concept.
- Emphasis on explaining the scaling concept in self-attention.
- Mention of the conceptual depth and importance of understanding this concept.

01:05 *🧠 Recap of Previous Video*
- Summary of the previous video on creating self-attention from first principles.
- Explanation of generating embeddings for words and creating query, key, and value matrices.
- Description of the dot product operations to obtain query, key, and value vectors.

03:18 *🔍 Applying Self-Attention*
- Steps to apply self-attention using query, key, and value matrices.
- Detailed process of dot product operations and applying softmax.
- Final calculation of the contextual embedding.

04:46 *📊 Mathematical Formulation*
- Compact mathematical representation of the self-attention process.
- Explanation of transposing the key matrix and applying softmax.
- Final formula summarizing the attention calculation.

05:01 *📝 Comparison with Original Paper*
- Discussion of the formulation developed versus the original "Attention Is All You Need" paper.
- Highlighting the difference: scaling the operation by the square root of the key dimension (d_k).
- Introduction to the concept of scaled dot-product attention and its importance.

06:49 *🔄 Need for Scaling in Attention*
- Explanation of the need to scale in attention to avoid unstable gradients.
- Introduction to the concept of d_k (dimension of the key vector).

09:55 *📐 Dimension Calculation*
- Detailed explanation of calculating the dimension of key vectors.
- Example scenarios to simplify understanding: dimensions could be 3, 10, or 512.
- How embedding dimensions and matrix shapes affect the resulting dimensions.

11:06 *🧮 Dot Product Nature*
- Explanation of why scaling by the square root of d_k is necessary, linked to the nature of the dot product.
- Discussion of how the dot product operates between multiple vectors behind the scenes.
- How matrix dot products consist of multiple vector dot products.

13:45 *📊 Variance in Dot Products*
- Explanation of the variance in dot products and its dependence on vector dimensions.
- Calculation of mean and variance over multiple dot products.
- Low-dimensional vectors produce low variance; high-dimensional vectors produce high variance.

16:06 *🧮 Practical Examples*
- Comparison of variance in low-dimensional vs. high-dimensional vectors.
- Example of 2D and 3D vectors demonstrating variance differences.
- High-dimensional vectors show greater variance, leading to potential issues.

18:23 *🔬 Experimental Proof*
- Experiment demonstrating variance in dot products with varying vector dimensions.
- Histogram plots showing the variance spread for different dimensional vectors.
- Higher-dimensional vectors result in larger variance, illustrating the necessity of scaling.

22:00 *📈 High Variance Problem*
- Explaining why high variance in dot product calculations is problematic.
- High variance leads to significant differences in softmax outputs, creating large probability gaps.
- Larger numbers get much higher probabilities while smaller ones get very low probabilities, affecting training focus.

25:04 *🧮 Training Issues*
- High variance affects backpropagation in neural networks.
- Training focuses on correcting larger numbers and ignores smaller numbers, leading to vanishing gradient problems.
- Small gradients mean parameters do not update, hampering the training process.

26:15 *🏫 Classroom Analogy*
- Analogy of a classroom with students of varying heights to explain training issues.
- Taller students get more attention from the teacher, similar to larger numbers in training.
- A class with students of similar heights leads to better overall learning, just like balanced variance leads to better training.

28:18 *🔢 Reducing Variance*
- Discussing the importance of reducing variance in high-dimensional vectors for better training.
- High variance in vectors leads to extreme probabilities in softmax, causing focus on large values and ignoring small ones.
- The goal is to reduce variance so the training process distributes focus evenly.

30:21 *📏 Scaling for Variance Reduction*
- Describing the technique of scaling to reduce variance in matrices.
- Scaling the numbers in a matrix by a factor can reduce variance effectively.
- The key challenge is determining the appropriate scaling factor for optimal variance reduction.

32:35 *🔍 Understanding the Scaling Factor*
- Introducing the concept of a scaling factor to control variance.
- Explaining that the scaling factor needs careful consideration and mathematical understanding.
- Focusing on the first row of the matrix to simplify the problem, then applying the solution to the entire matrix.

35:52 *📊 Calculating Population Variance*
- Explanation of the need to calculate population variance instead of sample variance for accuracy.
- Describing the expected variance for potential new vector values.
- Emphasizing the importance of considering all possible values in the variance calculation.

38:02 *🧮 Variance with Increased Dimensions*
- Exploring the effects of increasing vector dimensions on variance.
- Demonstrating how adding dimensions increases variance.
- Establishing that variance increases linearly with dimensions.

42:02 *📈 Linear Relationship of Variance and Dimensions*
- Summarizing the linear relationship between dimension increase and variance.
- Showing that as dimensions increase, variance also increases proportionally.
- Confirming the mathematical quantification of variance growth with dimension expansion.

43:09 *📉 Maintaining Constant Variance*
- Explanation of maintaining constant variance across dimensions.
- Use of division by a specific factor to achieve consistent variance.
- Introduction of a mathematical rule to support the variance adjustment.

44:43 *🔢 Mathematical Rule Application*
- Detailed explanation of using a constant to scale and adjust variance.
- Calculations showing how dividing by the square root of the dimension maintains variance.
- Examples of applying the rule to different dimensions.

48:03 *🤔 Summary and Practical Application*
- Summary of the scaling process to maintain variance.
- Integration of the scaling step into the self-attention model.
- Final formula for calculating attention in transformers using the scaling factor.

Made with HARPA AI
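The variance experiment (18:23) and the final scaled formula (48:03) in the summary above are easy to reproduce. Below is a minimal NumPy sketch of both, assuming, as the video does, vector components drawn i.i.d. with mean 0 and variance 1; the dimensions (3, 100, 512), the sample count, and the example scores are illustrative choices, not the instructor's own code:

    import numpy as np

    rng = np.random.default_rng(0)

    # 1) Dot-product variance grows linearly with dimension d_k:
    #    Var(q . k) = d_k, so dividing by sqrt(d_k) restores variance ~1.
    for d_k in (3, 100, 512):
        q = rng.standard_normal((10_000, d_k))
        k = rng.standard_normal((10_000, d_k))
        dots = (q * k).sum(axis=1)                 # 10,000 sample dot products
        print(d_k, round(dots.var(), 1), round((dots / np.sqrt(d_k)).var(), 2))
        # prints variance ~3, ~100, ~512; after scaling, ~1 in every case

    # 2) High-variance scores saturate softmax into near one-hot weights;
    #    scaled scores keep the probability gaps moderate.
    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))  # subtract max for stability
        return e / e.sum(axis=-1, keepdims=True)

    s = np.array([1.0, 9.0, 3.0])                  # hypothetical raw scores
    print(softmax(s))                              # ~[0.000, 0.997, 0.002]
    print(softmax(s / np.sqrt(9)))                 # ~[0.058, 0.830, 0.112]

    # 3) Scaled dot-product attention as in the paper:
    #    Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    def scaled_dot_product_attention(Q, K, V):
        d_k = K.shape[-1]
        weights = softmax(Q @ K.T / np.sqrt(d_k))  # row-wise attention weights
        return weights @ V                         # contextual embeddings

The printed variances sit near 3, 100, and 512, confirming the linear growth the video quantifies, while the scaled column stays near 1 regardless of d_k.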
@HarisKid-b3b 24 days ago
I have never in my life seen a teaching method and explanations as detailed as yours. I am a class 9 student in Pakistan, and I have learned deep learning at the age of 14 because of your lovely and easy way of teaching.
@shamshadhussainsaifi6476 1 month ago
I can't explain in words how nicely he explained it. Thank you very much, sir, for putting in so much effort to simplify things, and that too free of cost.
@rahul-khichar-iitgn 9 days ago
Salute to the level of effort you are putting in for us.
@preetysingh7672 1 month ago
I know it takes you loads of effort to concretely illustrate even these little hidden concepts, that too with examples and in detail. To be honest, our minds used to ask for such details when we were kids, but the education system, with responses like "study only as much as was taught", "don't ask useless questions", "don't divert the topic", "don't act too smart", and "this won't come in the exam", sends curiosity into a coma. Both clarity on the topic and confidence go down. You make us able to think again and open our minds, setting an epitome of TRUE TEACHING. What you deliver in online mode isn't available anywhere else, even offline. I know I'm being selfish to so greatly appreciate such detailed content, since most of us could do without it, but with this you'll create legends like yourself..
@muhammadshafay7754 4 months ago
This is the pinnacle of teaching. Lots of respect from the other side of the border.
@awe-nir-baan 3 months ago
Your teaching delves deep into the root of each concept, offering detailed analysis that significantly broadens my understanding and knowledge. Thanks a lot for creating these videos meticulously and sharing the knowledge!
@Nova_Elementum 1 month ago
Just mind-boggling! I bow down to your dedication in teaching us in this superb way.
@imran_0_1_2 1 month ago
I am from Bangladesh. Love you, sir. I have never seen any video like this; it is very helpful for understanding everything. Keep going with this type of explanation and detail.
@priyanshusingh9737 1 month ago
Sir, you explained this in even more detail than was needed... but the concept became clear. You always give your best in explaining every topic. Thank you so much, sir.
@1111Shahad 7 months ago
I have seen many tutorials and explanations of transformers and their architecture. I have never seen a detailed explanation presented in such a crisp and precise way. Thanks, Nitish.
@jaiminjariwala5 10 months ago
WOW WOW WOW, THE BEST-EXPLAINED VIDEO I HAVE EVER WATCHED ON YOUTUBE! Honestly, your way of explanation is the best, sir! ♾🌟
@Shisuiii69 5 months ago
Absolutely brilliant, sir. Yours are the only lectures I enjoy studying in depth along with the maths.... Please keep making detailed videos like this on every topic you feel should be taught; millions of future AI engineers now depend on you. Love from 🇵🇰
@renoy29985 10 months ago
Wow!!! No words!! I can imagine the amount of effort, day in and day out, you have put in to gain this much knowledge. You are an inspiration!!
@asifafridi8654 12 days ago
This is really one of the most unique explanations, sir, and I am really proud of you, sir, for explaining all these difficult topics in Hindi, and in detail as well. Thank you so much, sir.
@DevamRajput07 2 months ago
Learned through your ML playlist, and now the Deep Learning one is about to finish. Thank you....
@JVSatyanarayana-n2o 4 months ago
You can see the passion of the teacher in his teaching, particularly for a concept which others ignore.
@vinayprasadtamta2019 2 months ago
Really amazed by your knowledge, effort, patience, and teaching style. You explained this concept so well that it looks so easy. I used to be annoyed by the transformer equation every time I saw it; now, thanks to you, it will look so easy. With your choice of Hindi as the medium, I think you are a true blessing to Indian students. Your explanation is world-class; perhaps you could think of dubbing these videos in English for better outreach. Your passion for teaching is amazing; being a teacher myself, I can understand how much effort you put into each video. True respect for you, dear, and thanks. And through your videos we can see how powerful teaching in one's native language can be. Best wishes.
@TheVarshita 7 months ago
It was an amazing explanation, so thorough and yet so simple to understand for such a complex topic. I have followed courses from NPTEL, Stanford, and Deep Learning, but this was still the smoothest explanation! Your content is highly underrated. I wish I had found your channel sooner! Thanks 🙂
@mrityunjaykumar2893 8 months ago
Hi @Nitish, all your videos on Transformers are truly remarkable. There are lots of videos on this topic on YouTube, but yours are GEMS. I came here just for the explanation, and deep down I am satisfied. Keep making these kinds of videos with real explanations; otherwise, these days everyone makes short videos just for views. Truly appreciating your hard work.
@PavanK1234 11 months ago
Thank you so much for restarting this playlist
@electricalengineer5540 3 months ago
Totally impressed. Even ChatGPT can't explain this well.
@pujamehta9755 8 months ago
Great explanation. Many coaching institute teachers are copying your content and teaching it to students; they have neither their own content nor clear concepts. I don't think anyone else can explain these concepts in such an easy manner. You are a great teacher!
@AakashGoyal25 6 months ago
Hey Nitish, ultimate explanation. It deepens the understanding of how the attention mechanism works, which was needed to get a good grasp of the topic. This video not only explains the significance of the scaling factor but also provides direction on how to tackle things while reading a research paper.
@tarun94060sharma 11 months ago
Explaining even the small concepts in detail is exactly what compels me to open this channel daily.
@narendersingh6492 3 months ago
You are absolutely on the right path: unless we understand the why, we will never be able to appreciate the approach. So please continue this.
@ShubhamAware18 10 months ago
This person is like Iron Man ❤ Your explanation is incredible. I watched many videos, but no one could make this topic clear. You always think creatively and clear up any confusion. Thank you for always being there. Love you 3000 ❤🫂
@sachink9102 7 months ago
Nitish sir.... your knowledge is MIT-equivalent.. really outstanding!!
@tiwari45621 11 months ago
Sir, one small request: if possible, please increase the frequency of video uploads. I have been waiting for your lecture video for the last 20 days.
@MuhammadAmirMoazzamKhanNiazi 1 month ago
Sir.. from head to toe.. I understood it fully: how and why this scaling is important and how it is done.. thanks to you once again..
@abdulwahabkhan1086 4 months ago
I haven't found an explanation like this anywhere else. Never stop explaining like this, Nitish sir. ❤❤
@NabidAlam360 3 months ago
The only video on the internet explaining this from the roots!
@KiyotakaAyanokoji1 11 months ago
This was an approach for solving the high-variance problem, not some time-pass explanation, so it's good that you taught it; something like this will come in handy somewhere else later . . . 😃
@akshitnaranje7426 11 months ago
This is the type of explanation we need to make our concepts strong.. thank you, sir...
@shubhamagrawal8620 4 months ago
Bro, these are the best videos I have seen for understanding the attention mechanism. Thank you.
@ayushrathore2570 11 months ago
Thank you, Nitish sir, for your clear explanation of Scaled Dot Product Attention, especially regarding the variance issue. Your insights were incredibly helpful, and I appreciate your commitment to simplifying complex concepts. I really enjoyed your teaching style. Keep up the fantastic work!
@usamafiaz1974 1 month ago
The greatest teacher I have ever watched. God bless you; you are the king of concepts.... 🥰
@saumilshah2371 2 months ago
Too good, Nitish… extraordinary explanation.. no one can beat your teaching style…. keep making these informative videos… 😊
@philosophy_lover123 11 months ago
Hi sir, I am completing my 100 Days of Machine Learning course. The moment I see a notification, I watch your full video. And thank you; this lecture was clearly understandable.
@LohithReddy-dq9ud 10 months ago
Amazing insight combined with a thorough analysis of the research paper's minute nuances! Thanks for providing such a clear explanation. Your videos provide priceless knowledge! 🌟👏
@omprakashnisad3666 11 months ago
Loved the way you explained this self-attention; it's superb.
@taashna_j 5 months ago
We really do appreciate the details. The explanation of the intuition behind every little step is very useful in understanding the concept.
@akshaythakor5501 2 months ago
Best teacher ever for ML/DL
@mentalgaming2739 10 months ago
I am doing this playlist deeply, and sir, your efforts are amazing! Whenever someone asks me to suggest the best channel for DS, I always recommend your channel. Your content is amazing! The way you explain everything is amazing. Love from Pakistan, sir..
@krutikashimpi626 8 months ago
"For in the flow of knowledge, true growth shall show." The phrase suits you. I can never be grateful enough. Thank you very much, sir.
@fastfacts9898 9 months ago
We do not need to read the research paper, guys; CampusX is enough to explain the whole concept. A big salute to the king of Data Science.
@habalallah 5 months ago
It is just awesome.. I think this is the only channel giving this level of detail in Hindi/Urdu. THANKS FROM PAKISTAN
@AfifaSadiq 5 months ago
I think I made a good decision by choosing one 50-minute lecture on self-attention over a playlist of 12-15 minute videos on transformers. The detail with which each part is explained... it's just a masterpiece. This playlist is art.
@samikshakolhe5086 7 months ago
Sir, you're an absolute gem. Incredible!! For this kind of intuition behind scaling, hats off to you. Because of this topic, most of my variance- and softmax-related concepts became clear.. Thank you so much, sir. I always recommend that everyone watch your playlists for Data Science/ML concepts.
@abhinavsmart2783 5 months ago
Watching at 3:23 AM; mind-boggling explanation. Salute... Guru ji
@mentalgaming2739 10 months ago
No sir, this is the greatest explanation ever. The content you are providing: I think no one else provides this type of content with the same effort and the same energy. And the best part is that your explanation makes the topic simple and easy.
@waseemrandhawa5658 8 months ago
It's a must... I haven't yet seen any video on transformers in this much detail.
@top10collection88 9 months ago
A mind-boggling teaching experience with you, sir. I always first like your videos and then watch them. Thank you so much for providing these wonderful free lectures for us.
@shariq021 9 months ago
Thank you, sir; we need such explanations. The idea of first principles, and of explaining things so that we can interpret and understand future things: you are not just a teacher of the content you are teaching us; beyond that, you are teaching us how we can understand further things if we work through your methodology. Thank you for breaking things down so simply that even a nursery child could understand. You are like the Raghuram Rajan of Deep Learning.
@rahulkumawat4601 10 months ago
Sir, I don't think I can ever forget you; you are the shaper of my life. Thanks a lot.
@SamirGaykar-k6u 5 months ago
The explanation is detailed. Kindly keep the explanations always this detailed. Thanks :)
@avijitpaul9702 11 months ago
Insanely good content. While listening to it, I had the feeling of learning from the real Saraswati Devi. It's really incomparable. I truly reinvented the self-attention architecture; I didn't know that reinventing the wheel would give me such a joyous moment in my life. I owe a lot to you, sir. ❤👃
@rajgothi2633 2 months ago
You are a legend. An IIT prof was not able to answer this divide-by-root(d_k) question in class.
@kavyasaxena235 5 months ago
Very nice explanation. Really enjoyed the Transformer lecture series. The teaching style surely improves the thinking approach, which is helpful when reading a research paper.
@mohittiwari7379 1 month ago
After seeing your videos, it feels like Nitish sir has single-handedly covered every AI concept. Thank you, sir, for such videos.
@GayatriNikam2003 6 months ago
The explanation is superb, sir 👏👏👏. I understood it so well that I'll never be able to forget it 😄
@dhavalsukhadiya8654 3 months ago
That kind of explanation is really amazing. Thank you for such an explanation, sir; keep it up. God bless you and your family.
@hariskarim4331 5 months ago
Sir, you are a good teacher. The long explanation is so good. I have understood how things work in self-attention.
@FinanceMarketStocks 19 days ago
Thank you 🤟 No one else can explain this concept this way.
@TechSpot56 2 months ago
Very well explained. Keep it up with the same level of depth; no need to reduce it.
@waheedkhan-ey6vm 4 months ago
Respected sir, I can't express how much I appreciate you and your hard work. You explain each and every thing very thoroughly; you are a gem. Accept love and respect from Pakistan, sir. I'm a below-average student, and I learn a lot from you; it's as if you put everything straight into my mind.
@SameerKumar-r2n 6 months ago
One of the best videos. This explained the self-attention mechanism very efficiently. Thanks for this.
@shrikantvanarase7870 5 months ago
This was good and easy. It was also explained so well that we don't have to memorize it; it will flow automatically.... Thank you so much!!!
@themlguyyy 2 months ago
I really hope DeepMind is watching this... God-level explanation.
@harshitdaga2225 10 months ago
It's truly amazing to experience how beautifully you make us understand the concepts and give us crystal-clear intuition.. Hats off to your level of patience and calmness. Such soothing explanations... 🤌😲
@ashishmalhotra2230 11 months ago
it is criminal to have this level of explanation for free. 😍
@ashutoshpatidar3288 9 months ago
CRIME*
@VikasKumar-kx8xk 11 months ago
I always prefer deep understanding over shallow understanding, and I'm really thankful to you, sir, for the way you teach all the hard concepts in a simple way. I'm ready for the in-depth understanding you are providing; I even want to study everything in as much detail as possible. So I'm ready for complex, in-depth understanding. Thank you, sir.
@SourabhDaate 8 months ago
My friend got an interview question on how the VGP and EGP problems are solved in transformers..... This is the 3rd time I'm watching this playlist... it is the one-stop solution for DL.... Love you, Nitish Bhai..... Love from Maharashtra
@neetikashree5098 8 months ago
Hi! I am going through this masterpiece video by Nitish sir (all my love to him). I wanted to ask: what do you mean by VGP and EGP here?
@Gotit-ne4qb 10 months ago
Keep continuing your masterpieces, sir, for deep learning and MLOps... it is an extraordinary showcase of brilliance in explanation. Thank you, sir!!
@Watchtower-h4u 11 months ago
The explanation is very good; it helped me get interested in how things work, and after watching, it became my inspiration for the research field. Thanks a lot.
@sayvaish9078 9 months ago
First-principles-approach videos are perfect! Sir, please keep continuing with this method.
@sandeeppolaki664 8 months ago
Thank you, Nitish sir; you have given an amazing explanation of these concepts. I appreciate the time and effort you put into this. Will clear interviews soon.
@rshaikh05 1 month ago
Best teaching, sir! I will be watching all your videos for sure.
@rutvikkapuriya2033 11 months ago
Best playlist for deep learning ever. Thank you so much for the great explanation.
@anmolshrestha4391 4 months ago
When you explain in detail, sir, it is understood really well.
@bhanusachdeva1001 11 days ago
Hi Nitish, please keep teaching the same way, with full explanations. Cheers!!
@AjayPateldeveloper 4 months ago
Very well taught. Thank you for putting in so much effort.
@ABHISHEKJAIN-z9o 1 month ago
Excellent video, sir. Just one doubt: we can only apply the statistical property when Y = cX; only then can we say Var(Y) = c^2 Var(X). But here, how did we claim that v1 = c(v4), or anything like that?
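For what it's worth, here is a worked sketch of the step this doubt refers to, as I understand the video's argument: the Var(cX) = c^2 Var(X) property is applied to each score in the matrix, not between two of the row vectors, so no claim like v1 = c(v4) is needed. Writing one score as s = q · k, and assuming (as in the video) i.i.d. components with mean 0 and variance 1:

    \mathrm{Var}(s) = \mathrm{Var}\Big(\sum_{i=1}^{d_k} q_i k_i\Big) = \sum_{i=1}^{d_k} \mathrm{Var}(q_i k_i) = d_k

    Y = \frac{s}{\sqrt{d_k}} \implies \mathrm{Var}(Y) = \Big(\frac{1}{\sqrt{d_k}}\Big)^2 \mathrm{Var}(s) = \frac{d_k}{d_k} = 1

Here c is the single constant 1/√d_k that multiplies every score, which is exactly where the square root in the denominator comes from.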
@himanshu788 11 months ago
Long-awaited video, sir. Please upload videos more frequently.
@decodingds 5 months ago
Thank you so much for the explanation. I feel satisfied after understanding your explanation.
@muhammadfarrukhshafeeq1955 7 months ago
Truly a marvelous effort, and a superb way of explaining, as you said.