Attention Mechanism in 1 video | Seq2Seq Networks | Encoder Decoder Architecture

  23,963 views

CampusX

1 day ago

In this video, we introduce the importance of attention mechanisms, provide a quick overview of the encoder-decoder structure, and explain how the workflow functions.
An attention mechanism is a key concept in the field of machine learning, particularly in the context of sequence-to-sequence (Seq2Seq) models with encoder-decoder architecture. Instead of processing an entire input sequence all at once, attention mechanisms allow the model to focus on specific parts of the input sequence while generating the output sequence. This mimics the human ability to selectively attend to different elements when processing information. Watch the video till the end to develop a deep understanding of this concept.
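The mechanism described above can be sketched in a few lines of NumPy. This is an illustrative toy, not code from the video: the additive (Bahdanau-style) scoring function and the matrices `W_s`, `W_h`, vector `v`, and all shapes are assumptions chosen just for the demo.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def bahdanau_attention(s_prev, H, W_s, W_h, v):
    """Additive (Bahdanau-style) attention, as a toy sketch.

    s_prev : previous decoder state, shape (d,)
    H      : encoder hidden states, shape (T, d)
    Returns the context vector and the attention weights.
    """
    # Alignment scores: e_j = v . tanh(W_s s_prev + W_h h_j)
    scores = np.array([v @ np.tanh(W_s @ s_prev + W_h @ h) for h in H])
    alpha = softmax(scores)   # attention weights over the T input positions
    context = alpha @ H       # weighted sum of encoder states
    return context, alpha

# Random toy shapes/values, purely for illustration
rng = np.random.default_rng(0)
d, T = 4, 3
W_s, W_h = rng.normal(size=(d, d)), rng.normal(size=(d, d))
v = rng.normal(size=d)
s_prev, H = rng.normal(size=d), rng.normal(size=(T, d))
context, alpha = bahdanau_attention(s_prev, H, W_s, W_h, v)
```

The decoder would then consume `context` along with its previous state at each output step; the weights `alpha` are what make the "focus on specific parts of the input" visible.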
🔗 Research Paper: arxiv.org/pdf/1409.0473.pdf
============================
Do you want to learn from me?
Check my affordable mentorship program at : learnwith.campusx.in
============================
📱 Grow with us:
CampusX' LinkedIn: / campusx-official
CampusX on Instagram for daily tips: / campusx.official
My LinkedIn: / nitish-singh-03412789
Discord: / discord
👍If you find this video helpful, consider giving it a thumbs up and subscribing for more educational videos on data science!
💭Share your thoughts, experiences, or questions in the comments below. I love hearing from you!
⌚Time Stamps ⌚
00:00 - 00:55 - Intro
00:56 - 08:39 - The Why
08:40 - 11:20 - The Solution
11:21 - 41:10 - The What
41:11 - 41:23 - Conclusion
✨ Hashtags✨
#DataScience #MachineLearning #Deeplearning #CampusX

Comments: 109
@cool12345687 6 months ago
Have been learning from YouTube for quite a few years; I have never seen a teacher like you. Hats off.
@NabidAlam360 4 months ago
He explains it so precisely! The good thing about his teaching is that he does not make the video short just to finish the topic, instead, he explains each thing with patience! Hats off!
@Ocean_77848 6 months ago
Million-dollar lecture bro, grab it... clear concepts in Hindi... better lectures than the IITs' and NITs' teachers.
@WIN_1306 2 days ago
also IIITs
@vishutanwar 5 months ago
I am planning to start your course from 2 Jan 2023. I just thought to check when you uploaded the last video; this one is just 10 days old. Nice. I hope to complete this course in 3-4 months.
@kalyanikadu8996 5 months ago
I am writing to kindly request that you consider creating videos on topics such as BERT, DistilBERT, Transformers, etc. as soon as possible, because of the ongoing placement activities. Also guide us on how to use the Hugging Face interface in the context of different NLP use cases. I understand that creating content requires time and effort, but I believe that your expertise would greatly enhance our understanding of these highly important topics. Thank you in advance; eagerly waiting for your future content.
@WIN_1306 2 days ago
Hi ma'am, how and where are you giving placement interviews? I am a junior and I am getting confused about how to apply for internships and all.
@shivampradhan6101 5 months ago
For the first time in 2 years of trying, the attention mechanism is clear to me now. Thanks.
@haseebmohammed3728 6 months ago
This was just amazing, sir. I have watched so many videos to understand the "C" value; no one gave a clear explanation except you. Looking forward to the next video from you.
@sakshipote20 5 months ago
Hello, sir! Pls keep uploading this playlist. Eagerly waiting for the next video!!!
@Sam-gl1md 6 months ago
Best explanation I have seen so far for the attention mechanism. Simple and easy to understand 👌
@gulamkibria3769 6 months ago
You are a great teacher... more videos like this, please.
@shalinigarg8522 6 months ago
All your videos are exceptionally excellent and informative. Please sir, also make detailed videos on GNNs, covering the architecture and its types. It's a humble request 🙏
@aritradutta9538 3 months ago
What a fabulous explanation it was. Mind-blowing. Thanks a ton for explaining this so clearly.
@koushik7604 5 months ago
it's a gem, honestly.
@Sandesh.Deshmukh 5 months ago
I have tried multiple resources to learn this difficult concept... but the way you have explained it is God-level!!! Thanks for the effort, Nitish Sir ❤❤ Super excited for the upcoming videos.
@mohammadarif8057 5 months ago
God bless you, man. What a session... thanks a lot!!
@ranaasad6887 5 months ago
Hats off... such a clear and step-by-step approach. You connect concepts amazingly well. Great teaching.
@user-jx2en8mo2b 4 months ago
Thank you for such precise and clear explanation videos. 🙏
@utkarshtripathi9118 5 months ago
Sir, you are making awesome videos with an excellent way of teaching.
@sheetalsharma4366 1 month ago
Amazing explanation. Thank you very much sir.
@hetparekh1556 4 months ago
Amazing explanation of the topic
@pyclassy 5 months ago
Hello Nitish, your explanations are really, really awesome. I need to understand Transformers, so I am waiting for that.
@sridharnomula3839 2 months ago
Great. I feel it's an end-to-end explanation.
@snehal7711 29 days ago
insightful & helpful...thanks!
@shyamjain8379 6 months ago
Looking to finish this, bro. Nice work.
@sowmyak3326 2 months ago
I have no words to express how grateful I am. You really inspire me. I would recommend you sell some goodies (t-shirts) so that we can purchase them and show our love towards you.
@technicalhouse9820 3 months ago
Best Teacher Ever
@somdubey5436 4 months ago
Thanks!
@ariousvinx 6 months ago
Thanks a lot for this amazing video! I always wait for the next one in this series ❤. Can you tell us when the Transformers videos will start?
@itsamankumar403 5 months ago
TYSM sir. Please also post the Transformer lecture.
@atrijpaul4009 6 months ago
Much awaited
@financegyaninstockmarket2960 2 months ago
What a video! 😀 I enjoyed it a lot.
@mentalgaming2739 3 months ago
Well sir, as always, the video is so perfect, but I still think I need more practice on it. Thanks for giving the best version of each topic. Love from Pakistan, Sir Nitish. 😍
@guptariya43 6 months ago
Eagerly waiting for the self-attention and multi-head attention videos now...
@ParthivShah 1 month ago
Thank You Sir.
@KumR 5 months ago
Great one. Waiting for Transformers. BTW, can you also speak about tools like Llama, LangChain, RAG, LLM tuning, DPO, etc.? Coming from the guru, it will help the community.
@Shuraim843 6 months ago
Sir Nitish, I am very excited and waiting for your next video about the "Attention Is All You Need" research paper, in which the Transformer was introduced. Transformers is our favourite topic, so we are eagerly waiting. Sir, please make the Transformer video soon.
@WIN_1306 2 days ago
Have you read the research papers?
@riyatiwari4767 5 months ago
You mentioned in the NLP playlist that once deep learning is covered, you will cover topic modelling and NER as well. Please cover both these topics to complete your NLP playlist.
@RashidAli-jh8zu 5 months ago
Thank you sir ❤
@vijaypalmanit 6 months ago
First like, then watch, because quality content is guaranteed.
@ALLABOUTUFC-ms8nt 4 months ago
Love you Sir.
@mrityunjaykumar2893 1 month ago
Hi @campusx, it's the cleanest explanation available on YouTube so far. One confusion here: to create Ci, won't f(s(t-1), hj), with f a neural network, produce e_ij for all j (here i = 1, j = 1..4), which then pass through a softmax to generate the weights alpha_ij?
@bibhutibaibhavbora8770 6 months ago
I enjoyed it a lot ❤
@ranaasad6887 5 months ago
Super excited to learn Transformers 😍😍
@not_amanullah 4 months ago
Thanks
@user-uo2ri7fr4e 6 months ago
You are great, sir.
@amarendrakushwaha2206 6 months ago
Awesome
@sagarbhagwani7193 6 months ago
Thank you so much sir ❤
@akshayshelke5779 6 months ago
Please upload the next video soon; we are waiting for Transformers and BERT.
@nomannosher8928 3 months ago
I first like the video, then watch it because I know the quality of the lecture.
@riyatiwari4767 6 months ago
Thank you so much for this video. Please provide the Transformer videos at least by Christmas; I have an interview lined up. It's a humble request.
@Shuraim843 6 months ago
Let me give you some information about Transformers for your interview preparation. **Transformers:** Transformers are an advanced deep learning architecture used to process sequence data. They do away with recurrent neural networks (RNNs) and use the attention mechanism instead. Transformers have done remarkable work in natural language processing (NLP) tasks and are used in many applications. **Key Components:** 1. **Attention Mechanism:** The main component of Transformers is the attention mechanism. It lets the model look at every part of a sequence, but selectively: the model gives more importance to whichever part is currently in focus. 2. **Multi-Head Attention:** Transformers use the concept of multi-head attention. It projects a sequence into multiple subspaces and applies a separate attention mechanism in each subspace. 3. **Positional Encoding:** Because Transformers ignore sequence order, positional encoding is used. Positional encoding tells the model the position of each word in the sequence. 4. **Feedforward Neural Network:** Each Transformer layer uses a feedforward neural network that creates non-linear mappings. 5. **Encoder-Decoder Structure:** Transformers usually have an encoder-decoder structure, in which the input sequence is encoded and then decoded. The flexibility and parallelization capability of Transformers have made them especially strong in NLP tasks. Following their success, researchers have been inspired to work on further improvements, of which RWKV is one example.
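Of the components listed above, the positional encoding is the easiest to demo in a few lines. A minimal sketch of the fixed sinusoidal encoding described in "Attention Is All You Need"; the `seq_len` and `d_model` values here are arbitrary choices for illustration:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Fixed sin/cos positional encoding: even dims get sin, odd dims get cos."""
    pos = np.arange(seq_len)[:, None]     # positions 0..seq_len-1, column vector
    i = np.arange(d_model)[None, :]       # dimension indices 0..d_model-1
    angle = pos / np.power(10000, (2 * (i // 2)) / d_model)
    pe = np.where(i % 2 == 0, np.sin(angle), np.cos(angle))
    return pe

pe = sinusoidal_positional_encoding(seq_len=6, d_model=8)
# pe[0] starts at [0, 1, 0, 1, ...] since sin(0)=0 and cos(0)=1
```

Each row is added to the corresponding token embedding, which is how the order-blind attention layers learn where each word sits in the sequence.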
@ankitgupta1806 3 months ago
Thank you
@GauravKumar-dx5fp 5 months ago
Sir, when will the next video in this series come?
@Manishkumar-iw1cy 5 months ago
Worth it
@muhammadikram375 6 months ago
Sir, please make a video about your achievements and your job: your job, your monthly income (earned online by selling your skill, not from social media, i.e. YouTube, etc.), and more.
@shaktisd 6 months ago
Very good explanation. Please make a self-attention video also.
@Shuraim843 6 months ago
The self-attention mechanism is one component of the Transformer.
@shaktisd 5 months ago
@@Shuraim843 Really? That's news. Do you mind explaining in detail your understanding of self-attention and how LLMs use it to produce SOTA results?
@utkarshtripathi9118 5 months ago
Sir, please make videos on generative AI and new AI tools based on it.
@user-dy4rp1ye4n 3 months ago
Doubt at 31:39: instead of an ANN, why are we not using a cosine similarity score, since you said alpha is a score value?
@anupamgevariya341 4 months ago
Hi sir, I have a doubt: we are feeding input to our decoder as y1 and c1, but inside the decoder there is an RNN cell, so how can we send two inputs? Are we doing some operation before feeding them in, like a dot product of y1 and c1, so we get the dimension back?
@gourabguha3167 3 months ago
Sir, please upload the next MLOps videos; eagerly waiting, sir!!
@abhinavmaurya7305 6 months ago
Sir, please also bring the implementation video for the encoder-decoder.
@aadityaadyotshrivastava2030 3 months ago
One suggestion: please do not say in your videos that a topic is tough to understand. The point is that when you are teaching, it doesn't seem tough, and secondly, saying "the topic is tough" psychologically impacts the learners... I have not found any tough video as of now. Your explanation is super!
@Bilal-rl2ee 5 months ago
Sir, kindly complete the playlist please.
@keshavpoddar9288 5 months ago
can't wait for the Transformers video
@HimanshuSharma-we5li 4 months ago
Make some videos for those who want to start their AI/ML startups.
@NarendraBME 5 months ago
Hello sir, please cover topics like GANs and diffusion models as well.
@himeshsarkar3259 2 months ago
He has a video on GANs.
@dattatreyanh6121 3 months ago
Make a video on the PointNet architecture.
@lakshmims7590 3 months ago
Can you make a separate video on LLMs?
@majidhabibkhan625 3 months ago
Sir, I am waiting for Attention Mechanism video 2.
@not_amanullah 4 months ago
Understood++
@vatsalshingala3225 3 months ago
❤❤❤❤❤❤❤❤❤❤
@naveenpoliasetty954 5 months ago
Sir, waiting for the Transformers lecture.
@teksinghayer5469 5 months ago
When will the video on Transformers come?
@kalyanikadu8996 5 months ago
Sir, please add the next videos.
@subhadwipmanna4130 5 months ago
Thanks sir, all clear, but one doubt; anyone please answer. For i=2, is there one ANN or four ANNs? Say s1 = [1,2,3,4] and h1 = [5,6,7,8]; then to get alpha21, will the input to the ANN be [1,2,3,4,5,6,7,8], or a combination of all h?
@neilansh 5 months ago
I think there will be a single ANN for a particular Ci, so in total there will be 4 ANNs (for C1, C2, C3 and C4). Regarding the calculation of alpha21: you provide s1 and h1, and alpha21 is calculated; similarly, for alpha22 you provide s1 and h2, and so on. (I can't guarantee that this explanation is correct.)
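The idea discussed in this thread can be made concrete with a toy sketch. This assumes one shared scoring network, reused for every pair (s1, h_j), and uses a single linear layer in place of a full ANN purely for brevity; all numbers are made up:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(42)
d = 4                                  # hidden size of encoder/decoder states
s1 = rng.normal(size=d)                # decoder state s1
H = rng.normal(size=(4, d))            # encoder states h1..h4

W = rng.normal(size=2 * d)             # weights of the shared scoring layer
# e_2j = W . concat(s1, h_j): the SAME network scores every (s1, h_j) pair,
# so each input to it is the 8-dim concatenation of s1 with one h_j
e = np.array([W @ np.concatenate([s1, h]) for h in H])
alpha2 = softmax(e)                    # alpha21..alpha24, normalized over j
```

So the network sees one (s1, h_j) pair at a time, and the softmax across the four scores is what ties the alphas for a given decoder step together.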
@saikatdaw6909 5 months ago
When can we expect the next video in the series... Transformers?
@data_scientist_harish 6 months ago
Sir, please make a playlist on LLMs.
@Bilal-rl2ee 5 months ago
Next video please, sir.
@Bilal-rl2ee 5 months ago
Sir, next video.
@not_amanullah 5 months ago
I didn't find a better explanation, even in English.
@himanshurathod4086 5 months ago
NER and topic modelling from the NLP playlist are still remaining.
@debojitmandal8670 5 months ago
Where you pass only h: your c is nothing but the sum of all h multiplied by their respective weights. So, for example, if h4 is a vector, say [1,2,3,4], then your c is not h4; instead it's a different vector, c1 = weight1*h1 + weight2*h2 + weight3*h3 + weight4*h4, which will be a completely different vector, not [1,2,3,4].
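A quick numeric check of the point above: the context vector is a weighted sum over all encoder states, so in general it equals none of them. The vectors and weights here are invented purely for illustration:

```python
import numpy as np

# Toy encoder states h1..h4 (h4 = [1,2,3,4], as in the comment)
h = np.array([[1., 0., 0., 0.],
              [0., 2., 0., 0.],
              [0., 0., 3., 0.],
              [1., 2., 3., 4.]])
alpha = np.array([0.1, 0.2, 0.3, 0.4])   # made-up attention weights, sum to 1

c1 = alpha @ h   # alpha1*h1 + alpha2*h2 + alpha3*h3 + alpha4*h4
# c1 == [0.5, 1.2, 2.1, 1.6], which is not equal to any single h_j
```

If one alpha were exactly 1 and the rest 0, c1 would collapse to a single h_j; attention makes it a soft mixture instead.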
@WIN_1306 2 days ago
I can't understand how this video now has 23k views but only 926 likes!!! Like, wtf.
@campusx-official 2 days ago
You go ahead and like it, then!
@sharangkulkarni1759 7 days ago
But where does Ci go?
@toxoreed4313 6 months ago
tytytyty
@user-qu1dm6pn3k 5 months ago
Hello, sir! Pls keep uploading this playlist. Eagerly waiting for the next video!!!
@Bilal-rl2ee 5 months ago
Sir, kindly complete the playlist please.