This is far better than other videos on the same topic; I finally understood it. I dozed off in all the other videos but fully grasped the topic here!
@lavasaigo 24 days ago
Excellent explanation, sir. Thank you!!
@bakyt_yrysov 3 months ago
Thank you very much! So far the best channel for Data Science! Please keep it up!
@riyazbagban9190 a month ago
Nice and neat explanation.
@kcbaskar 27 days ago
You have explained the Simple Attention model clearly with mathematical examples. If you could make a small vocabulary set and show the next predicted word through the calculation, we could understand it much better. We would then know what these calculations are doing and how they find the next word. If possible, use a Python program for the example.
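A minimal sketch of what such a walkthrough could look like, assuming a toy five-word vocabulary and hand-picked 3-d embeddings (all values hypothetical, not from the video):

```python
import numpy as np

# Toy vocabulary and hand-picked 3-d embeddings (hypothetical values).
vocab = ["the", "cat", "sat", "on", "mat"]
E = np.array([
    [0.1, 0.0, 0.2],   # the
    [0.9, 0.3, 0.1],   # cat
    [0.2, 0.8, 0.4],   # sat
    [0.0, 0.1, 0.7],   # on
    [0.8, 0.2, 0.6],   # mat
])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Input sentence "the cat sat" -> embedding vectors x1, x2, x3.
idx = [vocab.index(w) for w in ["the", "cat", "sat"]]
X = E[idx]                    # shape (3, 3)

# Simple (dot-product) self-attention for the last word, "sat":
scores = X @ X[-1]            # similarity of each word with "sat"
weights = softmax(scores)     # attention weights, sum to 1
context = weights @ X         # attention-weighted context vector

# Score the context vector against every vocabulary embedding and
# take the best match as the "next word".
logits = E @ context
print("attention weights:", np.round(weights, 3))
print("predicted next word:", vocab[int(np.argmax(logits))])
```

Here the "next word" comes from a crude dot-product scoring against the vocabulary embeddings, a stand-in for the output layer a real language model would learn.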
@danzellbanksgoree23 2 months ago
This was so great!!!!
@SanthoshKumar-m2s 7 months ago
Thank you for your clear explanation
@mahieerati6505 2 months ago
Great explanation
@ajitkulkarni1702 7 months ago
Best explanation of self-attention!!!
@dinu9670 7 months ago
You are a saviour, man. Great explanation. Please keep making these videos 🙏
@UnfoldDataScience 7 months ago
Thanks, will do!
@jayeshsingh116 7 months ago
Well explained, thank you for covering these topics.
@cgqqqq 2 months ago
a true gem
@AnkitGupta-rj4yy 7 months ago
Thank you for explaining it to us in an easy way ❤
@PAadhan a month ago
Nice explanation, thank you. How do we calculate the vector values x1, x2, x3?
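For what it's worth, the x1, x2, x3 vectors in such walkthroughs are usually just rows of a learned embedding matrix, looked up by word id. A minimal sketch with random stand-in values (all hypothetical; in practice the matrix is learned during training):

```python
import numpy as np

# x1, x2, x3 are embedding-table lookups: each word id selects one
# row of an embedding matrix. Random values stand in for what
# training would actually learn.
np.random.seed(0)
vocab_size, d_model = 5, 3
W_embed = np.random.rand(vocab_size, d_model)  # learned in practice

word_ids = [0, 1, 2]            # e.g. "the", "cat", "sat"
x1, x2, x3 = W_embed[word_ids]  # each is a d_model-dim vector
print(x1, x2, x3, sep="\n")
```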
@irfanhaider3021 6 months ago
Kindly make a video on the GRU layer as well.
@manoj1bk 7 months ago
Can self-attention be used (as an embedding layer) before an LSTM in the context of time series analysis?
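One hedged sketch of that idea in Keras: the shapes, layer sizes, and the choice of MultiHeadAttention as the self-attention step ahead of the LSTM are illustrative assumptions, not the video's method:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Self-attention feeding an LSTM for time series forecasting.
# Window of 30 steps with 4 features; all sizes are arbitrary.
inputs = layers.Input(shape=(30, 4))               # (timesteps, features)
attn = layers.MultiHeadAttention(num_heads=2, key_dim=8)
x = attn(query=inputs, value=inputs, key=inputs)   # self-attention
x = layers.LSTM(32)(x)                             # LSTM reads the attended sequence
outputs = layers.Dense(1)(x)                       # e.g. next-step forecast

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```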
@mohammedajaz6034 3 months ago
Thanks for the video.
@UnfoldDataScience 3 months ago
Welcome Ajaz.
@funwithtechnology6526 7 months ago
Thank you for the very clear explanation :). I have a small question here. In self-attention, is there a limit to the dimension of the final attention embedding space?
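A rough sketch of the answer: in standard scaled dot-product attention the output dimension simply equals the value dimension d_v, a hyperparameter you choose, so there is no hard limit beyond memory and compute. A minimal NumPy illustration with arbitrary sizes:

```python
import numpy as np

# The attention output dimension follows d_v, the value dimension.
# All sizes below are arbitrary illustrations.
np.random.seed(1)
seq_len, d_model, d_k, d_v = 4, 8, 16, 32   # d_v chosen freely

X = np.random.rand(seq_len, d_model)        # input embeddings
W_q = np.random.rand(d_model, d_k)
W_k = np.random.rand(d_model, d_k)
W_v = np.random.rand(d_model, d_v)

Q, K, V = X @ W_q, X @ W_k, X @ W_v
scores = Q @ K.T / np.sqrt(d_k)                  # (4, 4) similarity matrix
scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
out = weights @ V
print(out.shape)  # (4, 32): output dim follows d_v, not d_model
```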
@dhirajpatil6776 7 months ago
Please make a video explaining the transformer architecture.