personally, I find that seeing someone actually code something from scratch is the best way to get a basic understanding
@zhilinwang6303 8 months ago
indeed
@馬桂群 8 months ago
indeed
@CM-mo7mv 6 months ago
I don't need to see someone typing... but you might also enjoy watching the grass grow or paint dry.
@FireFly969 5 months ago
Yeah, and you see how these technologies work. It's insane that, in the end, it looks easy enough that you can do something that companies worth millions and billions of dollars do. On a smaller scale, but the same idea in the end.
@AtomicPixels 4 months ago
Yeah, kinda ironic how that works. The simplest stuff requires the most complex explanations.
@physicswithbilalasmatullah 5 months ago
Hi Umar. I am a first-year student at MIT who wants to do AI startups. Your explanation and comments during coding were really helpful. After spending about 10 hours on the video, I walked away with great learnings and great inspiration. Thank you so much, you are an amazing teacher!
@umarjamilai 5 months ago
Best of luck with your studies and thank you for your support!
@shauryaomar5090 9 days ago
I am a 3rd semester student at IIT Roorkee. I am also interested in AI startups.
@umarjamilai 1 year ago
The full code is available on GitHub: github.com/hkproj/pytorch-transformer It also includes a Colab Notebook so you can train the model directly on Colab. Of course nobody reinvents the wheel, so I have watched many resources about the transformer to learn how to code it. All of the code is written by me from zero except for the code to visualize the attention, which I have taken from the Harvard NLP group article about the Transformer. I highly recommend all of you to do the same: watch my video and try to code your own version of the Transformer... that's the best way to learn it. Another suggestion I can give is to download my git repo, run it on your computer while debugging the training and inference line by line, while trying to guess the tensor size at each step. This will make sure you understand all the operations. Plus, if some operation was not clear to you, you can just watch the variables in real time to understand the shapes involved. Have a wonderful day!
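P.S. A tiny illustration of the shape-guessing game I mean, with made-up sizes (just a sketch, not code from the repo):

import torch
import torch.nn as nn

x = torch.zeros(8, 350, 512)   # (batch, seq_len, d_model), hypothetical sizes
proj = nn.Linear(512, 2048)
y = proj(x)                    # guess the resulting shape before printing...
print(y.shape)                 # torch.Size([8, 350, 2048])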
@AiEdgar 1 year ago
The best video ever
@odyssey0167 1 year ago
Can you provide the pretrained models?
@wilfredomartel7781 6 months ago
🎉 Is this the BERT architecture?
@sachinmohanty4577 1 month ago
@@wilfredomartel7781 It's a complete encoder-decoder model; BERT is the encoder part of this encoder-decoder architecture.
@yangrichard7874 9 months ago
Greetings from China! I am a PhD student focused on AI. Your video really helped me a lot. Thank you so much, and I hope you enjoy your life in China.
@umarjamilai 9 months ago
Thank you! Let's connect on LinkedIn.
@germangonzalez3063 1 month ago
I am also a Ph.D. student. This video is valuable. Many thanks!
@ArslanmZahid 9 months ago
I have browsed YouTube for the perfect set of videos on the transformer, and your videos (the explanation you did on the transformer architecture) and this one are by far the best!! Take a bow, brother, you have contributed to your viewers in amounts you can't even imagine. Really appreciate this!!!
@@decarteao The guy from China is very funny in the video.
@MuhammadArshad 10 months ago
Thank God it's not one of those "ML in 5 lines of Python code" or "learn AI in 5 minutes" videos. Thank you. I cannot imagine how much time you must have spent making this tutorial. Thank you so much. I have watched it three times already and wrote the code while watching the second time (with a lot of typos :D).
@faiyazahmad2869 2 months ago
One of the best tutorials for understanding and implementing the Transformer model... Thank you for making such a wonderful video.
@californiaBala 12 days ago
This is the best one. We need to train a model, let the model observe your actions, and learn from you. With a physical embodiment, a Tesla robot could take classes based on your training.
@kozer1986 11 months ago
I'm not sure if it's because I have studied this content 1,000,000 times or not, but it's the first time that I understood the code and felt confident about it. Thanks!
@abdullahahsan3859 11 months ago
Keep doing what you are doing. I really appreciate you taking out so much time to spread such knowledge for free. I have been studying transformers for a long time, but never have I understood them so well. The theoretical explanation in the other video combined with this practical implementation, just splendid. I will be going through your other tutorials as well. I know how time-consuming it is to produce such high-level content, and all I can really say is that I am grateful for what you are doing and hope that you continue doing it. Wish you a great day!
@umarjamilai 11 months ago
Thank you for your kind words. I wish you a wonderful day and success for your journey in deep learning!
@SaltyYagi 1 month ago
I really appreciate your efforts. The explanations are very clear. This is a great service for people that wish to learn the future of AI! All the best from Spain!
@zhengwang1402 9 months ago
It feels really fantastic watching someone write a program from the bottom up.
@dengbuqi 5 months ago
What a WONDERFUL example of a transformer! I am Chinese, and I am doing my PhD program in Korea. My research is also about AI. This video helps me a lot. Thank you! BTW, your Chinese is very good! 😁😁
@raviparihar3298 3 months ago
Best video I have ever seen on the whole of YouTube on the transformer model. Thank you so much, sir!
@mikehoops 10 months ago
Just to repeat what everyone else is saying here - many thanks for an amazing explanation! Looking forward to more of your videos.
@JohnSmith-he5xg 11 months ago
Loving this video (only 13 minutes in); really like your use of type hints, comments, descriptive variable names, etc. Way better coding practices than most of the ML code I've looked at. At 13:00, for the 2nd arg of the array indexing, you could just do ":" and it would be identical.
@tonyt1343 9 months ago
Thank you for this comment! I'm coding along with this video and I wasn't sure if my understanding was correct. I'm glad someone else was thinking the same thing. Just to be clear, I am VERY THANKFUL for this video and am in no way complaining. I just wanted to make sure I understand because I want to fully internalize this information.
@SaiManojPrakhya-mp4oe 2 months ago
Dear Umar - thank you so much for this amazing and very clear explanation. It has deeply helped me and many others in understanding the theoretical and practical implementation of transformers! Take a bow!
@manishsharma2211 10 months ago
WOW WOW WOW. Though it was a bit tough for me to understand, I was able to understand around 80% of the code. Beautiful. Thank you so much.
@terryliu3635 3 months ago
I learnt a lot from following the steps in this video and creating a transformer myself step by step!! Thank you!!
@abdulkarimasif6457 1 year ago
Dear Umar, your video is full of knowledge; thanks for sharing.
@sagarpadhiyar3666 4 months ago
Best video I came across on the transformer from scratch.
@aiden3085 9 months ago
Thank you Umar for your extraordinarily excellent work! The best transformer tutorial I have ever seen!
@saziedhassan3976 1 year ago
Thank you so much for taking the time to code and explain the transformer model in such detail. You are amazing and please do a series on how transformers can be used for time series anomaly detection and forecasting!
@amiralioghli8622 1 year ago
My question and request are the same as yours. If you find any tutorial, please share it with me.
@ghabcdef 7 months ago
Thanks a ton for making this video and all your other videos. Incredibly useful.
@umarjamilai 7 months ago
Thanks for your support!
@balajip5030 11 months ago
Thanks, bro. With your explanation, I was able to build the transformer model for my application. You explained it so well. Please keep doing what you are doing.
@codevacaphe3763 3 months ago
Hi, I just happened to see your video. It's really amazing; your channel is so good, with valuable information. I hope you keep this up, because I really love your content.
@maxmustermann1066 11 months ago
This video is incredible, never understood it like this before. I will watch your next videos for sure, thank you so much!
@mohamednabil374 1 year ago
Thanks Umar for this comprehensive tutorial; after watching many videos, I would say this is AWESOME! It would be really nice if you could provide us with more tutorials on Transformers, especially on training them for longer sequences. :)
@umarjamilai 1 year ago
Hi mohamednabil374, stay tuned for my next video on the LongNet, a new transformer architecture that can scale up to 1 billion tokens.
@phanindraparashar8930 1 year ago
It is a really amazing video. I tried understanding the code from various other YouTube channels, but I was always getting confused. Thanks a lot :). Can you make a series on BERT & GPT as well, where you build these models and train them on custom data?
@umarjamilai 1 year ago
Hi Phanindra! I'll definitely continue making more videos. It takes a lot of time and patience to make just one video, not counting the preparation time to study the model, write the code and test it. Please share the channel and subscribe; that's the biggest motivation to continue providing high-quality content to you all.
@rubelahmed5458 10 months ago
A coding example for BERT would be great! @@umarjamilai
@AyushRaj-nt3ot 3 months ago
Sir, your explanation is just beyond awesome!!! Thank you so much for creating such content. Sir, I didn't get the residual connections part. As I am from India, I was working on Indic languages, so I had to write more code, but that's okay. I just wish you could help me understand the beam search code, the one you also gave in the GitHub repo. Also, it would be great if you could give the code for evaluating the BLEU score. I'll be really grateful to you. And again, thank you so much for such comprehensive content. We'd love to see more of your videos, especially on generative AI! P.S.: I didn't understand how you wrote it. What I've understood is that we have to take the input of the previous layer, add it to the output of the same layer, and then apply layer norm on that: basically Add and then LayerNorm. Please help me correct myself!
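Edit: after re-watching, I think the contrast is roughly this (a runnable sketch using a Linear as a stand-in sublayer, not the repo's exact classes):

import torch
import torch.nn as nn

d_model = 512
norm = nn.LayerNorm(d_model)
sublayer = nn.Linear(d_model, d_model)   # stand-in for self-attention or feed-forward
x = torch.randn(2, 10, d_model)

post_norm = norm(x + sublayer(x))   # paper: "Add & Norm", add first, then normalize
pre_norm = x + sublayer(norm(x))    # video: normalize the input first, then add the residual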
@forresthu6204 10 months ago
At 22:39, it describes the essentials of the self-attention computation in a very clear and easy-to-understand way.
@shresthsomya7419 7 months ago
Thanks a lot for such a detailed video. Your videos on the transformer are the best.
@dapostop7384 4 months ago
Wow, super useful! Coding really helps me understand the process better than visuals.
@VishnuVardhan-sx6bq 9 months ago
This is such great work. I don't really know how to thank you, but this is an amazing explanation of an advanced topic like the transformer.
@Patrick-wn6uj 6 months ago
Hi Umar, thank you for all the work you are doing. Please consider making a video like this on vision transformers.
@JohnSmith-he5xg 11 months ago
OMG. And you also note matrix shapes in comments! Beautiful. I actually know the shapes without having to trace some variable backwards.
@amiralioghli8622 1 year ago
Thank you so much for taking the time to code and explain the transformer model in such detail. You are amazing. If possible, please do a series on how transformers can be used for time series anomaly detection and forecasting; it is extremely needed on YouTube. Thanks in advance.
@salmagamal5676 8 months ago
I can't possibly thank you enough for this incredibly informative video.
@shakewingo3216 10 months ago
Thanks for making it so easy to understand. I definitely learned a lot and gained much more confidence from this!
@si0n4ra 1 year ago
Umar, thank you for the amazing example and clear explanation of all your steps and actions.
@umarjamilai 1 year ago
Thank you for watching my video and your kind words! Subscribe for more videos coming soon!
@si0n4ra 1 year ago
@@umarjamilai, mission completed 😎. Already subscribed. All the best, Umar.
@DatabaseAdministration 7 months ago
You are one of the coolest dudes in this area. It'd be helpful if you provided a roadmap to reach your level of expertise. I'd really love to learn from you, but I can't follow it all yet. A roadmap would help so many of your subscribers.
@lyte69 11 months ago
Hey there! I enjoyed watching that video, you did a wonderful job explaining everything, and I found it super easy to follow along. Overall, it was a really great experience!
@Hdjandbkwk 1 year ago
Just want to say thank you!! This is easily one of my favorite videos on YouTube! I have watched a few videos on transformers, but none explained it as clearly as you. At first I was scared by the length of the video, but you managed to hold my attention for the full 3 hours! Following your instructions, I am now able to train my very first transformer! Btw, I am using the tokenizer the way you are, but looking at the tokenizer file, it looks like my tokenizer didn't split the sentences into words and is using whole sentences as tokens. Do you have any idea why? I am on a Mac, if that matters.
@umarjamilai 1 year ago
Hi! Thanks for your kind words! Make sure your PreTokenizer is the "Whitespace" one and that the Tokenizer is the "WordLevel" tokenizer. As a last resort, you can clone the repository from my GitHub and compare my code with yours. Have a wonderful rest of the day!
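P.S. From memory, the setup should look roughly like this (double-check against the repo):

from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import WordLevelTrainer

sentences = ["I love you", "Ti amo"]    # toy corpus, just for illustration
tokenizer = Tokenizer(WordLevel(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()  # without this, each whole sentence becomes a single token
trainer = WordLevelTrainer(special_tokens=["[UNK]", "[PAD]", "[SOS]", "[EOS]"], min_frequency=1)
tokenizer.train_from_iterator(sentences, trainer=trainer)
print(tokenizer.encode("I love you").tokens)   # ['I', 'love', 'you']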
@Hdjandbkwk 1 year ago
I have the PreTokenizer set to Whitespace and am using the WordLevel tokenizer and trainer, but it still encodes the sentence as a whole. I did a direct swap to the BPE tokenizer and that encodes the sentences correctly, so maybe there is a bug in the WordLevel tokenizer on macOS. Another question I have: what determines the max context size for LLMs? Is it the d_model size? @@umarjamilai
@MrSupron00 1 year ago
This is excellent! Thank you for putting this together. I do have one point of confusion about how the final multi-head attention concatenation takes place. I believe the concatenation takes place on line 110, where V' = (V1, V2, ..., Vh) has shape (seq_len, h*d_k). This is intended to be multiplied by the matrix W0 of shape (h*d_k, d_model) to give something of shape (seq_len, d_model), as required. However, here you implement a linear layer, which takes the concatenated V' (seq_len, d_model) and computes W*V' + b, where the dimensions of W and b are chosen to satisfy the output dimension. This is different from multiplying directly by a predefined trainable matrix W0. Now, I can see how these are nearly the same thing, and in practice it may not matter, but it would be helpful to point out these tricks of the trade so folks like myself don't get bogged down in the subtleties. Thanks.
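Edit: to convince myself, the two are the same operation up to a bias term (sketch with made-up sizes):

import torch
import torch.nn as nn

seq_len, h, d_k = 10, 8, 64
d_model = h * d_k

v_concat = torch.randn(seq_len, h * d_k)   # (V1 | V2 | ... | Vh) concatenated
w_o = nn.Linear(d_model, d_model)          # holds a trainable weight matrix plus a bias
out = w_o(v_concat)                        # v_concat @ W.T + b, i.e. the W0 multiply plus a bias the paper doesn't have
print(out.shape)                           # torch.Size([10, 512])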
@ansonlau7040 6 months ago
Big thank you for the video; it makes the transformer so easy to learn (also the explanation video) 👍👍
@gunnvant 1 year ago
This was really good. I understood multi-head attention better with the code explanation.
@goldentime11 4 months ago
Thanks for your detailed tutorial. Learned a lot!
@angelinakoval8360 10 months ago
Dear Umar, thank you so so much for the video! I don't have much experience in deep learning, but your explanations are so clear and detailed that I understood almost everything 😄. It will be a great help for me at my work. Wish you all the best! ❤
@umarjamilai 10 months ago
Thank you for your kind words, @angelinakoval8360!
@texwiller7577 5 months ago
Doctor... you're great!
@skirazai7591 9 months ago
Great video; you are insanely talented, btw.
@CathyLiu-d4k 11 months ago
A really great explanation for understanding the Transformer. Many thanks to you.
@keflat23 9 months ago
What to say... just WOW! Thank you so much!!
@ChathikaGunaratne 2 months ago
Amazingly useful video. Thank you.
@michaelscheinfeild9768 1 year ago
I'm enjoying the clear explanation of the Transformer coding!
@pratheeeeeesh4839 2 months ago
Hidden gem!💎
@neelarahimi1053 23 days ago
Great video! Thanks :)
@prajolshrestha9686 11 months ago
I appreciate you for this explanation. Great video!
@OleksandrAkimenko 11 months ago
You are a great professional, thanks a ton for this
@oborderies 10 months ago
Sincere congratulations on this fine and very useful tutorial! Much appreciated 👏🏻
@jihyunkim4315 10 months ago
Perfect video!! Thank you so much. I always wondered about the detailed code and its explanation, and now I understand almost all of it. Thanks :) You are the best!
@umarjamilai 10 months ago
You're welcome!
@nhutminh1552 9 months ago
Thank you, admin. Your video is great. It helps me understand. Thank you very much.
@babaka1850 4 months ago
For determining the max length of the tgt sentence, I believe you should point to tokenizer_tgt rather than tokenizer_src: tgt_ids = tokenizer_tgt.encode(item['translation'][config['lang_tgt']]).ids
@Mostafa-cv8jc 10 months ago
Very good video. Tysm for making this; you are making a difference.
@godswillanosike896 5 months ago
Great explanation! Thanks very much.
@jeremyregamey495 10 months ago
I love your videos. Thank you for sharing your knowledge, and I can't wait to learn more.
@peregudovoleg 8 months ago
Thank you for your straight-to-the-point, no-BS videos and good code-alongs with commentary. But it looks like the positional encodings aren't correct (as per the paper). There is a power in the denominator: denominator = np.power(10000, 2*i/d). I get that you decided to use the exp+log pair for stability, but there is no mention of where the power went. Also, there is an extra layer norm after the encoder: we "norm + add" (as defined in the video, instead of "add & norm" as per the paper, but you said "let's stick with it", I understand), but then we norm again after the last encoder block. Like so: layer norm + add (in the last encoder block, inside the residual connection) + layer norm again (inside the Encoder class). (I could be wrong, but it looks like that.)
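Edit: checking the first point numerically, the power isn't actually gone, it's just computed in log space (quick sanity check, not the repo's exact lines):

import math
import torch

d_model = 512
two_i = torch.arange(0, d_model, 2).float()                        # the "2i" in the formula
direct = 1.0 / torch.pow(torch.tensor(10000.0), two_i / d_model)   # 1 / 10000^(2i/d)
log_space = torch.exp(two_i * (-math.log(10000.0) / d_model))      # exp(-(2i/d) * ln(10000))
print(torch.allclose(direct, log_space))                           # True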
@shengjiadiao3166 2 months ago
The contents are crazy!!!!
@divyanshbansal2321 7 months ago
Thank you, mate. You are a godsend!
@vigenisayan2343 7 months ago
It was very useful to watch. Question: what books or learning resources would you suggest for learning PyTorch deeply? Thanks.
@albert4392 1 year ago
This is an excellent video; your explanation is so clear, and the live coding helps understanding! Could you give us tips on debugging such a huge model? It is really hard to make sure the model works well. My debugging tip is to print out the shape of the tensors at each step, but this only makes sure the shapes are correct; there may be logical errors I miss. Thank you!
@umarjamilai 1 year ago
Hi! I'd love to give a golden rule for debugging models, but unfortunately, it depends highly on the architecture/loss/data itself. One thing that you can do is, before training the model on a big dataset, it is recommended to train it on a very small dataset to make sure everything is working and the model should overfit on the small dataset. For example, if instead of training on many books, you train a LLM on a single book, hopefully it should be able to write sentences from that book, given a prompt. The second most important thing is to validate the model as the training is proceeding to verify that the quality is improving over time. Last but not least, use metrics to decide if the model is going in the right direction and make experiments on hyper parameters to verify assumptions, do not just make assumptions without validating them. When you have a model with billions of parameters, it is difficult to predict patterns, so every assumption must be verified experimentally. Have a nice day!
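P.S. A sketch of the overfitting check; train_dataset, model and loss_fn here are hypothetical stand-ins for your own objects:

import torch
from torch.utils.data import DataLoader, Subset

tiny = Subset(train_dataset, range(32))              # hypothetical: just the first 32 samples
loader = DataLoader(tiny, batch_size=8, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(200):                             # iterate until the model memorizes the subset
    for batch in loader:
        loss = loss_fn(model(batch))                 # hypothetical forward pass + loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
# if the loss doesn't approach zero here, suspect a bug in the model or the data pipeline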
@cicerochen313 10 months ago
Awesome! Highly appreciate it. Super awesome! Thank you very much.
@toxicbisht4344 8 months ago
Amazing explanation. Thank you for this.
@LeoDaLionEdits 4 months ago
Thank you so much for these videos.
@AdityaAgarwal-v3b 1 year ago
One of the best videos. Thanks a lot for the video.
@aspboss1973 1 year ago
It's a really awesome video with clear explanations, and the flow of the code is very easy to understand. One question: how would one implement this transformer architecture for a question-answering model? (Q&A on a very specific topic, let's say the manual of an instrument...) Thank you so much for this video!!!
@AmishaHSomaiya 4 months ago
Thank you very much, this is very useful.
@sypen1 10 months ago
Mate, you are a beast!
@mohsinansari3584 1 year ago
Just finished watching. Thanks so much for the detailed video. I plan to spend this weekend on coding this model. How long did it take to train on your hardware?
@umarjamilai 1 year ago
Hi Mohsin! Good job! It took around 3 hours to train 30 epochs on my computer. You can train even for 20 epochs to see good results. Have a wonderful day!
@NaofumiShinomiya 1 year ago
@@umarjamilai What is your hardware? I just started studying deep learning a few days ago, and I didn't know transformers could take this long to train.
@umarjamilai 1 year ago
@@NaofumiShinomiya Training time depends on the architecture of the network, on your hardware and the amount of data you use, plus other factors like learning rate, learning scheduler, optimizer, etc. So many conditions to factor in.
@SyntharaPrime 10 months ago
Great Job. Amazing. Thanks a lot. I really appreciate you. It is so much effort.
@andybhat5988 2 months ago
Thanks for a great video.
@user-ul2mw6fu2e 9 months ago
Wow, your explanation is amazing.
@ZhenjiaoDu 6 months ago
At 13:13 (of 2:59:23), when we build the PositionalEncoding module, in the line x = x + (self.pe[:,:x.shape[1],:]).requires_grad_(False), the x.shape[1] looks like it is never really exercised by the model: when we build dataset.py, we pad all the sentences to the same length, and then we load (batch, seq_len, input_embedding_dim) into PositionalEncoding, where x.shape[1] is always seq_len for every batch instead of varying with the original sentence length.
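(Thinking about it more: the slice does matter at inference time. If I read the repo's greedy decoding right, the decoder input grows one token per step, so there x.shape[1] really does vary. Illustrative shapes:)

import torch

pe = torch.zeros(1, 350, 512)          # precomputed up to a max seq_len
for cur_len in (1, 2, 3):              # the decoder input grows during greedy decoding
    x = torch.randn(1, cur_len, 512)
    x = x + pe[:, :x.shape[1], :]      # the slice adapts to the current length
    print(x.shape)                     # (1, 1, 512), then (1, 2, 512), then (1, 3, 512)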
@majidwasiqi4031 1 year ago
Great video. Been watching for 12 hours now. My head's about to explode. Think I lost all the attention.
@coc2912 1 year ago
Thanks for your video and code.
@GadepalliFamily-i4x 6 months ago
You are a genius.
@mohammadyahya78 2 months ago
Perfect
@user-wr4yl7tx3w 10 months ago
The code is really well written. Very easy to follow and nicely organized.
@linlinpan3150 2 months ago
For the Encoder/Decoder code: why is the last step in each a normalization layer? We wrote the ResidualConnection layers with a pre-normalization step (instead of post-normalization, as in the original Transformer paper).
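From what I can tell, this is the standard pre-norm recipe (GPT-2 does the same): with pre-normalization, the residual stream leaving the last block is never normalized, so one final LayerNorm is applied after the stack. A rough sketch of the idea, not the repo's exact class:

import torch.nn as nn

class PreNormEncoder(nn.Module):
    def __init__(self, layers, d_model=512):
        super().__init__()
        self.layers = nn.ModuleList(layers)
        self.norm = nn.LayerNorm(d_model)  # final norm: pre-norm blocks leave their output unnormalized

    def forward(self, x, mask=None):
        for layer in self.layers:
            x = layer(x, mask)
        return self.norm(x)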
@omidsa8323 8 months ago
Great Job!
@ayushlakshakar9393 11 months ago
Why can't we use PyTorch's built-in layer normalization? Why are you creating a separate class for layer normalization?
@umarjamilai 11 months ago
Hi! The video is part of the "from scratch" series, so I try to make as many things as possible from scratch. The goal is to learn the underlying mechanisms.
@pawanmoon 1 year ago
Great work!!
@txxie 9 months ago
This video is great! But can you explain how you converted the formula for the positional embeddings into log form?
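Edit: worked it out, in case anyone else wonders. It's the identity a^b = exp(b * ln(a)), so 1 / 10000^(2i/d_model) = 10000^(-2i/d_model) = exp(-(2i/d_model) * ln(10000)), which (assuming I'm reading the repo right) is exactly the div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model)) line in the code. Computing it in log space avoids raising 10000 to large powers directly, which is friendlier to floating point.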
@Schadenfreudee 1 month ago
You forgot to take the square root of the denominator on line 52 of your LayerNormalization class, in the forward method, at the 18:09 timestamp.
@SaltyYagi 1 month ago
You're right, it should be: return self.alpha * (x - mean) / (torch.sqrt(std**2 + self.eps)) + self.bias ... right?
@nobuyanishio8191 1 month ago
He omitted the square root because the code computes the standard deviation directly (std), not the variance. Remember, variance is std**2, so std is already the square root of the variance and no extra square root is needed. The only real difference from the paper's sqrt(var + eps) is that eps ends up outside the square root, which is numerically fine.
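A quick numerical check that the two forms agree to within floating-point tolerance (illustrative, using made-up tensor sizes):

import torch

x = torch.randn(4, 512)
mean = x.mean(dim=-1, keepdim=True)
std = x.std(dim=-1, keepdim=True)
eps = 1e-6

video_form = (x - mean) / (std + eps)               # eps added to std directly
paper_form = (x - mean) / torch.sqrt(std**2 + eps)  # eps inside the square root
print(torch.allclose(video_form, paper_form, atol=1e-4))  # True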
@ionut.666 7 months ago
I would suggest increasing the font size by a few points, because it is really hard to read since your resolution is high. Please take this into consideration for future videos.
@michaelscheinfeild9768 1 year ago
I enjoyed the video! Now I can transform the world!
@ZhenjiaoDu 7 months ago
Really helpful video. I watched it many times. Hope you enjoy your life in China. Best wishes for the Year of the Dragon!
@umarjamilai 7 months ago
Thank you, boss, for the "targeted poverty alleviation" 😄
@daviderizzotti2724 7 months ago
Why, at 51:08, are you applying an extra normalization at the end of the whole encoder pass? The tutorial has been amazing so far ;)
@ciliamadani3046 6 months ago
I have the same question...
@reannwu3283 11 months ago
This is a work of art.
@FireFly969 4 months ago
Thank you, Umar Jamil, for this wonderful content. To be honest, as a beginner in PyTorch, I find it hard to keep understanding each part and what happens in each line of code. I wonder what I need to know before starting one of your videos. I think I need to read the paper multiple times until I understand it?