This is the best video I’ve watched on distributed training
@abhirajkanse6418 4 days ago
That makes things very clear! Thanks a lot!!
@yixiaoli6786 6 months ago
The best video of FSDP. Very clear and helpful!
@chenqian3404 11 months ago
To me this is by far the best video explaining how FSDP works, thanks a lot!
@dhanvinmehta3294 1 month ago
Thank you very much for making such a knowledge-dense, yet self-contained video!
@mahmoudelhage6996 2 months ago
As a Machine Learning Research Engineer working on fine-tuning LLMs, I normally use DDP or DeepSpeed, and I wanted to understand more about how FSDP works. This video is well structured and provides a detailed explanation of FSDP; I totally recommend it. Thanks Ahmed for your effort :)
@phrasedparasail9685 9 days ago
You and I both friend
@MrLalafamily 9 months ago
Thank you so much for investing your time in creating this tutorial. I am not an ML engineer, but I wanted to build intuition around parallelizing computation across GPUs and your video was very helpful. I especially liked that you provided multiple examples for parts that were a bit more nuanced. I paused the video many times to think things over. Again, gratitude as a learner
@yuxulin1322 6 months ago
Thank you so much for such detailed explanations.
@xxxiu13 2 months ago
A great explanation of FSDP indeed. Thanks for the video!
@AntiochSanders 1 year ago
Wow, this is a super good explanation; it cleared up a lot of misconceptions I had about FSDP.
@lazycomedy9358 8 months ago
This is really clear and helped me understand a lot of details in FSDP!! Thanks
@pankajvermacr7 1 year ago
Thanks for this! I was having trouble understanding FSDP; even after reading the research paper it was hard to follow. I really appreciate your effort. Please make more such videos.
@bharadwajchivukula2945 11 months ago
Crisp and amazing explanation, the best so far.
@saurabhpawar2682 9 months ago
Excellent explanation. Thank you so much for putting this out!
@amansinghal5908 4 months ago
Great video! One recommendation: make three videos, one like this, one that goes deeper into the implementation (e.g., the FSDP code), and finally one on how to use it (e.g., case studies).
@mandeepthebest 2 months ago
amazing video! very well articulated.
@tharunbhaskar6795 3 months ago
The best explanation so far
@NachodeGregorio 7 months ago
Amazing explanation, well done.
@yuvalkirstain7190 8 months ago
Fantastic presentation, thank you!
@ElijahTang-t1y 1 month ago
well explained, great job!
@dan1ar 1 month ago
Great video!
@RaviTeja-zk4lb 11 months ago
I was struggling to understand how FSDP works, and your video helped me a lot. Thank you. After understanding what these backends are, I see that FSDP definitely requires a GPU: for CPU we use 'gloo' as the backend, and it doesn't support reduce-scatter. It would be great if you also covered parameter-server training using the RPC framework.
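The reduce-scatter collective mentioned above can be sketched in pure Python (no torch; `reduce_scatter` here is a hypothetical stand-in for the real collective): every rank contributes its full local gradient, and each rank receives one shard of the element-wise sum.

```python
# Pure-Python sketch of what reduce-scatter computes, independent of backend.
# Hypothetical setup: 4 ranks, each holding a full 8-element local gradient.
WORLD_SIZE = 4

def reduce_scatter(per_rank_grads):
    """Sum gradients element-wise across ranks, then give each rank one equal shard."""
    n = len(per_rank_grads[0])
    summed = [sum(g[i] for g in per_rank_grads) for i in range(n)]
    shard = n // len(per_rank_grads)
    return [summed[r * shard:(r + 1) * shard] for r in range(len(per_rank_grads))]

grads = [[float(r)] * 8 for r in range(WORLD_SIZE)]  # rank r's local gradient
shards = reduce_scatter(grads)
# Each rank ends up with a 2-element shard of the summed gradient (0+1+2+3 = 6).
```

In FSDP this is how, after the backward pass, each rank ends up owning only its own shard of the summed gradients instead of a full copy.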
@gostaforsum6141 1 month ago
Great explanation!
@amirakhlaghi8143 4 months ago
Excellent presentation
@coolguy69235 10 months ago
Very good video! Seriously, keep up the good work!
@phrasedparasail9685 9 days ago
This is amazing
@Veekshan95 11 months ago
Amazing video with great visual aids and an even better explanation. I just had one question: at 24:45 you mentioned that FSDP layer 0 is never freed until the end. Does this mean the GPUs will hold layer 0 the whole time and, in addition, gather the other layers as needed?
@ahmedtaha8848 11 months ago
Yes, Unit 0 (layer 0 + layer 3) -- which is the outermost FSDP unit -- will be available across all nodes (GPUs) during an entire training iteration (forward + backward). Quoting from arxiv.org/pdf/2304.11277 (Page #6), "Note that the backward pass excludes the AG0 All-Gather because FSDP intentionally keeps the outermost FSDP unit’s parameters in memory to avoid redundantly freeing at the end of forward and then re-All-Gathering to begin backward."
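That schedule can be sketched as a toy simulation (pure Python, simplified to one all-gather/free event per unit and ignoring compute overlap): unit 0 is all-gathered once and freed only at the very end, while units 1 and 2 are gathered and freed in both passes.

```python
# Sketch of FSDP's all-gather/free schedule, assuming the video's wrapping:
# unit 0 is the outermost unit (layers 0 and 3), units 1 and 2 are nested.
forward_order = [0, 1, 2]       # units entered during the forward pass
backward_order = [2, 1, 0]      # units entered during the backward pass

events = []
resident = set()                # units whose full parameters are in memory

def enter(unit):
    if unit not in resident:    # all-gather this unit's full parameters
        events.append(f"AG{unit}")
        resident.add(unit)

def leave(unit, backward_done=False):
    # The outermost unit (0) intentionally stays resident until backward ends.
    if unit != 0 or backward_done:
        events.append(f"FREE{unit}")
        resident.discard(unit)

for u in forward_order:
    enter(u)
    leave(u)
for u in backward_order:
    enter(u)
    leave(u, backward_done=(u == 0))
```

Running this yields one AG0 for the whole iteration but two AG2 events (one per pass), which matches the paper's quote above.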
@yatin-arora 4 months ago
well explained 👏
@p0w3rFloW 10 months ago
Awesome video! Thanks for sharing
@TrelisResearch 6 months ago
Great video, congrats
@hannibal0466 5 months ago
Awesome, bro! One short question: in the example shown (24:06), why are there two consecutive AG2 stages?
@ahmedtaha8848 5 months ago
Thanks! One is for the forward pass and the other is for the backward pass. I suppose you could write a special handler for the last FSDP unit to avoid freeing the parameters and then re-gathering them. Yet imagine if FSDP unit #0 had another layer (layer #6) after FSDP unit #2, i.e., (layer #0, layer #3, layer #6) in total. That special handler wouldn't look so wise then.
@clarechen1590 2 months ago
great video!
@ManishPrajapati-o4x 1 month ago
TY!
@louiswang538 3 months ago
How is FSDP different from gradient accumulation? It seems both use mini-batches to get 'local gradients' and sum them up into a global gradient for the model update.
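A toy numeric sketch of the mechanical difference (the 'gradient' below is a hypothetical stand-in, just the batch mean): gradient accumulation sums micro-batch gradients sequentially on one device, while data parallelism averages per-rank gradients with a single all-reduce across devices; FSDP further shards the parameters, gradients, and optimizer state across those devices.

```python
# Hypothetical toy "gradient" of a batch: just its mean.
def grad(batch):
    return sum(batch) / len(batch)

data = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]

# Gradient accumulation: ONE device loops over the micro-batches in sequence.
accumulated = sum(grad(b) for b in data) / len(data)

# Data parallelism: each of 4 ranks takes one micro-batch in parallel,
# then the per-rank gradients are averaged in a single all-reduce.
per_rank = [grad(b) for b in data]
all_reduced = sum(per_rank) / len(per_rank)
```

Both paths produce the same global gradient; the difference is where the work and the memory live, and FSDP on top of this keeps only a shard of the model on each device.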
@adamlin120 1 year ago
Amazing explanation 🎉🎉🎉
@ahmedtaha8848 1 year ago
Thanks!
@AIwithAniket 1 year ago
It helped a lot. Thank you so much!
@ahmedtaha8848 1 year ago
Thanks!
@bennykoren2121 1 month ago
Excellent!
@mohammadsalah2307 1 year ago
Thanks for sharing! 19:19: the first FSDP unit to run the forward pass is FWD0; however, this FSDP unit contains layer 0 and layer 3. How could we compute the result of layer 3 without computing the results of layers 1 and 2 first?
@ahmedtaha8848 1 year ago
Layer 3 is computed only after computing layers 1 and 2. Please note that there are two 'FWD0' stages: the first computes layer 0; the second computes layer 3 after FWD1 (layers 1 and 2) finishes.
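That ordering can be written out as a toy trace (hypothetical labels, following the video's wrapping of layers 0 and 3 into unit 0):

```python
# Sketch of the execution order behind the two 'FWD0' stages, assuming the
# video's wrapping: unit 0 owns layers 0 and 3 directly, while unit 1
# (layers 1-2) and unit 2 (layers 4-5) are nested inside it.
trace = []

def run_layer(unit, layer):
    trace.append(f"FWD{unit}:layer{layer}")

# The forward pass walks the layers in order; the owning unit changes along
# the way, so unit 0 shows up twice (layer 0 first, layer 3 only after unit 1).
for unit, layer in [(0, 0), (1, 1), (1, 2), (0, 3), (2, 4), (2, 5)]:
    run_layer(unit, layer)
```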
@aflah7572 3 months ago
Thank You!
@santiagoruaperez7394 7 months ago
Hi, I want to ask you something. At 3:01 you also include the optimizer state in the multiplication for each parameter. Isn't the optimizer state just one for the whole model? What I mean is: a 13B model will have more gradients than a 7B model, but does the optimizer state also depend on the number of parameters?
@ahmedtaha8848 7 months ago
The optimizer state is not just one for the whole model. A 13B model has both more gradients and more optimizer state than a 7B model. Yes, the optimizer state depends on the number of parameters. For the Adam optimizer, the optimizer state (slides 4 & 5) includes both a momentum (first moment) and a variance (second moment) for each gradient, i.e., for each parameter.
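A back-of-the-envelope sketch of that scaling, assuming full fp32 training where Adam keeps two extra values per parameter (mixed-precision recipes change these multipliers):

```python
# Rough fp32 accounting: per parameter, 4 bytes each for the weight, its
# gradient, Adam's first moment, and Adam's second moment (16 bytes total).
BYTES = 4

def training_memory_gb(n_params):
    weights   = n_params * BYTES
    gradients = n_params * BYTES
    adam_m    = n_params * BYTES    # first moment (momentum)
    adam_v    = n_params * BYTES    # second moment (variance)
    return (weights + gradients + adam_m + adam_v) / 1e9

mem_7b  = training_memory_gb(7e9)   # a 7B model
mem_13b = training_memory_gb(13e9)  # a 13B model: everything scales up
```

So every extra parameter drags along a gradient and two Adam values, which is exactly why a 13B model's optimizer state is larger than a 7B model's.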
@santiagoruaperez7394 7 months ago
Amazing video, super clear @ahmedtaha8848
@maxxu8818 7 months ago
Hello Ahmed, if it's 4-way FSDP in a node, does that mean only 4 GPUs are used in that node? Usually there are 8 GPUs in a node; how are the other 4 GPUs used? Thanks!
@dhineshkumarr3182 10 months ago
Thanks man!
@richeshc 9 months ago
Namaste, a doubt about pipeline parallelism (minutes 10 to 12). You mentioned that while we send the outputs of GPU 1's forward pass on mini-batch 1 to GPU 2, we start GPU 1's forward pass on mini-batch 2. My doubt is: isn't it supposed to be the forward pass, followed by back-propagation and a weight update, and only then training on batch 2? The wording suggests we start the forward pass on GPU 1 with mini-batch 2 as soon as we transfer the mini-batch 1 results from GPU 1 to GPU 2.
@ahmedtaha8848 9 months ago
For mini-batch 1, we can't do back-propagation until we compute the loss, i.e., until mini-batch 1 passes through all layers/blocks. The same holds for mini-batch 2. After computing the loss for mini-batch 1, we can back-propagate one layer/block at a time on different GPUs, and of course update the gradients. Yet, again, the other GPUs would remain idle if we processed (forward/backward) only a single mini-batch. Thus, it is better to work with multiple mini-batches, each with its own loss value. These mini-batches are forwarded/back-propagated on different GPUs in parallel.
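The overlap described here can be sketched as a toy GPipe-style forward schedule (pure Python; the backward pass and pipeline bubble are omitted for brevity): stage g starts micro-batch m one step after stage g-1 finishes it, so the stages work concurrently instead of idling.

```python
# Toy pipeline-parallel forward schedule: 4 stages (GPUs), 4 micro-batches.
STAGES, MICRO_BATCHES = 4, 4

# schedule[t] lists the (stage, micro_batch) pairs active at time step t.
schedule = {}
for m in range(MICRO_BATCHES):
    for g in range(STAGES):
        t = m + g                 # micro-batch m reaches stage g at step m+g
        schedule.setdefault(t, []).append((g, m))

# With overlap the forward pass takes 7 steps; running the 16 (stage,
# micro-batch) pairs strictly one at a time would take 16.
total_steps = max(schedule) + 1
```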
@amaleki 1 month ago
There is a bit of a discrepancy with how model parallelism is defined in the Nvidia literature. Namely, in the Nvidia literature, model parallelism is the overarching idea of sharding the model, and it can take two forms: i) tensor model parallelism (splitting layers between GPUs, so each GPU gets a portion of each layer) and ii) pipeline parallelism (each GPU is responsible for computing some layers entirely).
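The tensor-MP form can be sketched in a few lines: split one layer's weight across its output dimension over two simulated GPUs, compute partial outputs, and concatenate; the result matches the unsplit layer. (A hypothetical 4x2 weight matrix; real implementations partition much larger matmuls the same way.)

```python
# Tensor model parallelism sketch: each simulated GPU owns some rows of W,
# i.e., a slice of the layer's output dimension.
def matvec(W, x):
    """Multiply each row of W by vector x (a toy linear layer, no bias)."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

W = [[1, 2], [3, 4], [5, 6], [7, 8]]    # 4 outputs, 2 inputs
x = [1, 1]

gpu0_out = matvec(W[:2], x)             # "GPU 0" holds rows 0-1
gpu1_out = matvec(W[2:], x)             # "GPU 1" holds rows 2-3
tensor_parallel_out = gpu0_out + gpu1_out   # gather the partial outputs

full_out = matvec(W, x)                 # the unsplit layer, for comparison
```

Pipeline parallelism, by contrast, would give each GPU whole layers and pass activations between them, as in the video's minutes 10 to 12.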
@RedOne-t6w 7 months ago
Awesome
@DevelopersHutt 6 months ago
TY
@piotr780 8 months ago
How is the whole gradient calculated if the weights don't fit in a single GPU's memory?