Distributed Training with PyTorch: complete tutorial with cloud infrastructure and code

11,549 views

Umar Jamil

1 day ago

A complete tutorial on how to train a model on multiple GPUs or multiple servers.
I first describe the difference between Data Parallelism and Model Parallelism. Later, I explain the concept of gradient accumulation (including all the maths behind it). Then we get to the practical tutorial: first we create a cluster on Paperspace with two servers (each with two GPUs), and then we train a model in a distributed manner on that cluster.
We will explore the collective communication primitives (Broadcast, Reduce, and All-Reduce) and the algorithms behind them.
I also provide a template for integrating DistributedDataParallel into your existing training loop.
In the last part of the video we review advanced topics, like bucketing and computation-communication overlap during backpropagation.
Code: github.com/hkproj/pytorch-tra...
PDF slides: github.com/hkproj/pytorch-tra...
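For quick reference, here is a minimal sketch of the kind of DDP integration pattern the video walks through. It is not the code from the repository: the model, dataset, and hyperparameters below are placeholders.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for every process it spawns.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(10, 1).cuda(local_rank)   # placeholder model
    model = DDP(model, device_ids=[local_rank])       # gradients are synced automatically
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()

    dataset = TensorDataset(torch.randn(1024, 10), torch.randn(1024, 1))  # placeholder data
    sampler = DistributedSampler(dataset)             # shards the data across ranks
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    for epoch in range(3):
        sampler.set_epoch(epoch)                      # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()                           # All-Reduce of gradients happens here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Each node launches this script with torchrun, for example `torchrun --nproc_per_node=2 --nnodes=2 --node_rank=0 --rdzv_backend=c10d --rdzv_endpoint=<master-ip>:29500 train.py`, with the flag values adjusted to your cluster.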
Chapters
00:00:00 - Introduction
00:02:43 - What is distributed training?
00:04:44 - Data Parallelism vs Model Parallelism
00:06:25 - Gradient accumulation
00:19:38 - Distributed Data Parallel
00:26:24 - Collective Communication Primitives
00:28:39 - Broadcast operator
00:30:28 - Reduce operator
00:32:39 - All-Reduce
00:33:20 - Failover
00:36:14 - Creating the cluster (Paperspace)
00:49:00 - Distributed Training with TorchRun
00:54:57 - LOCAL RANK vs GLOBAL RANK
00:56:05 - Code walkthrough
01:06:47 - No_Sync context
01:08:48 - Computation-Communication overlap
01:10:50 - Bucketing
01:12:11 - Conclusion

Comments: 50
@tharunbhaskar6795
@tharunbhaskar6795 6 days ago
Dang. Never thought learning DDP would be this easy. Another great piece of content from Umar. Looking forward to FSDP.
@chiragjn101
@chiragjn101 7 months ago
Great video, thanks for creating this. I have used DDP quite a lot, but seeing the visualizations for communication overlap helped me build a very good mental model. Would love to see more content around distributed training - DeepSpeed ZeRO, Megatron DP + TP + PP.
@amishasomaiya9891
@amishasomaiya9891 2 months ago
Starting to watch my third video on this channel, after the transformer from scratch and quantization ones. Thank you for the great content, and also for the code and notes to look back at later.
@karanacharya18
@karanacharya18 2 months ago
Super high-quality lecture. You have a gift for teaching, man. Thank you!
@oliverhitchcock8436
@oliverhitchcock8436 7 months ago
Another great video, Umar. Nice work
@abdallahbashir8738
@abdallahbashir8738 3 months ago
I really love your videos. You have a natural talent for simplifying logic and code, in the same capacity as Andrej.
@user-td8vz8cn1h
@user-td8vz8cn1h 3 months ago
This is the second video I've watched from this channel, after "quantization". I frankly wanted to express my gratitude for your work, as it is very easy to follow and the level of abstraction makes it possible to understand the concepts holistically.
@vasoyarutvik2897
@vasoyarutvik2897 2 months ago
This channel is a hidden gem.
@631kw
@631kw 7 months ago
Amazing content! Thanks for sharing.
@prajolshrestha9686
@prajolshrestha9686 7 months ago
Thank you so much for this amazing video. It is really informative.
@user-wm5xv5ei8o
@user-wm5xv5ei8o 4 months ago
Very nice and informative video. Thanks!
@810602jay
@810602jay 7 months ago
Amazing learning material! Many thanks! 🥰🥰🥰
@user-jf6li8mn3l
@user-jf6li8mn3l 6 months ago
The video was very interesting and useful. Please make a similar video on DeepSpeed functionality, and in general on how to train large models (for example LLaMA SFT) on distributed multi-server systems where the GPUs are located on different PCs.
@nova2577
@nova2577 5 months ago
You deserve many more likes and subscribers!
@riyajatar6859
@riyajatar6859 4 months ago
In broadcast, if we are sending a copy of the file from the rank 0 and rank 4 nodes to the other nodes, how is the total time still 10 seconds? I still have the same network speed of 1 MB/s. Could anyone explain? I am a bit confused. Also, what happens if I have an odd number of nodes?
@svkchaitanya
@svkchaitanya 26 days ago
You always rock 😂
@user-od3ig9qt6h
@user-od3ig9qt6h 7 months ago
Thank you very much for your wonderful video. Could you make a video on how to use the Accelerate library with DDP?
@Engrbilal143
@Engrbilal143 4 months ago
Awesome video. Please make a tutorial on FSDP as well.
@felipemello1151
@felipemello1151 2 months ago
I wish I could like it twice.
@umarjamilai
@umarjamilai 2 months ago
You can share it on social media. That's the best way to thank me 😇
@felipemello1151
@felipemello1151 2 months ago
@@umarjamilai not sure if it’s in your plans, but if you are open to suggestions, I would love to watch a video on multimodal models. Again, awesome work!
@d.s.7857
@d.s.7857 7 months ago
Thank you so much for this
@loong6127
@loong6127 4 months ago
Great video
@manishsharma2211
@manishsharma2211 7 months ago
You teach soooooooo well.
@rohollahhosseyni8564
@rohollahhosseyni8564 4 months ago
great video
@madhusudhanreddy9157
@madhusudhanreddy9157 6 months ago
If time permits, please make a video covering GPUs and TPUs and how to use them effectively, since most of us don't know. Please also create a PyTorch playlist for beginners and intermediates. Thanks for reading.
@mandarinboy
@mandarinboy 5 months ago
Great intro video. Do you have any plans to also cover other forms of parallelism: model, pipeline, tensor, etc.?
@user-el4uh3uk2k
@user-el4uh3uk2k 4 months ago
fantastic
@mdbayazid6837
@mdbayazid6837 7 months ago
Federated learning basics please.❤
@ramprasath6424
@ramprasath6424 7 months ago
Please do something related to large audio models like Conformer, QuartzNet, etc.
@Yo-rw7mq
@Yo-rw7mq 3 months ago
Great!
@hellochli
@hellochli 6 months ago
Thanks!
@umarjamilai
@umarjamilai 6 months ago
Thank you! Let's connect on LinkedIn.
@waynelau3256
@waynelau3256 3 months ago
Working with FSDP and Megatron now, and I really want to figure this out from scratch haha. It sounds fun, but it's a big headache.
@user-fw5sg5mx4m
@user-fw5sg5mx4m 2 months ago
Could you provide more videos on model parallelism and pipeline parallelism? Thanks.
@sounishnath513
@sounishnath513 7 months ago
SUUUPERRRR
@Allen-TAN
@Allen-TAN 7 months ago
Always great to watch your videos, excellent work.
@tryit-wv8ui
@tryit-wv8ui 7 months ago
another banger
@ai__76
@ai__76 2 months ago
How do we do this in Kubernetes? Please explain.
@Erosis
@Erosis 7 months ago
Wouldn't the accumulated gradient need to be divided by the total number of individual gradients summed (or the learning rate divided by this value) to make it equivalent?
@umarjamilai
@umarjamilai 7 months ago
Yes, if you want to treat the "cumulative gradient" as a big batch, then you'd usually divide it by the number of items to keep it equivalent to the single-item setup. But it's not mandatory: as a matter of fact, loss functions in PyTorch have a "reduction" parameter, which is usually set to "mean" (dividing the loss by the number of items) but can also be set to "sum". One reason we usually calculate the "mean" loss is that we want to compare models with different hyperparameters (batch size), so the loss should not depend on the batch size. But remember that, mathematically, you don't have to.
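To make the reply above concrete, here is a minimal sketch of gradient accumulation with the conventional normalization; the model, data, and number of accumulation steps are placeholders.

```python
import torch

# Placeholder model, optimizer and data, purely for illustration.
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = torch.nn.MSELoss(reduction="mean")  # "mean" averages within each micro-batch
data = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(8)]

accum_steps = 4
optimizer.zero_grad()
for step, (x, y) in enumerate(data):
    loss = loss_fn(model(x), y)
    # Dividing by accum_steps makes the accumulated gradient match the gradient
    # of the mean loss over the large "virtual" batch. As the reply says, this
    # normalization is conventional, not mandatory.
    (loss / accum_steps).backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```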
@khoapham7303
@khoapham7303 7 months ago
I'm always confused by DP and DDP. Can you please tell me the difference between them, given that both belong to the data parallelism family?
@umarjamilai
@umarjamilai 7 months ago
DP only works on a single machine, while DDP can work on multiple machines. However, PyTorch now recommends using DDP even for single-machine setups.
@khoapham7303
@khoapham7303 7 months ago
@@umarjamilai thank you for your reply
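As an illustration of the difference described in the reply above, here is a small sketch contrasting the two wrappers. The model is a placeholder, and the DDP lines (commented out) assume a process group launched by something like torchrun.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # placeholder model

if torch.cuda.is_available():
    # DP: a single process on a single machine. The model is replicated across
    # the visible GPUs on every forward pass, with one process scattering inputs
    # and gathering outputs, which bottlenecks on the primary GPU.
    dp_model = nn.DataParallel(model.cuda())

# DDP: one process per GPU, launched e.g. with torchrun. It also works across
# machines and synchronizes gradients with All-Reduce during backward().
# import torch.distributed as dist
# dist.init_process_group(backend="nccl")
# ddp_model = nn.parallel.DistributedDataParallel(
#     model.cuda(local_rank), device_ids=[local_rank]
# )
```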
@madhusudhanreddy9157
@madhusudhanreddy9157 6 months ago
Hi Umar, great video, I enjoyed it thoroughly, but I have one question: why are we using the approach of summing the gradients (grad1 + grad2 + ... + gradN)? Why can't we use the average of the gradients?
@umarjamilai
@umarjamilai 6 months ago
Of course you can (but you don't have to) use the average of the gradients. Actually, people usually take the average of the gradients. The reason we use the average is that we want the loss to be (more or less) the same as for the non-distributed model, so you can compare the plots of the two. I don't know if PyTorch internally takes the average of the gradients automatically; I'd have to check the documentation/source.
@madhusudhanreddy9157
@madhusudhanreddy9157 6 months ago
@@umarjamilai thanks for the info.
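For anyone curious what averaging gradients across ranks looks like when written out with the collective primitives from the video, here is a minimal manual sketch. It assumes the process group is already initialized and loss.backward() has been called; when using DistributedDataParallel, gradient synchronization happens for you during the backward pass.

```python
import torch.distributed as dist

def average_gradients(model):
    """Sum each parameter's gradient across all ranks, then divide by the world size."""
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # All-Reduce with SUM leaves the summed gradient on every rank...
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            # ...and dividing by the number of ranks turns the sum into an average.
            param.grad /= world_size
```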
@user-ze3ok8hh6c
@user-ze3ok8hh6c 7 months ago
Do you have a Discord channel?
@milonbhattacharya4097
@milonbhattacharya4097 5 months ago
Shouldn't the loss be accumulated? loss += (y_pred - y_actual)^0.5
@user-pt7gs2ei1r
@user-pt7gs2ei1r 5 months ago
In my understanding, yes, theoretically the loss is accumulated for one batch, and the gradients are computed based on this accumulated loss too. But in the parallel implementation, both the loss calculated in the feed-forward pass and the gradients calculated in the backpropagation pass are executed in parallel. Here @umarjamilai uses a for loop to illustrate the de facto parallel mechanism.