A friendly introduction to distributed training (ML Tech Talks)

45,536 views

TensorFlow

Comments: 27
@TensorFlow 3 years ago
Learn more:
Mesh TensorFlow → goo.gle/3sFPrHw
Distributed Training with Keras tutorial → goo.gle/3FE6QEa
GCP Reduction Server Blog → goo.gle/3EEznYB
Multi Worker Mirrored Strategy tutorial → goo.gle/3JkQT7Y
Parameter Server Strategy tutorial → goo.gle/2Zz3UrW
Distributed training on GCP Demo → goo.gle/3pABNDE
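As a minimal companion to the Keras distributed training tutorial linked above, the sketch below shows single-machine, multi-GPU training with tf.distribute.MirroredStrategy; the toy model, random data, and batch size are illustrative assumptions, not anything from the talk.

import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU (or the CPU if
# none is present) and all-reduces the gradients across replicas each step.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Variables must be created inside the strategy scope so they are mirrored.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Toy data; scale the global batch size with the number of replicas.
x = tf.random.normal((1024, 10))
y = tf.random.normal((1024, 1))
model.fit(x, y, batch_size=64 * strategy.num_replicas_in_sync, epochs=2)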
@giovannimurru 2 years ago
Wow Ring All-Reduce is just... beautiful. 😍
@SiChRa 16 days ago
Correction: At 14:37, there is no C4 in GPU-1. It should be C2.
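For anyone double-checking the chunk indices around that point in the talk, here is a small framework-free sketch of the ring all-reduce idea (a reduce-scatter pass followed by an all-gather pass); the worker count and chunk layout below are illustrative assumptions, not the talk's exact notation.

import numpy as np

def ring_all_reduce(tensors):
    """Sum equally shaped tensors from n workers arranged in a ring."""
    n = len(tensors)
    # Each worker splits its tensor into n chunks.
    chunks = [np.array_split(t.astype(float), n) for t in tensors]

    # Reduce-scatter: in step s, worker i sends chunk (i - s) mod n to worker
    # i + 1, which adds it to its own copy. After n - 1 steps, each worker
    # holds one chunk containing the full sum.
    for s in range(n - 1):
        for i in range(n):
            c = (i - s) % n
            chunks[(i + 1) % n][c] = chunks[(i + 1) % n][c] + chunks[i][c]

    # All-gather: circulate the fully reduced chunks around the ring so that
    # every worker ends up with all of them.
    for s in range(n - 1):
        for i in range(n):
            c = (i + 1 - s) % n
            chunks[(i + 1) % n][c] = chunks[i][c]

    return [np.concatenate(c) for c in chunks]

grads = [np.ones(8) * (k + 1) for k in range(4)]  # gradients on 4 workers
print(ring_all_reduce(grads)[0])                  # worker 0's result: the elementwise sum (all 10s)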
@user-wr4yl7tx3w 1 year ago
This is really well explained. More in this series, please. Thanks!
@kkmm4179 5 months ago
Awesome video, crystal-clear content design and easy to understand.
@lukasvandewiel860 8 months ago
Thank you very much for the insightful presentation.
@chaaboudd2272 1 year ago
Well done. I hope to see more videos.
@VanWarren 3 years ago
At time code 15:23, in the Ring All-Reduce algorithm, the subscripts for the c vector are incorrect.
@giovannimurru 2 years ago
there is a bomb hidden in the algorithm LOL
@extrememike 3 years ago
Good and easy-to-understand explanations!
@hasithkashyapa7645 3 years ago
Straightforward and awesome
@amanvishnoi2721 2 years ago
Awesome and easy to understand.
@ahmadnoroozi8101 2 years ago
Do the GPUs have to be identical? What happens if I use different GPUs?
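MirroredStrategy does not appear to require identical GPUs, but because training is synchronous each step waits for the slowest replica, so mixed cards tend to idle the faster one. A minimal sketch of pinning the strategy to explicit devices; the device names are assumptions for illustration and presume two visible GPUs.

import tensorflow as tf

# Restrict the strategy to explicitly chosen devices; with mixed GPUs the
# synchronous all-reduce makes each step run at the pace of the slowest one.
strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])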
@hannibalbra1216 3 years ago
Hello, are there any docs about federated learning with differential privacy? Thank you.
@Ajaytshaju 8 months ago
What if I don't have GPUs as mentioned in the video? I have 32 systems with i5 CPUs. Can I run this mirrored strategy across multiple CPUs?
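For a CPU-only cluster like this, the usual route is the multi-worker variant of the mirrored strategy, which places replicas on the CPU device when no GPU is visible. A minimal sketch; the host names, port, and worker index are placeholder assumptions, and the same script would run on each machine with only the index changed.

import json
import os
import tensorflow as tf

# TF_CONFIG tells every machine who the workers are and which one it is.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1:12345", "host2:12345"]},  # one entry per machine
    "task": {"type": "worker", "index": 0},  # 0 on the first machine, 1 on the second, ...
})

# With no GPUs visible, each worker places its replica on the CPU device,
# so the same code runs unchanged on a CPU-only cluster.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
    model.compile(optimizer="sgd", loss="mse")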
@user-wr4yl7tx3w 1 year ago
Great content. Look forward to more.
@TensorFlow 1 year ago
Thank you! Want to watch more? Check out this playlist, ML Tech Talks → goo.gle/ml-tech-talks
@sudhanshuhate5633 2 years ago
Nicely explained!
@tricialobo9233 1 year ago
nicely done!
@NoDoubt747 1 year ago
wow, great job!
@shivangitomar5557 2 years ago
Amazing!!
@advanceprogramming225 7 months ago
Thank you
@abdourahmanebalde9153 3 years ago
Thanks!
@TheHandbook2802 11 months ago
thanks
@liuauto 2 years ago
That tf.distribute.experimental namespace worries me... not sure when the API will be deprecated.
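For what it is worth, newer TensorFlow releases (roughly 2.6 onward) expose stable, non-experimental entry points for these strategies, so the experimental namespace is mostly a concern on older versions. A short sketch of the stable names, offered as an assumption about current releases rather than a guarantee.

import tensorflow as tf

# Stable entry points in recent TF versions, replacing the
# tf.distribute.experimental.* classes of the same name.
multi_worker = tf.distribute.MultiWorkerMirroredStrategy()
# tf.distribute.ParameterServerStrategy(cluster_resolver) similarly needs a
# cluster resolver describing workers and parameter servers; omitted here.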
@bhuvandwarasila 2 months ago
Thanks