Mesh-TensorFlow: Model Parallelism for Supercomputers (TF Dev Summit ‘19)

16,564 views

TensorFlow

1 day ago

Batch-splitting (data-parallelism) is the dominant distributed Deep Neural Network (DNN) training strategy, due to its universal applicability and its amenability to Single-Program-Multiple-Data (SPMD) programming. However, batch-splitting suffers from problems including the inability to train very large models (due to memory constraints), high latency, and inefficiency at small batch sizes. All of these can be solved by more general distribution strategies (model-parallelism). Unfortunately, efficient model-parallel algorithms tend to be complicated to discover, describe, and implement, particularly on large clusters. We introduce Mesh-TensorFlow, a language for specifying a general class of distributed tensor computations. Where data-parallelism can be viewed as splitting tensors and operations along the "batch" dimension, in Mesh-TensorFlow the user can specify any tensor dimensions to be split across any dimensions of a multi-dimensional mesh of processors. A Mesh-TensorFlow graph compiles into an SPMD program consisting of parallel operations coupled with collective communication primitives such as Allreduce. We use Mesh-TensorFlow to implement an efficient data-parallel, model-parallel version of the Transformer sequence-to-sequence model. Using TPU meshes of up to 512 cores, we train Transformer models with up to 5 billion parameters, surpassing state-of-the-art results on the WMT'14 English-to-French translation task and the one-billion-word language modeling benchmark. Mesh-TensorFlow is available at github.com/tensorflow/mesh.
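To make the description above concrete, here is a minimal sketch of defining and laying out a tiny two-layer model in Mesh-TensorFlow, loosely following the example in the github.com/tensorflow/mesh README. The exact calls (mtf.einsum, PlacementMeshImpl, the mesh/layout names) are assumptions based on that repository, not quotes from the talk, so check them against the current README before relying on them.

```python
# Minimal Mesh-TensorFlow sketch (assumed API; see github.com/tensorflow/mesh).
# Every tensor dimension is named, and the layout maps named tensor dimensions
# onto named mesh dimensions. Mapping "batch" -> "all" gives data-parallelism;
# mapping a model dimension such as "hidden" instead (or in addition, on a 2D
# mesh) gives model-parallelism.
import tensorflow.compat.v1 as tf
import mesh_tensorflow as mtf

graph = mtf.Graph()
mesh = mtf.Mesh(graph, "my_mesh")

# Named dimensions instead of anonymous shape entries.
batch_dim = mtf.Dimension("batch", 64)
io_dim = mtf.Dimension("io", 512)
hidden_dim = mtf.Dimension("hidden", 2048)

tf_x = tf.random.normal([64, 512])
x = mtf.import_tf_tensor(mesh, tf_x, shape=mtf.Shape([batch_dim, io_dim]))

w1 = mtf.get_variable(mesh, "w1", mtf.Shape([io_dim, hidden_dim]))
w2 = mtf.get_variable(mesh, "w2", mtf.Shape([hidden_dim, io_dim]))

h = mtf.relu(mtf.einsum([x, w1], output_shape=mtf.Shape([batch_dim, hidden_dim])))
y = mtf.einsum([h, w2], output_shape=mtf.Shape([batch_dim, io_dim]))
loss = mtf.reduce_mean(mtf.square(y - x))

# The layout decides what gets split where; the lowered program is SPMD, with
# all-reduces inserted automatically wherever a split dimension is reduced.
devices = ["gpu:0", "gpu:1", "gpu:2", "gpu:3"]
mesh_shape = [("all", 4)]
layout_rules = [("batch", "all")]   # data-parallel; e.g. ("hidden", "all") for model-parallel
mesh_impl = mtf.placement_mesh_impl.PlacementMeshImpl(
    mesh_shape, layout_rules, devices)

lowering = mtf.Lowering(graph, {mesh: mesh_impl})
tf_loss = lowering.export_to_tf_tensor(loss)
```

The point of the named-dimension design is that switching between data-parallel, model-parallel, or mixed layouts only changes the layout_rules line, not the model definition.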
Speaker: Noam Shazeer, Google
See the revamped dev site → www.tensorflow.org/
Watch all TensorFlow Dev Summit '19 sessions → bit.ly/TFDS19Sessions
Subscribe to the TensorFlow YouTube channel → bit.ly/TensorFlow1
Music by Terra Monk → bit.ly/TerraMonkTFDS
Event Photo Album → bit.ly/TFSummit19

Comments: 14
@ojbk 1 year ago
So great!
@conflagration95 4 years ago
I imagine we can also split some layers by h and some layers by d?
@azrathashemi1523 3 years ago
Please refresh the timeline with this design
@AidanGomez 5 years ago
WOOOOOOOOOO Noammmm!
@abcfy2 5 years ago
No subtitles for this video?
@krzysiek2768 5 years ago
added
@stephennfernandes 2 years ago
At 6:49 he says the activations have a batch dimension and the E parameters have a batch dimension?? Is that correct? I used to think batch size is an independent dimension when defining a model and that it is initialised for all parameters, including W and the activation parameters.
@pierricklee8520 2 years ago
Yes, batch size is an independent dimension: the input to the whole model has a batch dimension, and so does the output of every layer, i.e. the activations. Activations are not parameters, and you don't need to keep them as trainable variables at all; what you train are W and V, not X (the input) or Y (the output). Parameters like W and V don't have a batch dimension, because every example in a batch is multiplied by the same W or V.
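A tiny shape check of this point, in plain NumPy (the names X, W, batch, d_model, d_ff are illustrative, not the talk's exact variables):

```python
import numpy as np

batch, d_model, d_ff = 32, 512, 2048

X = np.random.randn(batch, d_model)   # activations: carry the batch dimension
W = np.random.randn(d_model, d_ff)    # parameters: no batch dimension

Y = X @ W                             # every example in the batch reuses the same W
print(Y.shape)                        # (32, 2048): output activations keep the batch dimension
```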
@abunickabhi 5 years ago
Yay
@azrathashemi1523 3 years ago
So make sure my five sensors are programmed by tensorflow supercomputer federated learning deep learning machine learning replacement because Bluetooth sensors are the very best definition for neural linguistics programming like FBI's co Intel pro Super computer NLP from the fifties
@chrisminnoy3637 4 years ago
I'm not convinced. Let me say why. Only a happy few, very few, have the possibility to use a supercomputer or a TPU for that matter. But most of us already have access to a cluster of non-homogeneous nodes: some nodes faster and more powerful, some quite slow but maybe with more memory/disk space. It makes more sense to have TensorFlow detect capabilities, detect latencies, and build a graph that best fits that cluster. That way everyone could use model parallelism AND data parallelism with affordable equipment. One might take this a step further and even include nodes over the internet, without needing fiber, if the graphs are set up right. But this needs to be automated, whereas now it is pure manual work.
@gauravvij137 3 years ago
I believe one of the challenges with heterogeneity is the use of collective communication at the end, i.e. all-reduce: the fastest nodes end up waiting for the slowest node so that the outputs can be collected and the gradients redistributed.
@saltcl9101 3 years ago
Mostly, we don't need to train giant models.