Walk with fastai, all about Hugging Face Accelerate

2,131 views

Zachary Mueller

A year ago

A small snippet from my course Walk with fastai: Revisited (store.walkwithfastai.com), where I discuss Hugging Face Accelerate, a project I work on.
Documentation: hf.co/docs/accelerate
Github: github.com/huggingface/accele...
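
For context, the core pattern Accelerate provides (and what the docs above walk through) is wrapping an ordinary PyTorch training loop so the same script runs on CPU, a single GPU, or several. A minimal sketch, with a toy model, dataset, and optimizer standing in as placeholders, looks roughly like this:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Toy placeholders; in practice these are your real model, data, and optimizer.
model = torch.nn.Linear(16, 2)
dataset = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
dataloader = DataLoader(dataset, batch_size=8)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

accelerator = Accelerator()  # picks up device/distributed settings from how the script was launched
# prepare() moves everything to the right device(s) and shards the dataloader across processes
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, labels in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    accelerator.backward(loss)  # used in place of loss.backward() so mixed precision / DDP work
    optimizer.step()
```

Running the script through accelerate launch (optionally after accelerate config) scales it to multiple GPUs; the loop itself stays unchanged.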

Comments: 1
@giovannibonetta3949, 4 months ago
Hi Zach, very nice presentation, you make it seem easy! Since I stumbled on this video by chance (YouTube algorithm, and I thank it for that), I feel I may push my luck even further and puzzle you with a tricky (for me) question: how do you use Accelerate if you want to train more than one model instance, i.e. distributed training? I envision this chat...
[Zach] - Just launch with accelerate launch --num_processes=2 et voilà. This loads 2 instances on 2 GPUs and you can train distributed.
[dumb me] - BUT, what if even a single batch does not fit in GPU memory?
[Zach] - Make it smaller.
[dumb me] - BUT, unfortunately I cannot, since I need the entire batch to gather the logits of all the samples within it and train a model to choose the best (like in an NLP multiple-choice setting). If I cannot put the entire set of possibilities in the batch, I am screwed.
[Zach] - Just use Accelerate and the HF dataset integration via accelerator.prepare(dataset, model) and it will split and dispatch/gather minibatches like a charm. (Not sure about this answer, though.)
[dumb me] - BUT, what if you cannot really use the Accelerate and HF dataset integration for smart distributed batching, because you are doing RL and you don't really have a usual dataset?
[Zach] - ... not sure what the problem is.
[dumb me] - Yes, maybe I am not explaining the problem well, but I asked for help about it on Discord, and if you are still reading, maybe you would like to glance at the whole story here: discord.com/channels/879548962464493619/1201516075389554778/1224650024793935895 which, by the way, is shorter than this post, which I enjoyed writing... (NOTE: taking this conversation and feeding it to LLaMA, Mistral and company did not solve the problem.) :)
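
A note for readers of the thread above: the call the comment guesses at is normally written against a DataLoader rather than a raw dataset. The sketch below, using a hypothetical scoring model and candidate set, shows how prepare() shards the data across processes and how gather() brings the per-process logits back together; it does not answer the RL-without-a-dataset question.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

# Hypothetical multiple-choice setup: score every candidate, then pick the best.
scorer = torch.nn.Linear(16, 1)
candidates = TensorDataset(torch.randn(32, 16))  # stand-in for the real candidate set
loader = DataLoader(candidates, batch_size=8)

# prepare() shards the dataloader so each process handles a different slice of the
# candidates per step, instead of one GPU holding the entire set at once.
scorer, loader = accelerator.prepare(scorer, loader)

scores = []
for (batch,) in loader:
    logits = scorer(batch)
    # gather() concatenates the logits from all processes, so every rank ends up
    # seeing scores for the candidates that were handled elsewhere too.
    scores.append(accelerator.gather(logits))
scores = torch.cat(scores)
```

For the case where there is no DataLoader at all (e.g. a list of prompts in an RL loop), Accelerate also offers the accelerator.split_between_processes(...) context manager, which divides an arbitrary Python list across processes.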

Related videos:
Pipeline parallel inference with Hugging Face Accelerate (29:12) · Zachary Mueller · 640 views
Walk with fastai: Revisited; Lesson 1 (1:24:37) · Zachary Mueller · 1.5K views
Multi GPU Fine tuning with DDP and FSDP (1:07:40) · Trelis Research · 4.8K views
ML Frameworks: Hugging Face Accelerate w/ Sylvain Gugger (1:05:29) · Weights & Biases · 4.1K views
Turing-NLG, DeepSpeed and the ZeRO optimizer (21:18) · Yannic Kilcher · 16K views
Supercharge your PyTorch training loop with 🤗 Accelerate (12:53) · HuggingFace · 2.7K views