How does Groq LPU work? (w/ Head of Silicon Igor Arsovski!)

20,632 views

Aleksa Gordić - The AI Epiphany

Comments: 51
@TheAIEpiphany 9 months ago
Let me know how you like this one! If you need some GPUs check out Hyperstack: console.hyperstack.cloud/?Influencers&Aleksa+Gordi%C4%87
@AR-iu7tf 9 months ago
Thank you so much for this interview. This was very helpful. It is not clear why they can’t do training - is it the additional memory needed? That can’t be correct, because he said they can connect multiple chips to have a large amount of SRAM. When you asked him that question, his answer was that Nvidia does it well already anyway.
@BillKatz 8 months ago
@@AR-iu7tf I think you mean: why don't they target training instead of inference? It's probably because this LPU architecture is particularly better than GPUs at the smaller tensor sizes you'd see for inference (see the slide at 54:24). So they're focusing on the LPU's competitive advantage and in the coming years, the need for inference compute will skyrocket as these systems permeate society.
@AR-iu7tf 8 months ago
@@BillKatz Sorry, yes!!! Thank you! Fixed my brain-freeze typo!
@AR-iu7tf 8 months ago
@@BillKatz ah - so large batch size training would be a challenge?
@BillKatz 8 months ago
@@AR-iu7tf From the slide, you can see it's more that LPU bandwidth isn't significantly better than a GPU's at larger batch sizes. It's also possible that the data flow (given how the loss backpropagates) is less conducive to the LPU architecture, but it's possible their software stack could still handle that as well.
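To make the batch-size point above concrete: at small batch sizes, autoregressive decoding is roughly memory-bandwidth-bound, because every generated token has to stream all of the weights. A minimal back-of-envelope sketch - the bandwidth and model-size numbers below are illustrative assumptions, not figures from the talk:

```python
# Rough sketch of why small-batch decoding is memory-bandwidth-bound.
# Example numbers only -- not figures from the video or from Groq.
params = 70e9           # Llama-70B-sized model
bytes_per_param = 1     # int8 weights
weight_bytes = params * bytes_per_param

def decode_tokens_per_s(memory_bw_bytes_per_s, batch_size):
    """At small batch, every decode step streams all weights once, so throughput is
    roughly batch_size * bandwidth / weight_bytes (ignoring KV cache and compute)."""
    return batch_size * memory_bw_bytes_per_s / weight_bytes

# A device reading weights from ~3 TB/s off-chip memory vs. a hypothetical system
# whose weights sit in much faster distributed on-chip SRAM:
print(decode_tokens_per_s(3e12, batch_size=1))    # ~43 tokens/s
print(decode_tokens_per_s(100e12, batch_size=1))  # ~1,400 tokens/s

# As batch size grows, peak compute rather than bandwidth becomes the limit,
# which is the batch-size effect the comment above describes.
```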
@rembautimes8808 1 month ago
Great talk, and thanks Igor for sharing this excellent insight. Impressive to be able to sequence things down to ns. Reminds me of a chip called Transmeta 20 years ago 😂
@couldntfindafreename 9 months ago
In short: We store everything in SRAM and use an optimized stack to drive it. But you cannot run a full model on a single card, you need two full racks of it. Good for LLM hosting at scale, but forget about buying such a card.
@autripat 8 months ago
Per Dylan Patel's analysis and Groq's own publication, 576 chips for sure (maybe 576 cards) - a marvel in orchestration.
@tchlux 9 months ago
Fantastic talk! Loved all the information. Any software engineer that truly cares about performance eventually wants to implement custom hardware! 😆 I was a little disappointed he dodged the question about how many chips were used to run Llama 70B. But I get it, because that metric isn't what matters for infrastructure. What matters is cost per token generated. That means roughly minimizing:

(cost of chips × power required × token latency) / (lifetime of chips)

Even if the chips cost 10× more, if the setup is 10× faster at half the power consumption it could still halve the total lifetime cost of operating inference infrastructure. Can't wait to see what the Groq team does in the future.
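A rough sketch of the cost-per-token comparison described above, using the comment's hypothetical ratios (10× chip cost, 10× speed, half the power) rather than any real Groq or Nvidia figures:

```python
# Back-of-envelope version of the lifetime-cost argument in the comment above.
# All numbers are the comment's hypothetical ratios, not real hardware figures.

def relative_cost_per_token(chip_cost, power, token_latency, lifetime):
    """Proxy for (cost of chips x power x token latency) / lifetime."""
    return chip_cost * power * token_latency / lifetime

baseline    = relative_cost_per_token(chip_cost=1.0,  power=1.0, token_latency=1.0, lifetime=1.0)
alternative = relative_cost_per_token(chip_cost=10.0, power=0.5, token_latency=0.1, lifetime=1.0)

print(alternative / baseline)  # 0.5 -> about half the lifetime cost, as the comment argues
```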
@sabaokangan 9 months ago
Thank you so much for sharing this with us on YouTube ❤️‍🔥 from SeoulNatU
@BillKatz 8 months ago
Very impressive unified software/hardware stack! Thanks so much for doing this interview. Please do schedule a follow-up so we can hear how graph neural networks might be handled given the way the LPUs do software-controlled data flow.
@abudhabi9850 9 months ago
Aleksa, thank you, you are an inspiration to me.
@Johnassu 4 months ago
incredible!
@DailySFY 9 months ago
Thanks for sharing this!!
@sucim 9 months ago
Really nice technical overview! I was hesitant to click at first because I expected some marketing blabla
@TheAIEpiphany 9 months ago
Never on this channel :)
@svenvanwier7196 6 months ago
@@TheAIEpiphany I do not have a lot of technical know-how, but just subbed because of this comment. No marketing bla bla - let's see what you can teach me!
@couragefox 9 months ago
Just what i wanted to know!
@stefisha 9 months ago
Bravo Coa, keep it up!
@JMeyer-qj1pv 9 months ago
It's mind-boggling how disruptive this is going to be to GPU-based AI inference. Nvidia must be looking at it and thinking OMG. Nvidia is talking about 1000-watt, ultra-expensive B100 chips in 2025, and maybe those will be needed for training, but most of the money in AI is in inference. If Groq can produce enough chips quickly, they will be in a very good spot.
@punk3900 4 months ago
NVIDIA will buy them next year
@ericchang9568 8 months ago
Thanks for the great talk. Does the LPU compilation optimization still work with mixed workloads, i.e. serving different models/experts in the same data centers?
@DynestiGTI 4 months ago
Please do another interview in the future on Groq!
@backacheache 8 months ago
Hearing its original name was "Tensor Streaming Unit" makes it sound like an evolution of their work at Google on "Tensor Processing Units". If that is the case, I wonder what the story is of them leaving Google?
@akakybashmachkin656 7 months ago
money
@TopGunMan 2 months ago
I suppose in the even-further domain-specific direction, one would have a true ASIC for a specific, mature model, right? Groq still maintains some flexibility that an ASIC built for a specific family of models would give up for vastly increased performance?
@JayDee-b5u 9 months ago
Does this architecture work just as well with 1.58-bit networks (instead of using floating-point values for weights, only -1, 0, or 1)?
@autripat 9 months ago
Why does it take N=5 days to get a compiled version of a model like Llama? Is the speaker talking about optimizing and tuning?
@tchlux 9 months ago
Probably the time required to look at all the operators it's using (softmax, mmul, dot, ...) and ensure they're mapped to the best sequences of byte codes. They also might consider quantization at various levels to make things fit nicely in memory. And biggest of all, they need to figure out how to "bin pack" the parameters into their distributed SRAM (across tens of chips) so that execution minimizes network hops (they're not all-to-all beyond ~8 chips w/ ~2 GB) and optimizes pipelining of data between core operators. Knowing they've only got tens of engineers to do it, their compiler tooling for deciding how to place data on a cluster of cards, given a directed compute graph, must be pretty heavily automated. Impressive.
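A toy illustration of the "bin pack the parameters into distributed SRAM" problem described above - this is not Groq's compiler, just a minimal greedy sketch, and the per-chip SRAM figure is an assumption for illustration:

```python
# Toy greedy placement of layer weights into per-chip SRAM (illustrative only).
SRAM_PER_CHIP_MB = 230  # assumed usable SRAM per chip, purely for illustration

def place_layers(layer_sizes_mb):
    """Assign consecutive layers to chips without exceeding each chip's SRAM.

    Keeping consecutive layers together is a crude proxy for minimizing network
    hops between pipelined operators."""
    chips, current, used = [], [], 0.0
    for size in layer_sizes_mb:
        if used + size > SRAM_PER_CHIP_MB and current:
            chips.append(current)
            current, used = [], 0.0
        current.append(size)
        used += size
    if current:
        chips.append(current)
    return chips

# e.g. 80 layers of ~100 MB each (roughly a large int8 model)
print(len(place_layers([100.0] * 80)), "chips needed")  # -> 40 chips needed
```

A real compiler also has to schedule the data flow cycle by cycle, which is presumably where the multi-day compile times come from.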
@monicasun7797 9 months ago
Is there any way we can download the presentation?
@autripat 9 months ago
Thanks! Can anyone shed some light... at 18:10 and 22:05, the speaker talks about 'all-reduce', which is a training primitive; Groq is an inference chip.
@cocoastarrion4563 9 months ago
tensor parallelism
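To expand on that reply: in tensor-parallel inference, each chip holds a slice of a weight matrix and computes a partial result, and an all-reduce sums those partials - which is why an all-reduce shows up outside of training. A minimal NumPy sketch (illustrative shapes, nothing Groq-specific):

```python
import numpy as np

# Tensor-parallel matmul: split the contraction dimension across two "chips",
# compute partial products, then all-reduce (here: a plain sum) to combine them.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))    # activations
W = rng.standard_normal((8, 6))    # full weight matrix

w_shards = np.split(W, 2, axis=0)  # each "chip" holds half the rows of W
x_shards = np.split(x, 2, axis=1)  # matching split of the activations

partials = [xs @ ws for xs, ws in zip(x_shards, w_shards)]  # per-chip partial sums
y = sum(partials)                  # the "all-reduce" step

assert np.allclose(y, x @ W)       # matches the unsharded matmul
```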
@canadianrepublican1185 8 months ago
Get prices - and while Groq seems awesome, no one asked how much a Groq system that compares with an H100 (for LLMs) costs.
@nhtna4706 9 months ago
So, no need for DDR RAM or a GPU anymore? Or does this act as an accelerator on top of existing RAM and GPU devices? What would the cost of 10 TB of capacity be? Is there any product usability matrix that talks about capacity planning?
@web3devp 7 months ago
Are these chips compatible with an ordinary computer, in case I want to use the chip with an offline model? And are they on sale?
@kozlovskyi 9 months ago
Looks similar to the Hailo product. Does Groq hardware use the same approach?
@dmtap1768 8 months ago
A simple question: Does latency really matter? If ChatGPT can produce output faster than a human can read, it makes no sense to prioritize reducing latency. However, when it comes to throughput, Groq's solution is not competitive with Nvidia's.
@zhurst 8 months ago
Agentic systems currently require lots of looping which is one area where latency really matters atm. We will also see smaller, open-source models run on Groq soon that out-benchmark their larger counterparts when orchestrating at super high token rates. NPCs in games being truly interactive and responding quickly, real-time language translation driving speech synthesis systems, robots navigating the world, etc. require super low latency. Awesome time to be alive!
@davidkey4272 7 months ago
It does. Our application needs to speak back and forth with the latency of the conversation contained in this video. Nothing else even gets close right now to being able to do that. That’s not even taking into account the class.
@ChrisPadron 8 months ago
So do they have any customers? How much debt?
@heelspurs 8 months ago
The H100 SXM is 700 W TDP per 2,000 int8 TOPS while Groq is 215 W per 750 TOPS - ratios of 0.35 and 0.28 W per TOPS - so I don't get how they can be 10x lower joules per token. The A100 is about like the H100 on this metric.
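A quick check of the ratios quoted above (the numbers are the comment's, not verified specs); peak W per TOPS alone doesn't capture achieved utilization or memory traffic, which is where any joules-per-token gap would have to come from:

```python
# Back-of-envelope check of the peak-power-per-compute ratios in the comment above.
h100_w, h100_tops = 700, 2000   # H100 SXM: TDP watts, peak int8 TOPS (per the comment)
groq_w, groq_tops = 215, 750    # GroqChip: TDP watts, peak int8 TOPS (per the comment)

print(h100_w / h100_tops)   # 0.35   W per TOPS (~0.35 pJ per int8 op at peak)
print(groq_w / groq_tops)   # ~0.287 W per TOPS

# The peak ratios are indeed close; a large joules-per-token difference would have
# to come from factors peak TOPS don't capture (achieved utilization, memory
# traffic, batch size), not from the headline numbers alone.
```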
@anurag-vishwakarma 9 months ago
I need slides please
@sureshchandrapatel3925 1 month ago
Where is the parallel computing revolution? I think AI is not possible without The Parallel Computing Revolution. There have already been training algorithms and data, but not The Parallel Computing Revolution. Also, AI is a parallel computing itself. So it’s a Patellution: The Parallel Computing Revolution. Hope, I am not wrong.
@bengineer_the 8 months ago
Has anyone run a raytracer on this yet? could be interesting.
@mzzzz3 9 months ago
What's to stop Nvidia from making their own LPU?
@autripat 8 months ago
token, language, grammar, all the same, everything is converted to embeddings, NVDA already is...what's in a name?
@rominmanojchittettu5073 9 months ago
The numbers are so off on the competing company 🤣🤣🤣🤣🤣
@rick-kv1gl 9 months ago
r u my caucasian
@Phils_Guide 5 months ago
CRAZY EYEZ KILLA!