Stanford CS25: V4 I Demystifying Mixtral of Experts

Stanford Online

April 25, 2024
Speaker: Albert Jiang, Mistral AI / University of Cambridge
Demystifying Mixtral of Experts
In this talk I will introduce Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model. Mixtral has the same architecture as Mistral 7B, with the difference that each layer is composed of 8 feedforward blocks (i.e. experts). For every token, at each layer, a router network selects two experts to process the current state and combines their outputs. Even though each token only sees two experts, the selected experts can be different at each timestep. As a result, each token has access to 47B parameters, but only uses 13B active parameters during inference. I will go into the architectural details and analyse the expert routing decisions made by the model.
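To make the routing concrete, here is a minimal PyTorch sketch of a sparse mixture-of-experts feedforward block with 8 experts and top-2 routing, as described above. The class name, the plain-MLP experts, and the per-token loop are illustrative simplifications, not Mistral's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, dim: int, hidden_dim: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Router: a single linear layer producing one score per expert.
        self.gate = nn.Linear(dim, n_experts, bias=False)
        # Each expert is an ordinary feedforward block (Mixtral uses SwiGLU;
        # a plain MLP is used here for brevity).
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, hidden_dim), nn.SiLU(), nn.Linear(hidden_dim, dim))
             for _ in range(n_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim), one row per token.
        logits = self.gate(x)                                    # (tokens, n_experts)
        weights, chosen = torch.topk(logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)                     # renormalise over the 2 selected experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, k] == e                         # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

Because only 2 of the 8 expert feedforward blocks run for any given token, the parameters touched at inference time (roughly 13B active) are far fewer than the total (roughly 47B), as noted above.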
About the speaker:
Albert Jiang is an AI scientist at Mistral AI, and a final-year PhD student at the computer science department of Cambridge University. He works on language model pretraining and reasoning at Mistral AI, and language models for mathematics at Cambridge.
More about the course can be found here: web.stanford.e...
View the entire CS25 Transformers United playlist: • Stanford CS25 - Transf...

Comments: 6
@marknuggets 4 months ago
Cool format, Stanford is quickly becoming my favorite blogger lol
@何孟飞 4 months ago
Where can I get the slides?
@crwhhx 1 month ago
At 6:22, shouldn't “xq_LQH = wq(x_LD).view(L, N, H)” be “xq_LQH = wq(x_LD).view(L, Q, H)”?
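A quick shape check supports this correction, assuming the slide's notation uses L for sequence length, D for model dimension, Q for the number of query heads, and H for the head dimension (these names are inferred here, not confirmed in the description):

import torch
import torch.nn as nn

L, D, Q, H = 16, 4096, 32, 128        # example sizes with D == Q * H
wq = nn.Linear(D, Q * H, bias=False)  # query projection
x_LD = torch.randn(L, D)
xq_LQH = wq(x_LD).view(L, Q, H)       # the output has Q*H features, so (L, Q, H) is the consistent reshape
print(xq_LQH.shape)                   # torch.Size([16, 32, 128])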
@acoustic_boii 4 months ago
Dear Stanford Online, I recently completed the product management course from Stanford Online but haven't received the certificate. Please help: how can I get it?
@Ethan_here230 4 months ago
Wait, you will get it. - Ethan from Stanford
@gemini_537 2 months ago
Gemini 1.5 Pro: The video is about demystifying Mixture of Experts (MoE) and Sparse Mixture of Experts (SMoE) models. The speaker, Albert Jiang, a PhD student at the University of Cambridge and an AI scientist at Mistral AI, first introduces the dense Transformer architecture and then dives into the details of SMoEs. He explains that SMoEs can be more efficient than standard Transformers by using a gating network to route tokens to a subset of experts, which is useful for training very large models with billions of parameters. Key points from the talk:
* Mixture of Experts (MoE) is a neural network architecture that uses a gating network to route tokens to a subset of experts.
* Sparse Mixture of Experts (SMoE) is a type of MoE that can be more efficient than standard Transformers.
* SMoEs route each token to only a subset of experts, which is more efficient than running a single large dense model on every token.
* SMoEs are well suited for training very large models with billions of parameters.
The speaker also discusses some of the challenges of interpreting SMoEs and the potential for future research in this area. Overall, the talk provides a good introduction to SMoEs and their potential benefits for training large language models.