Watching this at 1.25x speed. High-quality content as usual. Keep it up, Julien 💪
@itayatelis2898 5 months ago
Love your content! Thank you!
@juliensimonfr 5 months ago
Glad you enjoy it!
@road2nohand 7 months ago
Glorious Content :D
@juliensimonfr 7 months ago
Glad you like it!
@Joe-nh9fy 6 months ago
Great explanation! I have one question: is it common practice to regularize the LLM cost function, e.g. with L2, to reduce weight "outliers" during training?
@juliensimonfr 6 months ago
I don't think there is a strong consensus. It looks like regularization during fine-tuning can help with generalization. There are new ideas too, like noisy embeddings: wandb.ai/byyoung3/ml-news/reports/A-New-Method-For-LLM-Regularization--Vmlldzo1ODIyMzIw
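For readers who want to see what this looks like in practice, here is a minimal sketch (my own illustration, not from the video) of adding an explicit L2 penalty to a fine-tuning loss in PyTorch. The toy linear model and the l2_lambda value are stand-ins, not recommendations.

```python
# Minimal sketch: explicit L2 regularization during a training/fine-tuning step.
# A tiny nn.Linear stands in for the model; l2_lambda is an illustrative value.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 4)                       # stand-in for the model being fine-tuned
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.0)
criterion = nn.CrossEntropyLoss()
l2_lambda = 1e-4                               # regularization strength (hypothetical)

x = torch.randn(32, 16)                        # dummy batch
y = torch.randint(0, 4, (32,))

for step in range(10):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    # L2 penalty: sum of squared weights, pushing large "outlier" weights toward zero
    l2_penalty = sum(p.pow(2).sum() for p in model.parameters())
    (loss + l2_lambda * l2_penalty).backward()
    optimizer.step()
```

Note that setting `weight_decay` directly in AdamW achieves a similar effect (decoupled weight decay) without touching the loss.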
@bibiworm 4 months ago
I have been wanting to understand quantization for a very long time. Thank you! Would you mind sharing the slides, please?
@juliensimonfr 2 months ago
Hi, you can find the slides on Slideshare at fr.slideshare.net/slideshow/julien-simon-deep-dive-quantizing-llms/270921785
@jacehua7334 7 months ago
🔥 🔥 🔥
@juliensimonfr 7 months ago
:)
@AI-Projects24 2 months ago
Is there any chance to get the slides? It's very well organized and presented. Thank you so much for your work ✨🔥🔥
@juliensimonfr 2 months ago
Hi, you can find the slides on Slideshare at fr.slideshare.net/slideshow/julien-simon-deep-dive-quantizing-llms/270921785
@monishostwal8255 6 months ago
What is meant by a calibration dataset? Is it equivalent to an evaluation set?
@juliensimonfr 6 months ago
Pretty much, yes. It's used to figure out the "best" values for the quantization hyperparameters.
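To make the role of a calibration set concrete, here is a minimal sketch (my own illustration, not the video's code) of simple asymmetric int8 quantization: a few calibration batches are used to estimate the value range, from which the scale and zero-point are derived. The data and shapes are placeholders.

```python
# Minimal sketch: using a calibration set to pick int8 quantization parameters.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a small calibration dataset of activations or weights
calibration_batches = [rng.normal(0, 1, size=(64, 128)) for _ in range(8)]

# Observe the value range over the calibration set
lo = min(batch.min() for batch in calibration_batches)
hi = max(batch.max() for batch in calibration_batches)

# Map [lo, hi] onto the signed int8 range [-128, 127]
qmin, qmax = -128, 127
scale = (hi - lo) / (qmax - qmin)
zero_point = int(round(qmin - lo / scale))

def quantize(x):
    return np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)

def dequantize(q):
    return (q.astype(np.float32) - zero_point) * scale

x = calibration_batches[0]
err = np.abs(dequantize(quantize(x)) - x).mean()
print(f"scale={scale:.4f} zero_point={zero_point} mean abs error={err:.4f}")
```

The better the calibration set reflects real inputs, the better the chosen scale and zero-point preserve accuracy after quantization.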