LoRA - Explained!

3,635 views

CodeEmporium

1 day ago

Comments: 19
@Mohamed_Shokry 1 month ago
Your explanations are easy to understand and in-depth at the same time. Thank you for making my life easier.
@IntegrandoIA 4 days ago
I don't understand why you don't have far more views and engagement. Your videos are some of the best explanations out there. I've sent my students to your channel multiple times. Not a great timeline where virality reigns over veracity. Amazing work.
@CodeEmporium 4 days ago
Thanks! This means a lot. I am just glad the channel is able to provide value. So thanks for sharing this around
@KhushPatel-x2n 1 month ago
When fine-tuning an LLM, we have two options: 1) change the parameters of the actual base model, but this requires a lot of resources and time; or 2) add new layers, changing the model's architecture, where fine-tuning only updates the weights of these additional layers while the base model stays frozen, and inference uses both the base model and the additional layers. LoRA helps us shrink these additional layers by using low-rank matrices. This is my understanding. Please react to it so I can verify my knowledge! 😊
@CodeEmporium 1 month ago
This is a good overview 👍
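To make the low-rank idea in the thread above concrete, here is a minimal NumPy sketch (the hidden size d and rank r are illustrative assumptions, not values from the video): the base weight W stays frozen, and only the small factors A and B are trained.

```python
import numpy as np

# Minimal sketch of the LoRA idea: frozen base weight W, trainable
# low-rank factors A (r x d) and B (d x r) with r << d.
d, r = 512, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))          # frozen base weight: d*d params
A = 0.01 * rng.standard_normal((r, d))   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection; zero init
                                         # so the adapted model starts out
                                         # identical to the base model

x = rng.standard_normal(d)
y = W @ x + B @ (A @ x)                  # base output plus low-rank update

# Only A and B train: 2*d*r = 8,192 params vs. d*d = 262,144 for full tuning.
print(A.size + B.size, "trainable vs", W.size, "frozen")
```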
@shisoy4809 1 month ago
I like simple yet extremely effective methods.
@harshsharma5768 1 month ago
Awesome explanation! I have a few questions though: 1) At 24:00, you said we can do some matrix multiplication and addition to update the value of Wq so that the fine-tuned information gets infused into Wq, which in turn gives us faster inference; but won't that hurt performance compared to the case where we don't update Wq and keep A and B separate? Are we just trading performance for inference speed? 2) What if we do the same 'update Wq' step with additive adapters? Would that also speed up their inference time?
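If it helps with question 1: the merge is exact rather than lossy, since (W + BA)x = Wx + B(Ax) for every input x, so the outputs are unchanged and only the extra matmuls disappear. A small NumPy check of that identity (dimensions here are illustrative assumptions):

```python
import numpy as np

# Check the identity behind the merge step: (W + B @ A) @ x equals
# W @ x + B @ (A @ x), so folding B @ A into Wq changes nothing numerically.
rng = np.random.default_rng(1)
d, r = 64, 4
W = rng.standard_normal((d, d))
A = rng.standard_normal((r, d))
B = rng.standard_normal((d, r))
x = rng.standard_normal(d)

unmerged = W @ x + B @ (A @ x)   # keep A and B as separate adapter matmuls
merged = (W + B @ A) @ x         # one-time merge, single matmul afterwards

print(np.allclose(unmerged, merged))  # True: outputs are identical
```

On question 2: classic additive adapters include a nonlinearity and sit in series with the frozen layers, so they cannot be folded into an existing weight matrix this way; that merge-ability is usually cited as LoRA's advantage over adapters.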
@isaiahcastillo898 1 month ago
LoRAs are the biggest thing to come out of AI since the transformer
@canygard 1 month ago
Custom GPTs or Gemini Gems are pretty spot on after you get good at making them. I would play around with these before building an AI agent with LangChain and vector embeddings.
@pauljones9150 1 month ago
Cursor with Claude 3.5 or o1-mini is great. Use their shortcuts to save time. It still struggles with new languages and frameworks, though.
@pauljones9150 1 month ago
The quizzes aren't well connected to the content. If you could add a timestamp after each quiz, like "if you got this wrong, check out this timestamp", that would be helpful.
@minasefikadu 1 month ago
I enjoyed this video. Can you do QLoRA next?
@isaiahcastillo898 1 month ago
Appreciate it!
@pauljones9150 1 month ago
When did you explain the benefits of LoRAs over adapters? I seem to have missed it.
@Coding-for-startups 1 month ago
Amazing, thank you. Can you do one on latent diffusion?
@Ishaheennabi 1 month ago
Back again ❤❤❤