This is one of the easiest to follow explanations of LoRA that I’ve seen. Thanks a lot.
@DeepFindr 9 months ago
Glad you found it useful!
@InturnetHaetMachine a year ago
Another great video. I appreciate that you don't skip giving context and that you lay a good foundation. It makes understanding a lot easier. Thanks!
@omgwenxx 8 months ago
Amazing video, I feel like I finally understood every aspect of LoRA, thank you!
@DeepFindr 8 months ago
Glad it was helpful :)
@pokhrel3794 3 months ago
The best explanation I've found on the internet.
@vimukthisadithya6239 3 months ago
This is the best explanation of LoRA I've found so far!!!
@teleprint-me a year ago
I've been scouring for a video like this. Yours is the best explanation so far!
@CarolineWu0719 a month ago
Thank you for your great explanation!
@aurkom a year ago
Awesome! Waiting for a video on implementing LoRA from scratch in PyTorch.
@lakshman587 10 days ago
Thanks for the video!!
@chrisschrumm6467 a year ago
Nice job with summarizing transfer learning and LoRA!
@shubhamtalks9718 3 months ago
Bro, you killed it. Best explanation. Trust me, I have watched all the tutorials, but the other explanations were shitty. Please create one video on quantization.
@amulya1284 3 months ago
You make the best explanation videos everrrr! Is there one on how to train custom models using LoRA?
@mohamedezzat5048 7 months ago
Thanks a lot! Amazing explanation, very clear and straightforward.
@abhirampemmaraju6339 3 months ago
Very good explanation
@beyond_infinity16 6 months ago
Explained quite well!
@nomad_3d a year ago
Good summary! Next time it would be great if you added headings to the tables you show in the video. Sometimes it is hard to follow. For example, what is computational efficiency? Is it inference time, or the increase in inference time relative to the increase in performance (e.g., accuracy, recall, etc.)? Thanks.
@marjanshahi979 9 months ago
Amazing explanation! Thanks a lot!
@이길현-p7f 3 months ago
perfect video
@k_1_1_2_3_5 6 months ago
What an excellent video!! Congrats!!
@WangZhen-r2d 6 months ago
Great video explaining LoRA! Thanks!
@unclecode a year ago
Yes, it was indeed helpful! Do you have a video on quantization?
@sougatabhattacharya6703 7 months ago
Good explanation
@Canbay12 7 months ago
Thank you very much for this amazing video! However, although it was probably only for demo purposes, the modified forward pass you've shown might be misleading, since it assumes the layer is entirely linear. So, does the addition of the LoRA fine-tuned weights to the base model weights happen directly within the model weights file (like .safetensors), or can it be done at a higher level in PyTorch or TensorFlow?
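A minimal PyTorch sketch of what merging might look like (the alpha / r scaling and the plain linear layer are illustrative assumptions; libraries such as Hugging Face PEFT expose this as merge_and_unload()). The merge happens on the in-memory weights, and the merged state dict can then be saved to any format, including .safetensors:

    import torch

    def merge_lora(W, A, B, alpha, r):
        # Fold the low-rank update into the base weight: W' = W + (alpha / r) * B @ A
        return W + (alpha / r) * (B @ A)

    # Hypothetical usage on a single linear layer:
    # layer.weight.data = merge_lora(layer.weight.data, A, B, alpha=16, r=8)
    # torch.save(model.state_dict(), "merged.pt")  # or write it out as .safetensors

After merging, inference runs at exactly the base model's cost, since the adapter no longer exists as a separate branch.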
@ScoobyDoo-xu6oi 4 months ago
Do you know what \Delta W is? How is it defined?
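For context: in the LoRA paper, \Delta W is never stored as its own matrix; it is defined as the product of two small trainable matrices, B (zero-initialized) and A (random Gaussian init), so the update starts at zero and has rank at most r. A minimal sketch with illustrative dimensions:

    import torch

    d, k, r = 768, 768, 8           # layer dimensions and rank (example values)
    A = 0.01 * torch.randn(r, k)    # random init
    B = torch.zeros(d, r)           # zero init, so delta_W starts as all zeros
    delta_W = B @ A                 # shape (d, k), rank at most r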
@msfasha 6 months ago
Brilliant
@SambitTripathy 5 months ago
After watching many LoRA videos, this one finally satisfied me. I have a question: in the fine-tuning code they talk about merging LoRA adapters. What is that? Is it h += x @ (W_A @ W_B) * alpha? Can you mix and match adapters to improve the evaluation score?
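On the merging question: applying the adapter at runtime and folding it into the base weight give the same output. A sketch assuming the common alpha / r scaling (shapes are illustrative):

    import torch

    W = torch.randn(768, 768)                # frozen base weight
    B = 0.01 * torch.randn(768, 8)           # LoRA matrices
    A = torch.randn(8, 768)
    x, alpha, r = torch.randn(1, 768), 16, 8

    h_unmerged = x @ W.T + (x @ (B @ A).T) * (alpha / r)  # adapter as a side branch
    W_merged = W + (alpha / r) * (B @ A)                  # adapter folded into W
    h_merged = x @ W_merged.T                             # same result, one matmul

    assert torch.allclose(h_unmerged, h_merged, atol=1e-5)

Mixing several adapters is possible in principle (their updates simply add), but whether that improves evaluation scores depends on how compatible the tasks are.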
@moonly3781 a year ago
I'm interested in fine-tuning a Large Language Model to specialize in specific knowledge, for example about fish species, such as which fish can be found in certain seas or which are prohibited from fishing. Could you guide me on how to prepare a dataset for this purpose? Should I structure it as simple input-output pairs (e.g., 'What fish are in the Mediterranean Sea?' -> 'XX fish can be found in the Mediterranean Sea'), or is it better to create a more complex dataset with multiple columns containing various details about each fish species? Any advice on dataset preparation for fine-tuning an LLM in this context would be greatly appreciated. Thanks in advance!
@binfos7434 11 months ago
Really helpful!
@ahmadalis1517 a year ago
XAI techniques on LLMs are a really interesting topic! When would you consider covering them?
@susdoge3767 9 months ago
gold
@flecart 6 months ago
good job!
@ibongamtrang7247 11 months ago
Thanks
@prashantlawhatre7007 a year ago
Please make a video on QLoRA.
@aron2922 a year ago
Another great video, keep it up!
@xugefu 9 months ago
Thanks!
@DeepFindr 9 months ago
Thank you!
@ArunkumarMTamil 6 months ago
How does LoRA fine-tuning track changes by creating two decomposition matrices? How is ΔW determined?
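A short sketch of the mechanism in question: the base weight is frozen, only the two small matrices receive gradients, and their product is the learned update (dimensions are illustrative):

    import torch

    W = torch.randn(768, 768, requires_grad=False)      # pretrained weight, frozen
    A = torch.nn.Parameter(0.01 * torch.randn(8, 768))  # trainable
    B = torch.nn.Parameter(torch.zeros(768, 8))         # trainable, starts at zero

    x = torch.randn(1, 768)
    h = x @ W.T + x @ (B @ A).T   # base output plus low-rank update
    h.sum().backward()            # gradients flow only into A and B, never into W

So ΔW is not tracked explicitly; it emerges as B @ A after training.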
@ScoobyDoo-xu6oi 4 months ago
Same question.
@henrywang4010 10 months ago
Great video! Liked and subscribed
@poketopa1234 4 months ago
I think the LoRA update is scaled by the square root of the rank, not the rank.
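For reference, the original LoRA paper scales the update by alpha / r; the rank-stabilized LoRA (rsLoRA) variant argues for alpha / sqrt(r) instead, which may be the convention this comment has in mind:

    import math

    alpha, r = 16, 8
    scaling_lora = alpha / r                 # original LoRA (Hu et al., 2021)
    scaling_rslora = alpha / math.sqrt(r)    # rsLoRA variant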