This isn't really accurate. LoRA doesn't make your model smaller; it makes the fine-tuning process require fewer resources. As far as I understand, the final model size is about the same, if not slightly bigger.
@amortalbeing 1 year ago
He meant during training, I guess, but you are right: LoRA is for fine-tuning.
@viixby9481 4 months ago
A better analogy with the Lego thing, I assume, would be that it gives you a little handbook on how to build that specific thing with the Legos, rather than handpicking certain Legos.
@sharathkumar8422 2 months ago
The final model is slightly bigger, as it has the LoRA layers added on top of the original layers, unless you removed a few of the original ones before adding the LoRA ones. The training process, however, requires far fewer computational resources, because you're only training the newly added LoRA layers and freezing all the other model layers.
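As a minimal sketch of that idea (assuming PyTorch; this is not the video's code, and `LoRALinear` is a made-up name for illustration), freezing the base weights and training only the small low-rank factors looks roughly like:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the original weights
        # only these two small matrices receive gradients
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        # original output plus the low-rank correction (B @ A) @ x
        return self.base(x) + x @ self.A.T @ self.B.T

layer = LoRALinear(nn.Linear(512, 512), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(trainable, total)  # 8192 trainable vs 270848 total parameters
```

The optimizer only ever sees A and B, which is where the compute and memory savings during fine-tuning come from.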
@Trinad356 1 year ago
Your explanation is really amazing; one can easily understand it without having any background knowledge. Thank you very much.
@TheCradmin 1 year ago
This video needs more love. Thank you, man, so well communicated.
@El_MA11 3 months ago
That was excellent. Nice teaching skills, Wes.
@MasterBrain182 1 year ago
Astonishing content, man 🚀
@lighthousesaunders7242 1 year ago
Fine-tuning? Stacking and transferring? Thanks for a great video.
@happyday.mjohnson 1 year ago
I subscribed to your channel after watching your explanation. Thank you for your clarity.
@ntesla66 1 year ago
1. a large beer cask. 2. a measure of capacity, usually equal to 252 wine gallons. Otherwise well done!
@DarkDiripti 9 months ago
I only watched the first Lego example, and that is just plain wrong. LoRA does not make the model smaller; that analogy does not hold at all. I don't want to know what follows from such a bad analogy.
@sheevys 1 year ago
Is there a speed improvement during training only, or also at inference?
@Lampshadx 1 year ago
Training only. You may encounter latency issues during inference due to loading the base model and the LoRA weights separately, but most libraries will allow you to merge them, so it ends up being equivalent to the original model.
@Lampshadx 1 year ago
To actually run inference and generate predictions after LoRA fine-tuning, you need to combine the original base model's large weight matrices with the small factor matrices that LoRA learns. So at inference time you still have essentially the same enormous number of parameters as the original foundation model. The key efficiency gains are seen during the adaptation/fine-tuning process: by only updating a tiny fraction of parameters, LoRA allows much quicker and cheaper adaptation compared to full fine-tuning. But once the adapted model is ready for deployment and inference, merging those factor matrices back into the original base weights results in effectively the full set of parameters at inference time.
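As a rough illustration of that merge step (numpy, toy sizes, not code from the video), folding the learned factors back into the base weights gives the same outputs with no extra adapter matmuls:

```python
import numpy as np

rng = np.random.default_rng(0)
d, rank = 1024, 8

W = rng.standard_normal((d, d))            # frozen base weight matrix
A = rng.standard_normal((rank, d)) * 0.01  # learned low-rank factor
B = rng.standard_normal((d, rank)) * 0.01  # learned low-rank factor

W_merged = W + B @ A   # fold the update into the base weights once

x = rng.standard_normal(d)
# adapter path: base output plus two small extra matmuls
adapter_out = W @ x + B @ (A @ x)
# merged path: a single matmul, same shape as the original model
merged_out = W_merged @ x
print(np.allclose(adapter_out, merged_out))  # True
```

That is why merged deployment has the same inference cost as the original model: `W_merged` has exactly the shape of `W`.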
@moeinhasani8718 1 year ago
Really good video for a high-level understanding of the concept. I wish there was a little bit of math included as well, just a very high-level mention of what mathematical steps are taken.
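For what it's worth, the high-level math is compact: freeze the big weight matrix W and learn two thin matrices so the update is ΔW = B·A. A toy sketch of the resulting parameter savings (the sizes here are made up for illustration):

```python
d, r = 4096, 8   # hypothetical hidden size and LoRA rank

full_finetune_params = d * d     # updating every entry of W
lora_params = d * r + r * d      # updating only B (d x r) and A (r x d)

print(full_finetune_params)                 # 16777216
print(lora_params)                          # 65536
print(full_finetune_params // lora_params)  # 256x fewer trainable params
```

The ratio scales as d / (2r), so the smaller the rank r relative to the hidden size, the cheaper the adaptation.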
@ArunkumarMTamil 8 months ago
How does LoRA fine-tuning track changes by creating the two decomposition matrices? How is ΔW determined?
@seeess925 6 days ago
This makes no sense. It keeps saying "models" over and over with no info on what that means. I get the overall point of being limited, but there's no info on what it actually is or what it does. I essentially still know pretty much nothing about what LoRA means.
@scottstout 1 year ago
Is it possible/reasonable to use LoRA w/ GPT-4?
@fusseldieb 1 year ago
AFAIK, GPT-4 is closed-source.
@Daligliding 2 months ago
What kind of tool is used for making the animation?
@chyldstudios 1 year ago
Nice explainer.
@AurobindoTripathy 1 year ago
All that image-filling by scribbling inside the lines? How does that support your content (which is fine)?
@sergeibogdanov572 1 year ago
Hello, what is the name of the software you use to draw?
@loremipsum916 a month ago
+1
@pongtrometer 1 year ago
I'm trying to follow your recommendation of learning Python. I'm not a programmer whatsoever, so just from watching this overview (which is great, by the way): is there a LoRA-esque way of learning Python, so that I can be creative as I learn it? Just like using LoRAs in SDXL in combinations to create new image recipes. I hope to learn Python so that I can get involved with making LoRAs for sound design; not groundbreaking, but definitely enabling sound designers to create with new sonic colours. Any advice would be much appreciated. Thanks in advance, Wes / comments community.
@robertputneydrake 1 year ago
*powerful not powerfull :)
@Marcus_Berger1701 10 months ago
Unliimmmmmited powaaaaaafulllllll 😁
@iainmackenzie1995 2 months ago
Pieces of Lego are not called Legos
@halilxxx a month ago
It's not true.
@aaronsayeb6566 4 months ago
Just wasted 4 minutes on this.
@cgqqqq 3 months ago
Useless video; you could make ~10 videos by just replacing LoRA with any other AI jargon.