Comments
@vidhigupta599
@vidhigupta599 3 days ago
Does alpha have any role during the fine-tuning process, or is it only used during merging to scale the LoRA weights?
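A minimal sketch (assuming the common LoRA formulation; module and variable names are illustrative, not code from the video) of where alpha enters: the low-rank update is scaled by alpha/r in the forward pass during fine-tuning as well as when merging, so alpha is not merge-only.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)       # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r                      # alpha enters here

    def forward(self, x):
        # The scaling applies during fine-tuning, not only at merge time.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

    def merge(self):
        # The same alpha / r factor scales the update folded into the base weight.
        self.base.weight.data += self.scaling * (self.B @ self.A)
```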
@ArjoRoy-pe6tf
@ArjoRoy-pe6tf 24 days ago
This is the first, and probably the last, video I would ever need to watch to get the intuition behind LoRA, thanks to the inventor. 🫡
@MoonlitRitual
@MoonlitRitual 26 days ago
Thank you, sir, for creating the best thing since sliced bread.
@jmr6468
@jmr6468 28 days ago
You brought so much value by publishing this video, thanks!
@RalphDratman
@RalphDratman 1 month ago
This is tremendously helpful.
@harriehausenman8623
@harriehausenman8623 1 month ago
Anyone else thinking of Futurama's 'Brain Slugs' when hearing LoRa, or is it just me 😄
@harriehausenman8623
@harriehausenman8623 1 month ago
Thanks for doing this video! An interesting 'check-point' in history, so to speak 🤗
@LBSbrans
@LBSbrans 1 month ago
Please share the new video with details! ❤
@ernestofootfighter
@ernestofootfighter 1 month ago
Great teaser. I would love it if you delved deeper into the theoretical side.
@dusky4151
@dusky4151 2 months ago
I'm going to use a comparison, but tell me if I've got the general idea right: let's pretend I'm playing an MMORPG and I choose "Archer" as my class. A LoRA is like fine-tuning the armor of my archer, or maybe his bow, so that he is more specialized for a particular battle. A full checkpoint fine-tuning, however, is like changing class from "Archer" to "Warrior".
@schurrle27
@schurrle27 2 months ago
Amazing explanation! Though that's to be expected, coming from the inventor, of course.
@HadbbdbdDhhdbd
@HadbbdbdDhhdbd 2 months ago
Really helpful and concise explanation. Thank you.
@AaronGoldman
@AaronGoldman 2 months ago
Do we still need the base model? Would it make sense to use a panel of experts, with the final model just being the sum of many LoRAs, kept as decomposed matrices for cheaper matrix multiplies?
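A hedged sketch of the "sum of many LoRAs" idea: the frozen base weights are still needed, since each adapter is only a delta on top of them, but the adapters can stay in decomposed form and be applied as skinny matrix multiplies at inference. The names and the simple unweighted combination below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiLoRALinear(nn.Module):
    """One frozen, shared base linear plus several LoRA adapters kept decomposed."""
    def __init__(self, base: nn.Linear, adapters):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                  # base stays frozen and shared
        # adapters: list of (A, B, scaling) with A: (r, in), B: (out, r)
        self.adapters = adapters

    def forward(self, x):
        y = self.base(x)
        for A, B, scaling in self.adapters:
            # Two skinny matmuls per adapter instead of materializing a dense delta.
            y = y + (x @ A.T @ B.T) * scaling
        return y
```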
@saharshayegan
@saharshayegan 2 months ago
This was very helpful! Thank you, Edward!
@KiWelten
@KiWelten 2 months ago
Thank you for all your work!
@umeranwaar
@umeranwaar 2 months ago
I am literally blown away by the quality of your explanation! I am an AI researcher myself, so I can really appreciate the beauty of explaining technical concepts in "simple" language while not making them "simpler". 🙂
@AmanBansil
@AmanBansil 3 months ago
It’s not often that I find the inventor of a technique explaining the technique. This is incredibly helpful. Thank you
@nowcastthedie
@nowcastthedie 1 month ago
It's even rarer that someone can actually do both effectively. Very talented guy and a great explanation.
@phdperson
@phdperson 4 months ago
This is amazing and very valuable. Thank you!!!
@Jhonnyzilla
@Jhonnyzilla 4 months ago
That is such a good explanation, thanks!
@shklbor
@shklbor 4 months ago
Awesome explanation, and kudos for a great contribution to DL. Please make a follow-up video on QLoRA.
@tuhinmailme
@tuhinmailme 5 months ago
Similar ideas have existed for a long time in vision research, like only fine-tuning the classifiers of large models on new tasks.
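For contrast, a minimal sketch of that older vision recipe (an assumed example, not from the video): freeze the pretrained backbone and train only a newly initialized classifier head, whereas LoRA instead injects low-rank updates inside the frozen backbone.

```python
import torch
import torch.nn as nn
from torchvision import models

# Classic head-only fine-tuning: the pretrained backbone is frozen,
# and only a fresh classification layer is trained on the new task.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad_(False)

backbone.fc = nn.Linear(backbone.fc.in_features, 10)   # new trainable 10-class head

optimizer = torch.optim.AdamW(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-3
)
```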
@jimshtepa5423
@jimshtepa5423 5 months ago
Thank you for a great presentation. I am new to LLMs and would like to try running the code on GitHub. Can my local machine (MacBook M1) handle it, or is it something for large enterprises with massive compute inventories?
@sorooshsohangir
@sorooshsohangir 5 months ago
Great Job!!!
@EsZfW5f
@EsZfW5f 6 months ago
Thanks!
@Krishna1729-z8v
@Krishna1729-z8v 6 months ago
I have worked with Markov chain Monte Carlo algorithms; it took me an hour to map the posterior distribution, and that's not even close… Looking forward to using GFlowNets.
@shibohao8930
@shibohao8930 7 months ago
Great video! Looking forward to your video explaining the relation between GFN and Max-Entropy RL
@tonywang7933
@tonywang7933 7 months ago
3:26 That is the best explanation!!
@tectract
@tectract 7 months ago
Very cool. I know some of these words.
@redthunder6183
@redthunder6183 7 months ago
Thank you so much for explaining this clearly. Everything I watch on YouTube is made by people who have no idea how the tech works, or don't even know how to code beyond copy/paste/change inputs, but pretend like they do. Furthermore, there are just so many useless libraries around LLMs that people claim are the next big thing, but in reality they create code bloat, introduce more unknowns, make the code harder to work with since you now have to learn the library, and don't work as well as if you just wrote everything yourself.
@ph10m
@ph10m 8 months ago
This was a great intuitive explanation of it. I wish more people took the adaptability of LoRA seriously, though: everyone (and their dog) uploads full models after doing small fine-tunes *with* LoRA, instead of just the adapters. Sharing only the adapters would not only help experimentation but also save time, as we have to download the same base models over and over...
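A hedged sketch of adapter-only sharing in plain PyTorch: save just the LoRA tensors and load them on top of a base model the user already has. The "lora_" parameter-name prefix is an assumed convention (libraries such as peft handle this automatically).

```python
import torch

def save_adapter(model, path="adapter.pt"):
    # Keep only LoRA tensors -- typically a few megabytes instead of many gigabytes.
    adapter = {k: v for k, v in model.state_dict().items() if "lora_" in k}
    torch.save(adapter, path)

def load_adapter(model, path="adapter.pt"):
    # Overlay the adapter onto an already-downloaded base model.
    model.load_state_dict(torch.load(path), strict=False)
    return model
```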
@jett_royce
@jett_royce 8 months ago
LoRA is such an unlock for resource-constrained creators looking to leverage models for specific domains. Thank you for this amazing work!
@houbenbub
@houbenbub 8 months ago
Awesome video, thanks for making it :)
@BruceChar007
@BruceChar007 8 months ago
Can you keep fine-tuning on top of an already LoRA-fine-tuned model, and how well does that work?
@lophyre1380
@lophyre1380 9 months ago
Very informative video, but please get a better mic
@user-wp8yx
@user-wp8yx 9 months ago
Trying to teach a Mistral-7B model Sanskrit. It already has Sanskrit characters as tokens and is the best-performing 7B Llama-based model I can find. You seem like a knowledgeable person in this area. Do you have any advice for LoRA? Rank, alpha? How about targeting q, k, v? Other strategies? I have about 3 GB of datasets that range from translations and corpora to data tables. I wonder if I should use different strategies for different data types?
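Not a recommendation from the video, just a hedged starting-point sketch using the Hugging Face peft library; the rank, alpha, and target module names below are common illustrative values, and the right ones depend on the model and data.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

config = LoraConfig(
    r=16,                                            # rank of the low-rank update
    lora_alpha=32,                                   # effective scale is alpha / r
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj"],   # attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()                   # only adapter weights train
```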
@bobbyparikh5690
@bobbyparikh5690 9 months ago
Fantastic video, Edward! In case someone wants a quick refresher on low-rank decomposition of matrices, here's a great video: kzbin.info/www/bejne/aKDKlaqmfalmjJo&ab_channel=ritvikmath
@arnoldpalmer-fv7pf
@arnoldpalmer-fv7pf 9 months ago
So much groundbreaking research broken down into an easy-to-follow 7-minute video, I love it 🙏
@nathangonzales-hess6569
@nathangonzales-hess6569 9 months ago
That was great. Thanks for sharing. I really appreciate the simple style: no distracting animated plots or fancy editing. Looking forward to more!
@justinpresent
@justinpresent 9 months ago
Thanks, Edward, for the gentle intro!
@ellielikesmath
@ellielikesmath 10 months ago
I was trying to come up with something like this, in that I wanted to train a generator that would be the inverse of a classifier, where the classifier scored how good a solution drawn from some range was. This looks miles and miles more sophisticated than what I was doing with TF and PyTorch, but I definitely understand, at least at that level of abstraction, why such a development is necessary. I look forward to trying this, cheers.
@Bbb78651
@Bbb78651 10 months ago
Thank you so much for the video, Edward. It's inspiring to see you make videos and take off. I'm currently a Master of Science student in data science, and I'm always excited about NN architectures and new ML algorithms. Whenever it's convenient, could you please share 1-2 tips for writing good research papers in ML? I recently started in a lab that does neuroscience-ML, and I really want to make an impact there.
@faizanjaved1443
@faizanjaved1443 10 months ago
Hey there! Can we talk about Q*, the AGI developed by Sam Altman? I'm excited to discuss this with you; it's one of the most interesting topics for me after Sora.
@candrewlee14
@candrewlee14 10 months ago
This was fantastic! Thank you, it’s great to hear from a real expert in this AI mega-hype cycle.
@DB-Barrelmaker
@DB-Barrelmaker 10 months ago
The audio is terrible; it's especially noticeable when you're dealing with a complex subject containing a lot of niche phrases.
@DigitalAlligator
@DigitalAlligator 10 months ago
Shit, you invented LoRA 😮? How did you come up with an idea that works so well?!
@Seekerofknowledges
@Seekerofknowledges 10 months ago
Thank you wholeheartedly.