This is really good. Easy to understand and implement.
@eduardoguevara7880 · 2 days ago
8 minutes to start tapping into open-source potential - great content, thanks 🫡
@taygundogan · a month ago
Yeeeey Cedric, more videos from Cedric please please!!
@khairullahhabib3982 · a month ago
Very good delivery! I can watch this guy explain stuff all day. Keep it up! I can't believe all this knowledge is just free out here
@josephmyalla3611 · 18 days ago
Finally, there is an open-source tool for LLM fine-tuning. This is amazing.
@ryansun8256 · 9 days ago
A Google search returns multiple open-source fine-tuning tools like Axolotl and LLaMA Factory. What makes this one different?
@stanTrX · a month ago
Thanks. Does it work the same with Ollama?
@volkovolko · 23 days ago
Great tool! I was waiting for a full, well-made video on this tool, and here it is! A Colab notebook would be great if possible 😉
@learnbydoingwithsteven · a month ago
Very clear. Gonna try it.
@gokcerbelgusen1062 · a month ago
I will try this, thank you
@SameerJatoi-w3e · 7 days ago
What if I want to fine-tune it on a language other than English?
@PB-kx4vv · a month ago
InstructLab presentations lead me to fantasize about training a model to shorten the learning curve for large open source projects. For example, the code-aster finite element package, with huge amounts of documentation and many documented test cases, can model many structural, dynamic, and even thermal-mechanical systems. However, the combinations of features that work compatibly with each other feel to a beginner like a fractal landscape. It is OK to go through an example, but it is easy to lose footing at near adjacencies. It would be nice to talk to a model about strategies for constructing a new model, one that can reference particular documents and examples and identify prospective strategies as self-conflicting. But when I imagine mapping this problem to InstructLab, it feels like a more daunting task than just working with the program, gaining experience, and reading a lot.
@maneeshs3876 · a month ago
Nice video !
@ml00000 · a month ago
Excellent presenter!
@justwanderin847 · a month ago
I was just wondering how they really train AI. This helps.
@광광이-i9t · 29 days ago
Thanks!!
@kingshukbasak7363 · 4 days ago
Where is the InstructLab channel?
@ZakinAbdul · a month ago
That was awesome, and I was wondering: can we fine-tune that model RAG-chatbot-style, i.e., chat with it and feed it new info through our chats?
@andrewcameron4172 · a month ago
What version of ilab were you running in this demo?
@cloudnativecedric · a month ago
Ah, so this was InstructLab v.17 when we recorded :)
@AmeerHamza-cy6km · a month ago
How can I train it on the PHP programming language and on some PHP projects?
@aganithshanbhag · a month ago
A question-answer set (there is vast training material on PHP programming).
@munawwarkhan1926 · a month ago
This is a great video and a good intro to an amazing tool. Just one suggestion: it does need some knowledge and background in computer science and data structures. I don't think it is for people with zero knowledge or background, as the video suggests in the beginning. Amazing content, IBM, learning a lot here.
@cloudnativecedric · a month ago
Thank you very much for the feedback! That is true, there are some basics that are helpful in doing this, as well as terminal usage skills. But we're also working on a user interface for the upstream InstructLab project: essentially a simple form for submitting Q&A pairs, source documents, and attribution! The rest of the process, like data generation and training, is then automated :)
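For anyone curious what those Q&A pairs, source documents, and attribution look like in practice, below is a minimal Python sketch (not the official schema) that writes an InstructLab-style qna.yaml. It assumes the PyYAML package, and the field names, repo URL, and commit value are illustrative placeholders; the exact layout depends on the taxonomy schema version.

    # Minimal sketch: seed Q&A pairs plus a source-document reference and
    # attribution, dumped as YAML. Field names and repo/commit values are
    # placeholders, not the authoritative InstructLab schema.
    import yaml  # PyYAML

    qna = {
        "created_by": "your-github-handle",  # attribution (placeholder)
        "seed_examples": [
            {
                "question": "What are seed Q&A pairs used for?",
                "answer": "They guide synthetic data generation before training.",
            },
            {
                "question": "Where does the new knowledge come from?",
                "answer": "From source documents referenced alongside the Q&A pairs.",
            },
        ],
        "document": {  # source documents (placeholder repo)
            "repo": "https://github.com/your-org/your-docs",
            "commit": "abc1234",
            "patterns": ["*.md"],
        },
    }

    with open("qna.yaml", "w") as f:
        yaml.safe_dump(qna, f, sort_keys=False)

A form-based UI like the one described above would essentially collect these same three ingredients before the automated data generation and training steps take over.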
@Pregidth · a month ago
@@cloudnativecedric If I understand correctly, by providing exact Q&A pairs during the fine-tuning process, we are effectively guiding the LLM to produce specific, deterministic answers to certain questions. Does this mean we are reducing the inherent randomness in the answers that LLMs typically generate based on their pre-trained weights? If so, wouldn’t this approach limit the model’s flexibility to incorporate its broader pre-trained knowledge into the context of the fine-tuned domain?
@LoVe-iu9rd · a month ago
May I know what your laptop specs are?
@gauravmodi12 · a month ago
How much data does it need for proper fine-tuning?
@george_davituri · a month ago
Impressive, need to try this cool stuff.
@ajaykumarpandey7327 · 25 days ago
Which laptop is being used here?
@nadoiz · 18 days ago
You say that you have to reference the data you created with a GitHub link, and then a pull is done. Is this mandatory?
@activewire-web5710 · a month ago
What about hallucinations or guardrails?
@rajavemula3223 · a month ago
Can fine-tuning be done with a CPU? I mean, without a GPU?
@philtoa334 · a month ago
Nice.
@jacquesgastebois · a month ago
I want to do the same with a tiny model please
@vdpoortensamyn · a month ago
Our Granite models are quite tiny. 😊
@nazarmohammed5681 · 29 days ago
Please share the GitHub repo.
@Jobfox645 · 3 days ago
Sorry, but this is PEFT with LoRA, not fine-tuning the LLM to create a new base model.
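To make that distinction concrete, here is a minimal sketch of parameter-efficient fine-tuning (PEFT) with LoRA using the Hugging Face transformers and peft libraries. The checkpoint name and target_modules entries are placeholders for a LLaMA-style causal LM; the point is only that small low-rank adapters are trained while the base weights stay frozen, so no new base model is produced.

    # Minimal LoRA/PEFT sketch. Assumes transformers and peft are installed;
    # the model id and target_modules names are placeholders.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("ibm-granite/granite-7b-base")

    lora_cfg = LoraConfig(
        r=8,                                  # rank of the low-rank adapter matrices
        lora_alpha=16,                        # scaling applied to the adapters
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base, lora_cfg)
    # Only the adapter weights are trainable; the base model stays frozen.
    model.print_trainable_parameters()

Whether the resulting adapters are shipped separately or merged back in, they still sit on top of the same pre-trained base, which is the commenter's point.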