Learn the four steps to fine-tune GPT-3.5 Turbo using Python and LangChain:
1. Prepare your data
2. Upload your data
3. Train your model
4. Use your model
For copy/pasteable code, check out: haihai.ai/finetune
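The four steps can be sketched end to end. Only the data-preparation step runs without credentials, so the API calls are shown commented out; the example contents, file name, and SDK calls (openai-python v0.x style, roughly as used at the time of the video) are assumptions, not a definitive implementation:

```python
import json

# Step 1: Prepare your data as chat-format JSONL (illustrative examples;
# the video trains on the Book of Proverbs).
examples = [
    {"messages": [
        {"role": "system", "content": "You speak in proverbs."},
        {"role": "user", "content": "What is the meaning of life?"},
        {"role": "assistant", "content": "A tranquil heart gives life to the flesh."},
    ]},
    {"messages": [
        {"role": "system", "content": "You speak in proverbs."},
        {"role": "user", "content": "Should I work hard?"},
        {"role": "assistant", "content": "The hand of the diligent will rule."},
    ]},
]
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Steps 2-4 call the OpenAI API (requires OPENAI_API_KEY), so they are
# commented out here:
#
# import openai
# uploaded = openai.File.create(file=open("training_data.jsonl", "rb"),
#                               purpose="fine-tune")
# job = openai.FineTuningJob.create(training_file=uploaded.id,
#                                   model="gpt-3.5-turbo")
# # Once the job finishes, use the returned model name:
# # openai.ChatCompletion.create(model=job.fine_tuned_model,
# #                              messages=[{"role": "user", "content": "..."}])
print(f"wrote {len(examples)} training examples")
```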
Comments: 49
@Branstrom · 9 months ago
Since I had already seen two similar videos, I fast-forwarded to the end just to see what you had trained it to do. I was like, hm, the meaning of life... Totally thought it was going to answer with 42. Apparently we have different holy books!
@jordan-jones · 9 months ago
This is awesome. Detailed but to the point. Thanks for sharing, looking forward to more AI videos
@hosseinbadrnezhad2019 · 7 months ago
You answered many of my questions in this video
@joao.morossini · 5 months ago
Thanks for the great video. It will be very useful
@gcash3074 · 8 months ago
Love it. Thanks for this video!
@power-of-ai · 8 months ago
thank you for sharing your knowledge
@Teawos98 · 9 months ago
bro thank you so much for this video
@haihailabs · 9 months ago
My pleasure
@Vincent-mx4rk · 9 months ago
thanks for sharing, great job
@barryhoffman9956 · 9 months ago
Awesome! You feed it the book of Proverbs and it spits out Ecclesiastes.
@tobiasabdon · 9 months ago
Great work!
@haihailabs · 9 months ago
Thank you!
@pypypy4228 · 9 months ago
Thanks! Is the number of epochs something you want to choose? If so, how many should I choose, and how do I come up with that number?
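For reference, the epoch count can be set explicitly when creating the fine-tuning job. A sketch assuming the openai Python SDK's FineTuningJob endpoint; the file ID is hypothetical, and by default OpenAI picks the epoch count automatically based on dataset size, so only override it if the default over- or under-fits:

```python
# Build the fine-tuning job arguments (hypothetical uploaded-file ID).
job_args = {
    "training_file": "file-abc123",       # hypothetical file ID from the upload step
    "model": "gpt-3.5-turbo",
    "hyperparameters": {"n_epochs": 3},   # 3 passes over the training set
}
# With a valid API key you would run:
# import openai
# openai.FineTuningJob.create(**job_args)
print(job_args["hyperparameters"]["n_epochs"])
```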
@huilinghuang7811 · 9 months ago
nice to have this feature
@haihailabs · 9 months ago
Agreed!
@LeonardoBoz · 9 months ago
nice man! Good video.
@LeonardoBoz · 9 months ago
I am creating an assistant for my WhatsApp. Do you think I can fine-tune a model with all the products I have in my database?
@NamasteMax · 9 months ago
What I'm really trying to figure out is whether there's a way to successfully train it on newer documentation, like newer library docs for React, so that when I use it to help me code I get more up-to-date syntax.
@p0gue23 · 9 months ago
Do you need the step of generating a synthetic user question for each message, or can you leave the user strings blank if, as in the present example, you are training on a bunch of text and not an actual chat?
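For context, each line of the chat-format training file is a complete conversation, so a user turn (even a synthetic one) normally goes in every example. A sketch of one JSONL line, with illustrative contents:

```json
{"messages": [{"role": "system", "content": "You speak in proverbs."}, {"role": "user", "content": "What should I do about lazy coworkers?"}, {"role": "assistant", "content": "Go to the ant, O sluggard; consider her ways, and be wise."}]}
```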
@Teawos98 · 9 months ago
legend
@pypypy4228 · 9 months ago
Liked and subscribed
@rajutukadya2919 · 9 months ago
Hi. Very informative video. I had a question: you used LangChain here, but when we don't use LangChain and call OpenAI directly, what should the system prompt be when using the fine-tuned model?
@haihailabs · 9 months ago
The system message is optional.
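A minimal chat-format training line with the system message left out entirely (contents illustrative):

```json
{"messages": [{"role": "user", "content": "What is the meaning of life?"}, {"role": "assistant", "content": "The fear of the LORD is the beginning of knowledge."}]}
```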
@kingturtle6742 · 3 months ago
Can the content for training be collected from GPT-4? For example, after chatting with GPT-4, can the desired content be filtered and used to fine-tune GPT-3.5? Is this approach feasible and effective? Are there any considerations to keep in mind?
@astera-pt9je · 9 months ago
Do you need distinct user questions for each line of the JSONL file? I am preparing my data, and most of the lines in the file contain "can you provide more details on this?" as the user content. In addition, if I remove entries with duplicate user questions, the data shrinks significantly, which I think might be an issue since we should have as many lines as possible in our dataset.
@haihailabs · 9 months ago
You'll get better results with distinct questions.
@oskarrost9774 · 9 months ago
Hi, I have a quick question... I prepared my data, but I'm getting an error when launching the file upload saying that line 1 of my data contains no dictionary, even though line 1 is just the system message text. Do you know how to fix that?
@XWSshow · 9 months ago
Do I need a paid account for fine-tuning? I tried it out today and got an error saying that fine-tuning jobs can't be created with an exploring account… 😢 (I still have 4 dollars on my exploring account.)
@Antonego64 · 3 months ago
which Python version do you use in this video?
@user-qi9np9jk3r · 9 months ago
Amazing video, mate. I've had only one issue while doing this: while running FineTuningJob, the terminal returns this error: 'openai' has no attribute 'FineTuningJob'. Which version of openai are you currently using?
@haihailabs · 9 months ago
Make sure you run pip install --upgrade openai to get the latest version.
@trackerprince6773 · 6 months ago
What is the difference between custom GPTs and fine-tuned GPTs?
@v3teff · 4 months ago
Will I receive an email regarding the account that I use to access the OpenAI key?
@echofloripa · 8 months ago
I ran a test using questions and answers about the election process in Brazil: 67 question/answer pairs. I tried the default 3 epochs, then 5, 7, and even 12. In none of the cases did I manage to get the same response I had trained on, even for the exact same system message and user message. I tried in Portuguese and in English, and the result was the same: yes, it gave a different response compared to the base model, but never a correct answer. For the English dataset test I trimmed the 67 questions down to only 10. You can check the training loss using the API, and the numbers were erratic. I guess that, at least with gpt-3.5-turbo fine-tuning, it's not possible to get it to increase its knowledge. I did some tests with open-source LLMs, but I still have to train with Llama 2. Maybe fine-tuning isn't really fit for that, and you have to use embeddings and vector databases to achieve it.
@luiseduardo2249 · 6 months ago
Man, I'm having trouble adding the training data and validation data for fine-tuning. It asks for prompt-completion format, but whenever I try to add it, it says the data is in prompt-completion format and needs to be in chat-completion format, which ends up creating a loop of problems. Could you send me a validation and training template that you know works, so I can test it?
@echofloripa · 6 months ago
@luiseduardo2249 But did you get an error from the OpenAI API calls? Which error exactly?
@luiseduardo2249 · 6 months ago
@echofloripa It was actually a platform error; I've solved it. Would you know if there's any way to send an input to the GPT (configured with fine-tuning) from Power Automate, for example, using the API? I only see people using chatbots.
@echofloripa · 6 months ago
@luiseduardo2249 It's been a while since I worked with the OpenAI integration; I started a personal educational mobile app project in Flutter that left no time for anything else. I don't know Power Automate, but I imagine you can program an integration.
@farahimad4432 · 7 months ago
How do you get an email suggesting that your file is ready?
@JackVucivic · 9 months ago
Thank you. Good to see you are using Proverbs. Very good Book of the Bible.
@shubhamtyagi6281 · 9 months ago
I have an OpenAI key, but I'm not able to follow your video. Can you make a detailed blog and share snippets? I would like to learn and create something similar with my culture's sacred book. I'm a coding beginner; I follow Udemy videos, etc., but I am very new to AI stuff. It would be very helpful if you did that. Thanks.
@haihailabs · 9 months ago
Sure thing! haihai.ai/finetune
@timduck8506 · 6 months ago
GPT-3.5 is flawed in that it is censored, so you need to find out where it is flawed before you can ask the question, which makes it 3 to 8 times slower to get the answer you wanted, assuming it complies with the rules GPT-3.5 goes by. Otherwise you're going to get an error message or something that evades the answer to save face.
@sahil91172 · 9 months ago
Hey brother, really nice video! I was wondering if I could help you edit your videos and also make highly engaging thumbnails, which would help your videos reach a wider audience.
@Shahid_An-AI-Engineer · 9 months ago
Sir, would you please share this fine-tuning dataset with me? I'm a student and I don't have access to GPT-4. If you want to help, please reply.
@TheGuillotineKing · 9 months ago
Fun fact: you can fine-tune, but you can't get the model weights, so they keep you paying for the API. You're better off training locally on your own machine.
@D3ADPIX · 8 months ago
Good luck deploying something as powerful at scale at a cheaper price. If you have any luck, let me know.
@TheGuillotineKing · 8 months ago
@D3ADPIX Hugging Face will let you fine-tune for free on a small dataset, and for $20 a month you get a lot of resources; you should look into it.