Fine-Tune ChatGPT For Your Exact Use Case

55,778 views

Matthew Berman

9 months ago

In this video, I show you how to fine-tune ChatGPT 3.5 Turbo. This newly released fine-tuning feature lets you customize ChatGPT to your exact needs. Plus, I show you the easiest way to generate fine-tuning datasets, which is always the most challenging part of fine-tuning.
Enjoy!
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? ✅
forwardfuture.ai/
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew-berman-youtube
USE CODE "MatthewBerman" for 50% discount
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V
Links:
Google Colab - colab.research.google.com/dri...
Blog Announcement - openai.com/blog/gpt-3-5-turbo...
Matt Schumer - / mattshumer_
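For readers skimming before watching: the fine-tuning dataset the video builds is chat-format JSONL, one training example per line, each with a system, user, and assistant message. A minimal sketch of one such line (the pirate persona is an invented example, not from the video):

```python
import json

# One training example in the chat-format JSONL that OpenAI's
# fine-tuning endpoint expects: a system message, a user message,
# and the assistant reply you want the model to learn.
def make_example(system: str, user: str, assistant: str) -> str:
    record = {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},
        ]
    }
    return json.dumps(record)

line = make_example(
    "You are a helpful assistant that answers in pirate speak.",
    "What is the capital of France?",
    "Arr, that be Paris, matey!",
)
print(line)
```

A full training file is simply many of these lines, one per example.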

Comments: 114
@tunestyle · 9 months ago
Working on this now. You are an absolute wealth of information!!!
@Boneless1213 · 9 months ago
This is a great video. I would love a follow-up that tackles either a real use case, or at least a use case that isn't something I could already just ask ChatGPT to do, like inventing a new concept, giving the model an understanding of it, and having it perform intelligent tasks with it. I just don't see the point of fine-tuning unless it helps with something that simply adding to the prompt couldn't do.
@zacharypump5910 · 9 months ago
Cool example, but keep in mind that without any fine-tuning, vanilla GPT will likely yield the same results just by using that same system prompt. It's very challenging to evaluate the benefits of fine-tuning without using a truly private and distinct dataset that wouldn't be part of its base training.
@jimmytaylor1279 · 9 months ago
I tried this and did get a very similar result by using the prompt before asking the question. I think the advantage is that if you need a chatbot to respond the same way every time, you won't have to input the prompt every time. The other advantage is that you can have multiple setups and not have to change your settings each time. I can see this being useful with much larger prompts, when you want a specific style of writing and later need to do some coding, or even change coding languages. I think it has its uses.
@avi7278 · 9 months ago
Yeah, this is like custom instructions for the API, without having to spend tokens on the prompt for every response, which brings down cost over time. I think that is the point and the biggest advantage, not that you can't sometimes get the same result via a prompt.
@raj_talks_tech · 9 months ago
Agreed. It depends on a very high-quality dataset. Fine-tuning ChatGPT doesn't make a huge difference on general topics.
@anshgoestoschool · 9 months ago
Yes, however there is a limit to what you can fit in the context window for a standalone API call. Multi-shot prompting can only get you so far.
@mokiloke · 9 months ago
Yo, yo, thanks for the great videos. Makes my day.
@matthew_berman · 9 months ago
You got it!
@richardbasile · 9 months ago
Hey Matthew! I love your videos, keep up the great work. I was wondering how I could deploy a fine-tuned LLM to a service or ChatBot like you mentioned at the end of the video? It seems like an interesting concept but I have yet to find any videos on it.
@VaibhavPatil-rx7pc · 9 months ago
Excellent information, thanks!
@CoachDeb · 7 months ago
GREAT video on fine-tuning, one of the BEST! Now that GPT-4 Turbo was just released, will we still be able to do this fine-tuning? Or will it be obsolete, now that we will all have GPTs and assistants to do things like this for us?
@IrmaRustad · 9 months ago
You are just brilliant!
@rochtm · 9 months ago
Thank you 👍
@matthew_berman · 9 months ago
You’re welcome!
@wiseshopinfo · 9 months ago
Hi Matthew, thank you so much for this tutorial, this is mind-blowing. I was wondering if you could help me with the create-completion process, so I could use the generated model as an integration for other platforms?
@federicoloffredo1656 · 9 months ago
Great video, it worked for me and that's already great. Just a question: now I have a "personal" model, but in practice, how can I use it? How can I change it? It's not so clear to me...
@Truevined · 9 months ago
As of right now, the models can only be used when making an API call. These models don't show up in the ChatGPT web UI; hopefully they add that in the future. I don't believe you can change your fine-tuned model, but the only way to know for sure is to test ;) Also, keep in mind you can achieve maybe 90% of what Matt showed in this video via prompting in the ChatGPT web UI (even the example in the video), but this is a very powerful feature to have for more advanced use cases (see the blog post Matt linked in the description for more details).
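To illustrate the API-only point above: calling a fine-tuned model is the same chat completions request as for any stock model, with only the model name changed. A sketch of the request payload (the `ft:` model id below is made up):

```python
import json

# Calling a fine-tuned model is an ordinary chat completions request;
# only the "model" field changes. The ft: id here is a made-up example.
def build_chat_request(model, system, user):
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }

payload = build_chat_request(
    "ft:gpt-3.5-turbo-0613:my-org::abc123",  # hypothetical fine-tuned model id
    "You are a helpful assistant.",
    "Hello!",
)
print(json.dumps(payload, indent=2))
```

The payload would be POSTed to the chat completions endpoint with your API key, exactly as for `gpt-3.5-turbo` itself.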
@p.c.336 · 4 months ago
Thank you very much for this compact and helpful video. Do you have any idea why OpenAI made this fine-tuning so structured and inflexible? Unlike some third-party tools, where you can just upload your files, Q&As, etc. and start asking questions. I can expect structured answers for my use case by giving detailed instructions, but why should I have to feed it in such a strict way to get that result?
@Nick_Tag · 9 months ago
Might be a silly question, but how would you actually use the fine-tuned model (either as a chatbot or within other apps)? I guess there would be a unique API key and model name to put in the relevant places, but is there a recommended ChatGPT-like app where you would just paste those in as variables and get a similar experience? Doing it inside the same Colab seems a little clunky.
@muh6131 · 8 months ago
Very good. I have a question: can I use gpt-3.5 in your code? I got an error. Thanks.
@Aidev7876 · 9 months ago
Just to understand: the data you generated with the 50 entries, the system prompt, temperature, etc. is all stored under your OpenAI account somewhere in the cloud? And GPT appends it as context before running the query?
@marcfruchtman9473 · 9 months ago
Awesome.
@09jake12 · 9 months ago
Hi Matthew, I'm looking for something like this that searches the internet for actual data to train on, rather than synthetic data, because my use case requires updated and recent data (say 2021 and later). Could you point me in the right direction? Thanks!
@wiseshopinfo · 9 months ago
Sorry for taking advantage of your goodwill, but I have one more question. I am trying to integrate the model trained through the Colab (fine-tune) into a chatbot, but the chatbot I am using currently only supports text models like davinci-003. Is there a Colab, similar to the one you shared on YouTube, that does the same with models like davinci? Also, is there a way to delete or rename models that have already been created? I've been scratching my head trying to find that. And one last thing (so, so sorry): I trained a model to act like an employee of an online commerce site, using a detailed prompt with all the details, but when testing the model in the playground, it does not act like it was supposed to. Am I doing something wrong?
@mernsolution · 5 months ago
After building the model, I use it in Node.js. Thanks!
@johnnvula7246 · 4 months ago
That is cool. So can I reuse the same API to integrate it into some other platform, like Flutter, to create an app? Or do I have to create another API for the fine-tune?
@kingturtle6742 · 3 months ago
Can the content for training be collected from ChatGPT-4? For example, after chatting with ChatGPT-4, can the desired content be filtered and integrated into ChatGPT-3.5 for fine-tuning? Is this approach feasible and effective? Are there any considerations to keep in mind?
@AshiqKhan-ky5cu · 6 months ago
Hey, thanks for this video. I have some large PPT files and I want to fine-tune on their content. How can I achieve this?
@drgnmsr · 9 months ago
Would there be a way to use this to fine-tune a model based on a collection of PDFs?
@mirmisbahuddin9921 · 9 months ago
Please prepare a video on fine-tuning llama-2-7B using Colab.
@Techonsapevole · 9 months ago
+1
@otaviopmartins · 9 months ago
Awesome
@anshgoestoschool · 9 months ago
Can a fine-tuned model be fine-tuned further by adding more examples and training again?
@vKILLZ0NEv · 7 months ago
Does the format of the dataset have to be system, user, assistant as shown here?
@chanansiegel834 · 9 months ago
Can I fine-tune on my own computer and then upload the model to OpenAI? I have some medical reports which I would like the AI to learn how to write, but I have to be careful about who has access to those reports.
@mdnpascual · 6 months ago
I want to make a chatbot for a retailer. A customer can prompt "suggest a gift item for an 8-year-old girl who loves etc. etc." Is this the right solution for me? If yes, does the training data always need to be in question/response form? I already have the retailer's dataset: their catalogue plus tags, like toys, education, kitchen, etc. How can I format the data so ChatGPT can use it?
@thiagofelizola · 9 months ago
What is the limit on dataset size for fine-tuning?
@RichardGetzPhotography · 9 months ago
When fine-tuning, do I always have to use the roles format? Can I upload a bunch of docs and have it gain the voice from there? Say I want it to speak in an engineering tone; would uploading our engineering papers aid in that? If I do have to use the roles format, then how do I fine-tune on my data for knowledge?
@matthew_berman · 9 months ago
Wow thank you so much! Yes, you have to use the roles format. If you want it to have access to additional knowledge, such as from your engineering papers, you’re probably looking to do RAG, aka using a vector DB to store info and provide additional context at inference-time. Does that make sense? Feel free to shoot me an email (address in my bio) if you want more info.
@echofloripa · 9 months ago
@matthew_berman I have a similar question: I need ChatGPT to be aware of the information in some laws. I know about the vector database, but wouldn't it be better to have both? Training with the law texts and also having access to the vector database with the exact text?
@joepropertykey3612 · 9 months ago
@matthew_berman Curious about this too. I wouldn't want to use one of the usual suspects like Supabase or whatnot, just because Postgres extensions for vector databases are booming. The Postgres extensions pg-embeddings and pg-vector will let you install a vector DB on a local machine with Postgres.
@rasterize · 9 months ago
@echofloripa In my understanding, you don't actually store specific data when you fine-tune. It's more like changing the flavor of how responses will be delivered back to you. It might remember bits and pieces when using a fine-tune, but it is a very inaccurate and cumbersome way to store data for retrieval. With RAG and a vector DB, the model will search for and retrieve the specific data that is most likely to be what you're looking for. With a fine-tune, it will respond with data that seems most likely to look correct in the context you give it. So it will look like law data and feel very official, but references and numbers are just made to feel right. Hope this makes sense 😊
@echofloripa · 9 months ago
@rasterize I understand your point, yes, but I'm still not totally convinced. 😀 I guess I'll have to test it out. I did with a small LLM; I will try with Llama 2 and GPT-3.5.
@echofloripa · 8 months ago
I did a test using questions and answers about the election process in Brazil. It had 67 questions and answers. I tried the default 3 epochs, then 5, 7, and even 12. In none of the cases did I manage to get the same response I had trained on, for the exact same system message and user message. I tried in Portuguese and in English, and the result was the same: it gave a different response compared to the base model, but never a correct answer. For the English dataset test I trimmed the 67 questions down to only 10. You can check the training loss using the API, and the numbers were erratic. I guess that, at least with gpt-3.5-turbo fine-tuning, it's not possible to get it to increase its knowledge. I did some tests with open-source LLMs, but I still have to train with Llama 2. Maybe fine-tuning isn't really fit for this, and you have to use embeddings and vector databases instead.
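A toy sketch of the RAG approach discussed in this thread: retrieval picks the stored chunk whose embedding is nearest the query embedding, and that chunk is pasted into the prompt as context. The 3-d vectors and example texts are invented stand-ins for real embeddings:

```python
import math

# Toy RAG sketch: store document chunks with embedding vectors,
# retrieve the most similar chunk for a query, prepend it to the
# prompt. Real systems use an embeddings API and a vector DB; the
# 3-d vectors below are made up for illustration.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

store = [
    ("Beam deflection is computed with the Euler-Bernoulli equation.", [0.9, 0.1, 0.0]),
    ("Our refund policy allows returns within 30 days.", [0.0, 0.2, 0.9]),
]

def retrieve(query_vec):
    # return the stored text whose vector is closest to the query
    return max(store, key=lambda item: cosine(item[1], query_vec))[0]

context = retrieve([0.8, 0.2, 0.1])  # pretend-embedding of an engineering question
prompt = f"Use this context to answer:\n{context}\n\nQuestion: ..."
print(context)
```

Unlike a fine-tune, the retrieved text is quoted verbatim, which is why references and numbers stay exact.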
@kocahmet1 · 2 months ago
awesome
@trackerprince6773 · 7 months ago
Matthew, if I have a database of PDF docs and I want to fine-tune gpt-3.5-turbo on a private knowledge base, how can I use GPT-4 to create the training data? Also, would I need to fine-tune again when new docs are added to my knowledge base, or would I just add them to the vector DB and query my custom model?
@p.c.336 · 4 months ago
I recently asked a similar question and noticed that you haven't received an answer yet. Could you share your experience if you've found a solution? If not, here's my plan for my use case, which might inspire you or others: I plan to consolidate my knowledge base into a few PDF files, ask ChatGPT (4) to convert it into a dataset that I can use to feed my own fine-tuned model, and proceed with the rest as explained here.
@dameanvil · 4 months ago
00:03 🛠 Fine-tuning ChatGPT allows customization for specific use cases, reducing costs and improving efficiency.
00:37 📄 Fine-tuning with GPT-3.5 Turbo offers improved steerability, reliable output formatting, and a custom tone for desired behavior.
01:07 📋 Three steps to fine-tuning ChatGPT: prepare data, upload files, and create a fine-tuning job.
01:42 🖥 Google Colab simplifies fine-tuning and synthetic dataset creation, making it easy with a few clicks.
01:52 🔄 Use the provided Google Colab script to generate synthetic datasets for fine-tuning, specifying prompts and parameters.
02:37 🤖 API key creation and system message generation are part of the Google Colab script to facilitate dataset creation.
03:19 🧾 Format your examples for fine-tuning with a system message, user message, and assistant output example.
04:32 🔄 Upload the formatted file to initiate the fine-tuning process, which takes about 20 minutes.
05:21 ⌛ Track the fine-tuning progress using the Google Colab script, getting updates on each step.
05:23 ✔ Successfully completed fine-tuning results in a new custom GPT-3.5 Turbo model with a specific name.
05:36 🧾 Save the model name for future API calls and testing.
05:40 🧪 Test the fine-tuned model by replacing sample values with custom prompts to get desired responses.
06:16 🚀 Customized models can be used for personal, business, or chatbot applications, offering a tailored AI experience.
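The "prepare data" step from the recap above can be sketched as a quick pre-upload sanity check over the JSONL file. The checks here are illustrative, not OpenAI's official validator:

```python
import json

# Sketch of a pre-upload check for a chat-format JSONL dataset:
# every line must be valid JSON and contain an assistant message
# for the model to learn from. Checks are illustrative only.
def validate_jsonl(lines):
    errors = []
    for i, line in enumerate(lines, 1):
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            errors.append(f"line {i}: not valid JSON")
            continue
        messages = record.get("messages", [])
        roles = [m.get("role") for m in messages]
        if "assistant" not in roles:
            errors.append(f"line {i}: no assistant message to learn from")
    return errors

good = '{"messages": [{"role": "user", "content": "hi"}, {"role": "assistant", "content": "hello"}]}'
bad = '{"messages": [{"role": "user", "content": "hi"}]}'
print(validate_jsonl([good, bad]))
```

Running a check like this before uploading saves a round-trip when the fine-tuning job would otherwise reject the file.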
@testadrome · 9 months ago
How is fine-tuning helping to reduce the cost of inference? Price per 1K tokens is 8x higher on fine-tuned vs. base models
@matthew_berman · 9 months ago
Because you can use fewer tokens (less explanation) but still get the result you want.
@thenoblerot · 9 months ago
The price compared to untuned 3.5 is higher, but a fine-tuned model can perform as well as GPT-4, for cheaper.
@echofloripa · 9 months ago
What if I want to train with question-answers from my niche (a specific area of law), and afterwards train with the full text of several laws? Can I do that?
@benmak5326 · 9 months ago
I am doing that. If you want a specific case law or statute, you want a start-to-finish output: case, breach, remedy, outcome, precedent. Give it high-quality text samples and refine :)
@echofloripa · 9 months ago
@@benmak5326 could you detail that please?
@1242elena · 9 months ago
Please share once it's done
@echofloripa · 8 months ago
@benmak5326 I did a test using questions and answers about the election process in Brazil. It had 67 questions and answers. I tried the default 3 epochs, then 5, 7, and even 12. In none of the cases did I manage to get the same response I had trained on, for the exact same system message and user message. I tried in Portuguese and in English, and the result was the same: it gave a different response compared to the base model, but never a correct answer. For the English dataset test I trimmed the 67 questions down to only 10. You can check the training loss using the API, and the numbers were erratic. I guess that, at least with gpt-3.5-turbo fine-tuning, it's not possible to get it to increase its knowledge. I did some tests with open-source LLMs, but I still have to train with Llama 2. Maybe fine-tuning isn't really fit for this, and you have to use embeddings and vector databases instead.
@jackiekerouac2090 · 9 months ago
Great video, which brought up 4 questions: 1) Can you use this process with free accounts? 2) How secure is it, especially if you upload personal files? 3) Let's say I have a 300-page novel in draft mode; can I "securely" upload it to GPT? 4) Is there a way to use ChatGPT as a standalone tool, for your own stuff only?
@colecrum3542 · 9 months ago
I recommend you research tokens and how this language model ingests lots of data. It can't take a 300-page novel all at once; in theory you need a way to break your book up into word chunks for GPT to understand. Also look into Pinecone and vector data storage to do that. OpenAI says they do not use private documents to tune their models, so it's safe to upload your own info. Using GPT as your own tool is a vague question, but if you mean as your own personal chatbot, kind of. You can see in this video that you can tune your bot to give specific answers, but feeding it your own data and then asking it to take action on that data is difficult for it. Again, I refer you back to Pinecone and data storage with LLMs.
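The chunking idea mentioned above might look like this in outline; the chunk size and overlap are arbitrary illustration values:

```python
# Sketch of splitting a long text into overlapping word chunks small
# enough for an embedding model. Chunk size and overlap are arbitrary
# illustration values, not recommendations.
def chunk_words(text, chunk_size=200, overlap=20):
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunk = words[start:start + chunk_size]
        if chunk:
            chunks.append(" ".join(chunk))
        if start + chunk_size >= len(words):
            break
    return chunks

novel = "word " * 450  # stand-in for a 300-page novel
print(len(chunk_words(novel)))
```

Each chunk would then be embedded and stored in a vector database such as Pinecone for retrieval.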
@p0gue23 · 9 months ago
Is there a tool anywhere that will convert text to system/user/assistant JSON format for fine-tuning?
@mungojelly · 9 months ago
Um, ask GPT-4 to write you a converter that fits your data. Give it examples of the data and the format you want, and it should be able to write you a script to convert it.
@p0gue23 · 9 months ago
@@mungojelly Interesting. I've actually used gpt to convert text to JSONL, but I didn't consider having it write a script. Time to experiment...
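The converter script suggested in this thread could be as simple as the sketch below; the `Q:`/`A:` input layout and the default system message are assumptions about what the source text looks like:

```python
import json

# Sketch of the kind of converter GPT-4 could write for you: turn
# plain "Q: ... / A: ..." text into chat-format JSONL for fine-tuning.
# The input layout and default system message are assumptions.
def qa_text_to_jsonl(text, system="You are a helpful assistant."):
    lines_out = []
    question = None
    for raw in text.splitlines():
        raw = raw.strip()
        if raw.startswith("Q:"):
            question = raw[2:].strip()
        elif raw.startswith("A:") and question:
            answer = raw[2:].strip()
            lines_out.append(json.dumps({"messages": [
                {"role": "system", "content": system},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]}))
            question = None
    return "\n".join(lines_out)

sample = "Q: What is 2+2?\nA: 4\nQ: Capital of France?\nA: Paris"
print(qa_text_to_jsonl(sample))
```

Any other source layout just needs the two `startswith` branches adapted.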
@jeffnall5206 · 21 days ago
The Colab example no longer functions. I get an APIRemovedInV1 exception. I tried to run "openai migrate" as suggested but that won't work here.
@SRSGMAG · 9 months ago
Why can't you just type the same command in the chat prompt instead of all this?
@nor3299 · 9 months ago
Yeah, that was amazing, but is there a method through which we can create the dataset using data from a PDF file, etc.?
@matthew_berman · 9 months ago
Yes. But likely you just want to use a vector db instead of fine-tuning.
@GenericMeme42 · 9 months ago
@@matthew_berman I’d much rather train a model on my data. The vector db seems like it would fail on large topics where the relevant answer requires a significant return from the vector db that may exceed the context window or cost a lot in tokens.
@gajyapatil5224 · 9 months ago
What is the difference between using ChatGPT fine-tuning and using LangChain? I find LangChain more general-purpose and useful for fine-tuning not only GPT models but also open-source models.
@thenoblerot · 9 months ago
LangChain is for chaining together prompts and for doing information retrieval. Fine-tuning aligns the model with your desired style of output.
@BlissfulBloke · 9 months ago
@thenoblerot So one could perhaps use both methods for a refined output style on a specific data set? A set of podcast transcript PDFs, using fine-tuning to have the responses sound like Joe Rogan, for example?
@thenoblerot · 9 months ago
@BlissfulBloke Sure, although you don't NEED LangChain. IMHO it adds unnecessary abstraction and complexity. It seems rude to directly link to another YouTuber here, lol, but search for "create Data Star Trek fine tuning gpt-3.5" for a demo of how to fine-tune a persona from a script.
@p0gue23 · 9 months ago
Useful to see the process, but how is fine-tuning gpt-3.5 with its own output any different from just using stock gpt-3.5 with the same training system message? The fine-tuned version costs 8x more to run.
@mungojelly · 9 months ago
You've got to actually train it on something, with enough examples that it becomes more than 8x easier to get the right answers; that's a high bar. You can train it to be concise and get to the point, so that helps a bit. But mostly I'd think it's most useful for things you really can't get to within the context window: even if you give a bunch of style notes for a character, that's not going to be as effective as training on a big corpus that sounds right, where it can pick up a zillion little details.
@jimigoodmojo · 9 months ago
It's not, BUT: a specialized fine-tuned GPT-3.5 can replace expensive general models like GPT-4 and get equal or better performance on certain tasks, saving on per-token costs. Shorter prompts enabled by specialization directly reduce input token costs, and higher-quality outputs from fine-tuned models reduce wasted tokens from failures/retries.
@truefrontier · 2 months ago
When, where, or how can we use GPT-4 Turbo instead of 3.5 Turbo?
@hqcart1 · 9 months ago
I don't get it. What data did you train the model on?
@matthew_berman · 9 months ago
What do you mean?
@TzaraDuchamp · 9 months ago
It’s synthetic data that is based on the prompt that you create in the Colab Notebook by calling GPT-4 (recommended, but more costly) or 3.5. In this video, he used number_of_examples = 50. That means 50 synthetic examples are created. Why use GPT-4 for this? Because that model is more advanced than 3.5 and gives more consistent and expected output. When creating synthetic data for fine-tuning you want it to conform to your standards as much as possible.
@aadityamundhalia · 9 months ago
How can you do the same with Llama 2 locally?
@moon8013 · 9 months ago
nice
@trailblazer7108 · 9 months ago
Please can you test the Phind Code Llama 32b model? Apparently it's better than ChatGPT-4.
@ramsesmendoza8951 · 9 months ago
I was really excited about this, until I found out that the training is also censored. I even got an email from them giving me 2 weeks to fix the problem. BTW, the data was not even NSFW. So right now it's great for business and family-friendly stuff.
@mattbarber6964 · 9 months ago
Same here. What a bummer. It's basically nothing more than a dumbed-down version of what LangChain or Code Interpreter can already produce from a PDF upload.
@blacksage81 · 9 months ago
I had a feeling it would end up like this. It looks like people who need a fine-tuned LLM for off-the-beaten-path purposes will need to grab a 7B model, with at least 10 GB of VRAM to make it comfortable locally, or rent a GPU. Hm, I wonder if there will be a market for 1B or 3B LLMs?
@eyemazed · 9 months ago
What's the difference between fine-tuning and just using custom instructions?
@mungojelly · 9 months ago
Custom instructions just add something automatically to every prompt. "Fine-tuning", as we're euphemistically calling it, means actually waking the robot up, letting it see some data, and having it learn something. You can show it any dataset, including things that aren't in what it was already trained on at all, or things shaped in a very different way, and it'll learn new facts and ideas and processes and formats from that, eventually. Given fifty examples it won't really learn much; it needs more like thousands, ideally hundreds of thousands or millions, and then it'll actually grok things.
@eyemazed · 9 months ago
@mungojelly But for practical purposes it's much the same as careful prompt engineering, or custom instructions? As in, you can achieve the same or very similar responses by prefixing your prompts instead of fine-tuning the entire model?
@andreaswinsnes6944 · 9 months ago
Can you make a video about how to fine-tune an LLM for game modding?
@matthew_berman · 9 months ago
Can you clarify?
@ZuckFukerberg · 9 months ago
I would like to know if you are referring to the model roleplaying as an NPC or if you want the model to help you create mods for a game. I'm interested in both cases hehe.
@andreaswinsnes6944 · 9 months ago
Would be nice to have an AI co-pilot that can quickly search and replace things in AAA games that are based on any significant engine, like Unity or Unreal for instance. For example, if I want to find all instances of the word “Sevastopol” in an Aliens game and replace it with “Sierra Leone”, then it can be very tedious to do this manually, since that word can be found on walls in the game, in texts discovered in the game, in spoken NPC dialogues and in subtitles. Would be amazing to have an LLM that can do all this “search and replace” after a single prompt. Similarly, it would be cool if an LLM can find all instances of a certain type of vehicle in a game, maybe in Stalker or Fallout 4, and replace it with another type of vehicle, after having been given a single prompt. Is it possible to fine-tune an LLM for this kind of modding?
@MetaphoricMinds · 9 months ago
There seems to be an awful lot of work to get this working. How hard would it be to create an application that requires no code? You just open it, input your requirements in the fields provided, and voilà.
@MetaphoricMinds · 9 months ago
The only problem I have with some of your videos is that they are too high-level. Sometimes you rush through sections that you may have explained in a previous video. I understand you can't go all the way into each aspect due to time, but maybe add a quick reference on "how to do that". Just a suggestion. Otherwise, great content!
@glenraymond379 · 9 months ago
I think it's on purpose, so we feel the need to buy his Patreon...
@achille_king · 9 months ago
I mean, this is nice, but I wanted to fine-tune ChatGPT with the knowledge I have in PDFs.
@oryxchannel · 9 months ago
0:46 OpenAI can make us behave the way they want us to behave, lol.
@mrmastr · 4 months ago
This doesn't work anymore
@mastamindchaan387 · 9 months ago
I'm sorry, but the tutorial is really bad. He just reads out what's on the page, and I can read on my own. I still don't know what I'm supposed to do next. So, when I click on upload, is my model automatically uploaded to OpenAI's servers, and can I then use my model on the ChatGPT site just like usual? What should I put in the tokens field? The default is 1000 tokens, but what if I want to train with more than 1000 words? Do I change it to 200k tokens when I want to train with 200k words, or not? I also don't get the "role, system, content" part. Do I have to set everything up beforehand? Like, if I input "dog" as the prompt, should ChatGPT respond with "nice"? Example: System: You are my teacher. User: Tell me how much 5+5 is. Output should be: Sure, I will help you; 5+5 is 10. System: You are answering me as someone interested in books. User: What's the name of the guy from Harry Potter? Output should be: His name is Harry Potter. I still have no idea how to train ChatGPT to, for example, learn the content of a book and then answer questions about it. I can't preconfigure every question and answer beforehand.
@thenext9537 · 9 months ago
Fine-tuning is a train wreck; if you get really deep, you'll find it's not that hot. I don't appreciate being treated like a child, with data that's been lobotomized. The pace is accelerating, and yet all I keep finding is walls. I keep hitting a great idea, try to execute something, and realize I'm going to need an 11-step flow to accomplish it. Fragmented reality. I know there are people out there with a lot of experience thinking the same thing. It's frustrating.
@dhananjaywithme · 9 months ago
So are you saying the fine-tuning output wasn't that great? #FragmentedRealityIndeed
@thenext9537 · 9 months ago
@dhananjaywithme If you spend enough time and ask something like "is 27 a prime number", then tell it it isn't, then tell it it is (providing "evidence" of both outcomes), it will change its mind constantly. I did this 2 weeks ago, and trying it now it's still *SORT OF* true; I think they updated it a bit and these things are less common now. I.e., I tell it it is NOT a prime number, it apologizes; I say it IS a prime number, I get "Of course, my apologies". When that happened, I lost all trust in it and realized this is a tool that needs heavy vetting. My point is: exploit and learn. You find something? Keep it close until you can prove it.
@claudiodev8094 · 9 months ago
You don't want to fine-tune your models; it's not worth it. What little you gain in stability and save in prompting is lost immediately, since you now have a static model that is what it is and nothing else. Invest in proper prompts and validation instead.
@428manish · 8 months ago
Getting error:
Exception has occurred: InvalidRequestError
Resource not found
File "C:\D-Drive\AI-ChatBot\ChatBot-chatGpt\fineTune-upload.py", line 22, in
response = openai.FineTuningJob.create(
openai.error.InvalidRequestError: Resource not found
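Both error reports above (`APIRemovedInV1` in the Colab, and the traceback calling `openai.FineTuningJob.create`) trace back to the openai-python v1 migration, which removed the module-level classes. A sketch of the v1-style equivalent call; the file id is made up, and actually running it requires `pip install openai>=1.0` and `OPENAI_API_KEY` set:

```python
# Sketch of the v1-style call that replaces the removed
# openai.FineTuningJob.create. The file id is a made-up example;
# running this for real needs openai>=1.0 and OPENAI_API_KEY.
def create_job_v1(training_file_id, model="gpt-3.5-turbo"):
    from openai import OpenAI  # imported lazily; requires `pip install openai>=1.0`
    client = OpenAI()
    return client.fine_tuning.jobs.create(
        training_file=training_file_id,
        model=model,
    )

# create_job_v1("file-abc123")  # not executed here: needs an API key
print(create_job_v1.__defaults__)
```

File upload and chat calls moved the same way, to `client.files.create(...)` and `client.chat.completions.create(...)`.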