Code to Fine-tune ChatGPT w/ synthetic GPT-4 dataset

4,269 views

code_your_own_AI

9 months ago

A Jupyter Notebook in Python to fine-tune ChatGPT, with a synthetic dataset created by GPT-4 for a user-specific task. Covers multi-GPT system interaction and advanced prompt engineering, and explains Matt Shumer's GPT-LLM-Trainer in detail.
All rights remain with Matt Shumer for his Jupyter Notebook on fine-tuning the Llama 2 model:
colab.research.google.com/dri...
See also Matt Shumer's Github repo for the GPT-LLM-Trainer:
github.com/mshumer/gpt-llm-tr...
Recommended: also check out this informative YT video by @SophiaYangDS for her detailed code implementation of fine-tuning ChatGPT with a different dataset from Hugging Face:
• Fine-tune GPT3.5 Turbo...
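For orientation, here is a minimal sketch of the two-step workflow described above (GPT-4 synthesizes training examples, then gpt-3.5-turbo is fine-tuned on them). It assumes the current openai Python SDK (>= 1.0); the task text, example count, and prompt wording are illustrative placeholders, not the exact code from the notebook.

```python
# Sketch only: GPT-4 generates synthetic training examples for a user-specific
# task, which are then used to fine-tune gpt-3.5-turbo via the OpenAI API.
# TASK_DESCRIPTION and N_EXAMPLES are illustrative, not from the notebook.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TASK_DESCRIPTION = "Answer questions about the OpenAI fine-tuning API."
N_EXAMPLES = 50

def synthesize_example(task: str) -> dict:
    """Ask GPT-4 to invent one (user prompt, ideal answer) pair for the task."""
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=1.0,
        messages=[{
            "role": "user",
            "content": (
                f"Task: {task}\n"
                "Invent one realistic user prompt for this task and an ideal answer.\n"
                "Reply as exactly two lines: 'PROMPT: ...' then 'ANSWER: ...'"
            ),
        }],
    )
    text = resp.choices[0].message.content
    prompt_part, answer_part = text.split("ANSWER:", 1)
    return {"prompt": prompt_part.replace("PROMPT:", "").strip(),
            "answer": answer_part.strip()}

# 1) Build a JSONL file in the chat fine-tuning format.
with open("train.jsonl", "w") as f:
    for _ in range(N_EXAMPLES):
        ex = synthesize_example(TASK_DESCRIPTION)
        f.write(json.dumps({"messages": [
            {"role": "system", "content": TASK_DESCRIPTION},
            {"role": "user", "content": ex["prompt"]},
            {"role": "assistant", "content": ex["answer"]},
        ]}) + "\n")

# 2) Upload the dataset and start the fine-tuning job.
train_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=train_file.id,
    model="gpt-3.5-turbo",
    hyperparameters={"n_epochs": 3},
)
print("Fine-tuning job started:", job.id)
```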
#ai
#chatgptprompts
#code

Comments: 7
@PoornimaDevi-yx9oh
@PoornimaDevi-yx9oh 9 months ago
Cool work & showcase! Thanks for sharing this here.
@echofloripa
@echofloripa 8 months ago
I did a test using questions and answers about the election process in Brazil. It had 67 questions and answers. I tried the default 3 epochs, then 5, 7 and even 12. In none of the cases did I manage to get the same response I had trained on, for the exact same system message and user message. I tried in Portuguese and in English, and the result was the same. Yes, it gave a different response compared to the base model, but still never a correct answer. For the English dataset test I trimmed the 67 questions down to only 10. You can check the training loss using its API, and the numbers were erratic. I guess that, at least with gpt-3.5-turbo fine-tuning, it's not possible to get it to increase its knowledge. I did some tests with open-source LLMs, but I still have to train with Llama 2. Maybe fine-tuning isn't really fit for that, and you have to use embeddings and vector databases to achieve it.
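For reference, the training loss mentioned in this comment can be read back through the fine-tuning events endpoint. A minimal sketch, assuming the current openai Python SDK (>= 1.0); the job id is a placeholder for the one returned when the job was created.

```python
# Sketch: read back status and per-step training loss for a fine-tuning job.
from openai import OpenAI

client = OpenAI()

job_id = "ftjob-..."  # placeholder: use the id returned by jobs.create
job = client.fine_tuning.jobs.retrieve(job_id)
print("status:", job.status)

# Event messages include lines such as "Step 10/67: training loss=0.43".
events = client.fine_tuning.jobs.list_events(fine_tuning_job_id=job_id, limit=50)
for event in reversed(events.data):
    print(event.message)
```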
@sameekhan1936
@sameekhan1936 9 months ago
excited to try this out!
@yannickpezeu3419
@yannickpezeu3419 9 months ago
Thanks a lot for all your work. I have a question: all the fine-tuning I see is done with a dataset of question/answer pairs. Is it possible to give the model some raw text? I would like to give the model all the scientific publications and courses of my institution (EPFL), so that the model would know everything about the research and courses done at EPFL. In an ideal world I could then ask it things like who is an expert in which field, or what is the best course to learn this or that. Have you heard of something like this?
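One common workaround for the raw-text question above, in the spirit of the video's synthetic-data approach, is to let GPT-4 turn each chunk of raw text into a question/answer pair before fine-tuning, since chat fine-tuning expects conversation-formatted examples rather than documents. A hypothetical sketch; the chunking, prompt wording, and helper name are assumptions, not from the video.

```python
# Sketch: convert raw text (e.g. a publication abstract) into one Q&A record
# in the chat fine-tuning format.
import json
from openai import OpenAI

client = OpenAI()

def raw_text_to_qa(chunk: str) -> dict:
    """Ask GPT-4 for one question the text answers, plus the answer itself."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                "Write one question that the text below answers, then the answer.\n"
                "Format: 'Q: ...' on one line, 'A: ...' on the next.\n\n" + chunk
            ),
        }],
    )
    q_part, a_part = resp.choices[0].message.content.split("A:", 1)
    return {"question": q_part.replace("Q:", "").strip(), "answer": a_part.strip()}

# Example: one abstract becomes one fine-tuning record.
abstract = "..."  # placeholder: a paragraph of raw text from a publication
qa = raw_text_to_qa(abstract)
record = {"messages": [
    {"role": "user", "content": qa["question"]},
    {"role": "assistant", "content": qa["answer"]},
]}
print(json.dumps(record))
```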
@yannickpezeu3419
@yannickpezeu3419 9 months ago
thx
@cecilsalas8721
@cecilsalas8721 9 months ago
🤩👍
@code4AI
@code4AI 9 months ago
Great to hear!