I Taught an Open Source AI to Think Like OpenAI o3 (here’s how)

3,186 views

Ai Austin

1 day ago

Comments: 14
@bunbox • 1 day ago
Going to give this a shot as soon as I get home. Thanks Austin, you make some of the best and most genuinely useful tutorials out there in the open-source AI space. Everyone else rarely gives anything functionally useful, whereas you give great stuff others can build on top of.
@SCHaworth • 1 day ago
This is precisely how I thought it should be done, but I had no idea how to do it. Appreciate you.
@xAgentVFX • 12 hours ago
GOAT. I tried a very crude version months ago when you dropped the Memory Agent tutorial. I just made it reflect on the prompt and its initial thoughts 3 times before the final response was sent, but hooking it up to 405B was basically triple the price.
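The "reflect a few times before answering" loop described in the comment above can be sketched roughly like this. This is a minimal illustration, not the video's actual code: `ask` is a hypothetical stand-in for any chat-model call (for example, a request to Ollama's `/api/generate` endpoint), kept pluggable here so the control flow is visible without a running backend.

```python
# Sketch of a crude self-reflection loop: draft reasoning, critique it
# n times, then produce a final answer. `ask(prompt) -> str` is any
# LLM call you supply (e.g. a wrapper around an Ollama HTTP request).

def reflective_answer(prompt, ask, n_reflections=3):
    """Draft, reflect n_reflections times, then answer."""
    # Initial chain-of-thought draft.
    draft = ask(f"Think step by step about: {prompt}")
    # Each pass critiques and rewrites the previous reasoning.
    for _ in range(n_reflections):
        draft = ask(
            f"Question: {prompt}\n"
            f"Current reasoning: {draft}\n"
            "Critique this reasoning and write an improved version."
        )
    # Final response conditioned on the refined reasoning.
    return ask(
        f"Question: {prompt}\n"
        f"Refined reasoning: {draft}\n"
        "Give the final answer only."
    )

if __name__ == "__main__":
    # Stub model that just numbers each call, to show the control flow:
    calls = []
    def stub(p):
        calls.append(p)
        return f"response-{len(calls)}"
    result = reflective_answer("Why is the sky blue?", stub, n_reflections=3)
    print(len(calls), result)  # 5 response-5 (1 draft + 3 reflections + 1 final)
```

This also makes the commenter's cost point concrete: every reflection is a full extra model call, so three reflections plus the draft and final reply means roughly 5x the requests of a single-shot answer, which is why wiring it to a 405B model multiplied the price.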
@gnsdgabriel • 1 day ago
Nice video. Thank you for sharing.
@flow.philosophy • 1 day ago
I'd be really interested to see how this performs compared to the vanilla model, compared to o1, etc. I realize 3.17b isn't SOTA, but I wonder how far the CoT process alone will carry it.
@mixmax6027 • 1 day ago
DeepSeek and others. Hope it breaks OpenAI.
@JustinJohnson13 • 1 day ago
With DeepSeek R1 out now, how does this compare?
@DantePowell • 1 day ago
You'd probably have to build this out yourself and do some testing to see. I'm going to try it myself.
@Mono_Autophobic • 1 day ago
So basically:
Website: o1 quality, but censored.
Locally run (671b): knowledge of everything up to July 2024; literally anything you ask, it will answer with o1 quality (yes, even illegal subjects). The only problem is you need either 1 TB of RAM or 700 GB of VRAM, which is hella expensive.
@Mono_Autophobic • 1 day ago
Alternatively, if you want something that can run in 24 GB of VRAM (a 4090), you can use the 16b or 32b DeepSeek R1 models, but those are at the level of GPT-4o, not o1.
@lio1234234 • 1 day ago
Have you attempted this with the non-finetuned Llama model? I'd have thought that when it comes to training a model specifically to generate the reasoning steps, training off the pretrained model would be better, no?
@Ai_Austin • 1 day ago
I have not. It would require a much larger dataset and wouldn't work with Ollama, but it could absolutely yield better performance if you wanted to put in the time to create that large, diverse dataset.
@websurfer-x2o • 1 day ago
DeepSeek R1 lovers are here 😂