Do you use a special setting when uploading your videos? I am blind, and when I pause your videos on my Mac, I can read the output of your screen in plain text, so I can copy commands "and stuff" from your terminal. I haven't seen this before and I think it's really cool
@davidhardy3074 (1 month ago)
Bro, you're blind? This is easily one of the coolest YouTube comments I've ever seen. Knowing you're blind and able to comment and interact is humbling and a bit awe-inspiring. There should be some damn cool tech coming in the next 10 years with AI that will be beneficial to a lot of people. I can't wait for everyone to be able to interact agentically with systems using voice and language models. I think that will be soon.
@AICodeKing (1 month ago)
I'm really flattered to hear this. I don't do anything special; it's just that I always have a simple-looking screen in the video, so I guess that helps with the screen reading.
@다루루 (1 month ago)
That's amazing! Wishing you happiness always~ 😊
@simeonnnnn (1 month ago)
It's a Mac thing. On Macs, you can highlight and copy the text off the screen in a YouTube video. I noticed the same.
@antoniofuller2331 (1 month ago)
Jesus Christ, how did you even write this LMAO
@warlockassim4240 (1 month ago)
Always like it when I hear "open source"
@miquelladesu (1 month ago)
waiting for Polo-O1 now
@makelvin (1 month ago)
I was surprised by how bad this model is as well. Qwen 2.5 is way better. In fact, I use Qwen 2.5 even more than Llama 3.2. How is it possible that both Marco-o1 and Qwen 2.5 came from the same company, with Marco-o1 being the later release? Don't they check the results and compare them ahead of time?
@ZhenyuXiao-f6z (1 month ago)
Marco and Qwen are two completely different technical routes, like GPT and o1. Marco emphasizes MCTS enhancement, which is really a kind of technical exploration. They say in their paper: "This work aims to explore potential approaches to shed light on the currently unclear technical roadmap for large reasoning models."
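The MCTS-style branch selection mentioned above can be illustrated with the standard UCB1 rule (a generic sketch of the technique only, not Marco-o1's actual implementation; the function name and the example numbers are made up for illustration):

```python
import math

# UCB1: score each child branch by its average value plus an exploration
# bonus that grows for branches visited less often than the parent.
def ucb1(node_value, node_visits, parent_visits, c=1.41):
    if node_visits == 0:
        return float("inf")  # always try unvisited branches first
    return node_value / node_visits + c * math.sqrt(math.log(parent_visits) / node_visits)

# A branch with a decent average but few visits can outrank a heavily
# explored one, balancing exploitation against exploration.
scores = [ucb1(v, n, 100) for v, n in [(8, 10), (3, 50), (0, 0)]]
print(scores)
```

The search repeatedly expands the branch with the highest score, which is how MCTS trades off exploring new reasoning paths against deepening promising ones.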
@Argimko (1 month ago)
Thanks! Please tell me, what app are you using for the local chat client at 5:00?
@AICodeKing (1 month ago)
It's OpenWebUI
@Mouradif (1 month ago)
I love the part at 4:33. This AI is exactly like me
@warlockassim4240 (1 month ago)
yep
@COC-ys5ir (1 month ago)
Dude is an overthinker fr lol, just like me fr
@xzatech (1 month ago)
I love all your stuff. I don't know if it's the music, the cool voice, or the Panda, but not only is it very entertaining, your material is actually educational and helpful. I love the sarcasm too 😅
@NourishedBrains (1 month ago)
You mentioned training your own model with this. How do you get a model and train your own? I'm still a bit new to this
@Windswept7 (1 month ago)
@NakedSageAstrology Someone who bases their worldview on the absurdity of Hindu astrology, being disrespectful?! Such shock. 👁️ 👄 👁️
@Windswept7 (1 month ago)
Your best bet is using an AI model like ChatGPT or Claude to walk you through a step-by-step guide.
@NourishedBrains (1 month ago)
@NakedSageAstrology replied: "I just came here to leave a react, so you know I saw your message. Then I will intentionally ignore your question." That's the since-deleted message from NakedSageAstrology. Truly an enlightened individual. @Windswept7 thank you, I think I figured it out.
@Windswept7 (1 month ago)
@ Best of luck on your journey of discovery friend! ^^
@ricodyellow (1 month ago)
What specs do you need to run this smoothly? CPU, RAM, GPU, SSD?
@wikittywhacktv (1 month ago)
the marco polo team should get the axe
@user-me8jl2xi6v (1 month ago)
So is there actually any good LLM that I can run locally for coding? Of course I'm not looking for Sonnet quality, but something that would work well for small, easy tasks, or in a chain before passing a prompt to Sonnet.
@minimidget0073 (1 month ago)
Qwen 2.5 Coder is the best one. Even the 14B gets good results and seems to run quite well on most computers, although it takes longer to create things.
@AICodeKing (1 month ago)
Qwen 2.5 coder will be better
@danaaron5883 (1 month ago)
@@AICodeKing Hey man, if I'm using an RTX 3070, may I please know which Qwen 2.5 model I should choose? Thank you, and great video as always :)
@zznabil8109 (1 month ago)
@@danaaron5883 Qwen 2.5 Coder 3B Q8
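A rough way to sanity-check recommendations like this against a card's VRAM (a ballpark sketch, not a benchmark: the formula and the ~20% overhead factor are common rule-of-thumb assumptions, and real usage varies with context length and runtime):

```python
# Estimate VRAM for a quantized model: parameters * bytes per weight,
# plus ~20% overhead for KV cache and activations (ballpark only).
def est_vram_gb(params_billion, bits_per_weight, overhead=1.2):
    return params_billion * (bits_per_weight / 8) * overhead

print(round(est_vram_gb(3, 8), 1))   # 3B at Q8: roughly 3.6 GB, comfortable on 8 GB
print(round(est_vram_gb(14, 4), 1))  # 14B at Q4: roughly 8.4 GB, tight on 8 GB cards
```

By this estimate a 3B Q8 model fits an RTX 3070's 8 GB with room to spare, which is consistent with the recommendation above, while a 14B model would need heavier quantization or partial CPU offload.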
@josecabralcarl4522 (1 month ago)
What if I had a 36x GPU farm? Which model should I be running, DeepSeek Coder, or is there anything better?
@Cloudways-AI (1 month ago)
Who gives a toss. It's a crap model. I have yet to see something that impresses me.
@ernestuz (1 month ago)
You're being too hard on it; evidently the picture was a rectangular butterfly! Now, seriously, it's amazing to see how AI is evolving in front of our own eyes.
@keeperofthelight9681 (1 month ago)
Need a tutorial on how you got such a beautiful setup for the chat
@kenny-kvibe (1 month ago)
Wooo, very interesting! Thanks for sharing, nice video!
@jacquesdupontd (1 month ago)
Again, thank you for your great work.
@cc98-oe7ol (24 days ago)
The design behind Marco-o1 is ingenious, but it performs poorly on this apple problem. Llama 3.2 3B can handle it well, though. I've also tried a harder version of the problem: "I bought 10 apples and used two fifths of them to bake a delicious pie. After this I ate half of the pie. Yummy! But wait, I'm getting worried, since there aren't many apples left for my mum. So, can you, the almighty AI agent, tell me how many apples are left?" The correct answer is 6. I tried this on Marco-o1, but it failed. I also tried Qwen 7B, and it also failed. The mainstream model with the smallest parameters that passed without a hint is Llama-3.2-8B Instruct. With a hint, both Marco and Qwen can pass, but not Llama-3.2-3B.
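For reference, the arithmetic behind the riddle above can be checked directly (the half-eaten pie is the trap: the apples baked into the pie are already gone, so eating the pie consumes no additional whole apples):

```python
# Worked arithmetic for the apple riddle: 2/5 of the apples go into the pie,
# and eating half the pie doesn't change the count of remaining whole apples.
total_apples = 10
used_for_pie = total_apples * 2 // 5  # 4 apples baked into the pie
apples_left = total_apples - used_for_pie
print(apples_left)  # 6
```

Models that fail typically subtract again for the half pie, which is exactly the distractor the commenter built into the harder version.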