Want to HIRE us to implement AI into your Business or Workflow? Fill out this work form: td730kenue7.typeform.com/to/WndMD5l7 💗 Thank you so much for watching guys! I would highly appreciate it if you subscribe (turn on the notification bell), like, and comment what else you want to see! 📆 Book a 1-On-1 Consulting Call With Me: calendly.com/worldzofai/ai-consulting-call-1 🔥 Become a Patron (Private Discord): patreon.com/WorldofAi 🧠 Follow me on Twitter: twitter.com/intheworldofai Love y'all and have an amazing day fellas. Thank you so much guys! Love y'all!
@alals6794 · A month ago
Prior to this I was locally running qwen2.5 coder 7B bf16 and it was great, for its size. Can't wait to locally run qwen2.5 coder 32B!
@intheworldofai · A month ago
The 7B model was quite impressive for its size. This 32B model will surely blow your mind!
@lckillah · A month ago
What kind of workstation are you running to be able to run 32B? I'm new to ML and just now learning, so I'm wondering if I'd need an upgrade from my M3 MacBook Pro with 18GB.
@Kaalkian · A month ago
@lckillah M4 Pro/Max with at least 48GB, preferably 64GB. 64GB would let you run Qwen2.5 32B with a good quant and long context, and still have spare RAM to use the computer, lol. At least as of week 2 of November; things are going berserk day by day.
@DickerehikariDuck · A month ago
@Kaalkian So basically, to run this model the system needs to have at least 32GB of RAM?
@thisisashan · A month ago
As a consistent viewer, I would really love to see a "best of" update for each category instead of the constant litany of clickbait. It is getting hard for me to want to watch every "Best ever!" video, but I would really love to know what actually is the best at what: best LLM router, best anime image gen, best realistic image gen, best logic LLM, best programming LLM, etc. People do these, but I seldom get any information about the specific LLMs and fine-tunes that you cover on this channel. Just saying, since your $$$ is based on how much of each video I want to watch. Not meant as judgment. Thanks for what you do.
@intheworldofai · A month ago
Thanks for the feedback! I really appreciate your input. I’ll definitely consider providing more details on the specific LLMs and fine-tunes featured in the videos. Your support means a lot, and I’m always looking to improve the content for you all!
@Foxy_proxy · A month ago
What kind of specs do you need to run this locally?
@DanaRami93 · A month ago
Follow
@investfoxy · A month ago
What would you recommend between LM Studio and Pinokio for running LLMs?
@intheworldofai · A month ago
Microsoft's AI Toolkit - VS Code: FREE AI Extension BEATS Cursor! (GPT-4o + Sonnet 3.5 FREE!): kzbin.info/www/bejne/gIrMnJiHad6Gm9U
@delta-gg · A month ago
What hardware would you suggest for running Qwen-2.5 Coder 32B? Like what minimum graphics card, and how much system memory?
@paulyflynn · A month ago
M4 Max 128GB
@johndaily9869 · A month ago
4090
@antonivanov5782 · A month ago
RTX 3090 24GB, 32GB RAM
@alals6794 · A month ago
Actually, you can run it on an 8GB VRAM GPU, but you have to have a LOT of RAM: about 64GB of DDR4 for about $100 USD, or 64GB of DDR5 for less than $200. If you split the load between your low VRAM and massive RAM, you do need custom code to make it run. I might do a YouTube video on that and use it to launch my future AI channel.
@andrepaes3908 · A month ago
If you run it at max specs (32B parameters at fp32 + 32k context size), I estimate you need 192GB of VRAM. That means an 8x3090 Nvidia config, which is not feasible on any consumer-grade hardware. But you can run it at 8-bit integer quant with small quality loss using 48GB of VRAM: a 2x3090 config would be enough, and I estimate speed at 15 tokens/sec. The recently launched Mac mini with M4 Pro and 64GB RAM would also do it, but at half the speed of the 2x3090 config (7.5 tokens/sec).
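The estimates above can be sanity-checked with simple arithmetic: weights scale with bytes per parameter, plus a KV cache that grows with context length. This is a back-of-the-envelope sketch — the layer count and hidden size below are hypothetical stand-ins, not measured Qwen2.5-Coder dimensions, and real deployments (grouped-query attention, activation overhead) will differ.

```python
# Rough VRAM estimate for a 32B model at different weight precisions,
# plus a naive full-attention KV-cache term for a given context length.
PARAMS = 32e9  # 32 billion parameters

def weights_gb(bytes_per_param: float) -> float:
    """Memory for the weights alone, in GB."""
    return PARAMS * bytes_per_param / 1e9

def kv_cache_gb(context: int, layers: int = 64, hidden: int = 5120,
                bytes_per_val: int = 2) -> float:
    """Naive KV-cache size: 2 (K and V) x layers x hidden x context.
    layers/hidden are assumed dimensions, not the real model config."""
    return 2 * layers * hidden * context * bytes_per_val / 1e9

for name, bpp in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    total = weights_gb(bpp) + kv_cache_gb(32_768)
    print(f"{name}: ~{total:.0f} GB")
```

Under these assumptions fp32 weights alone are 128GB, which is why multi-GPU or aggressive quantization is the only way to fit a 32B model on consumer hardware.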
@WolfCat787 · A month ago
So, at 32B, how much memory is required to run this model? Can a single 4090 handle it?
@chucky_genz · A month ago
For LLMs a 4090 is OK, bro.
@eado9440 · A month ago
Epic, on par with DeepSeek 2.5 (which is amazing for the price, just a little slow, and has caching). If it's faster, and hopefully just as cheap, I might just switch over.
@intheworldofai · A month ago
Hopefully the Qwen 3 model series will improve on inference speeds.
@abc_cba · A month ago
Does anyone know how it compares to:
1) Nvidia/llama-3.1-nemotron-70b-instruct
2) Qwen-2.5-72B-Instruct
@djds4rce · A month ago
Better than Qwen 72B at coding.
@avalagum7957 · A month ago
Is it possible to use it with JetBrains IDEs? If yes, how?
@Dom-zy1qy · A month ago
Might need to buy some Tesla M40s and put together a rig. Surprisingly, there seem to be reputable listings on eBay for cheap: 24GB VRAM for ~$100? Wonder how many tokens/sec a single card would get.
@Matthew-s5x7d · A month ago
When will a local LLM work with Cline that is worth it? That's all that matters!!
@BeastModeDR614 · A month ago
Does Ollama have it?
@johndaily9869 · A month ago
Yes, as of 51 minutes ago.
@alals6794 · A month ago
Don't get the quantized version. I think Ollama only offers quantized versions, but for coding you want max precision/accuracy, aka non-quantized. Hugging Face has them without quantization, I think, and might even offer it via a free API if you register on their site. OK, you need basic Python to use it via the API, true.
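For anyone wondering what "basic Python to use it via API" looks like: below is a minimal sketch of building a request for the Hugging Face Inference API with only the standard library. The model id and whether the free tier serves this model are assumptions — check the model page — and you need your own HF access token in place of the placeholder.

```python
# Minimal sketch: build (not send) a Hugging Face Inference API request.
import json
import urllib.request

MODEL = "Qwen/Qwen2.5-Coder-32B-Instruct"  # assumed model id
API_URL = f"https://api-inference.huggingface.co/models/{MODEL}"

def build_request(prompt: str, token: str) -> urllib.request.Request:
    """Construct the POST request; call urlopen(req) on it to run inference."""
    body = json.dumps({
        "inputs": prompt,
        "parameters": {"max_new_tokens": 256},
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )

req = build_request("Write a Python hello world.", token="hf_...")
print(req.full_url)
```

To actually send it you would do `urllib.request.urlopen(req)` and parse the JSON response; the payload shape above is the standard `inputs`/`parameters` format the Inference API accepts for text generation.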
@latlov · A month ago
2:15
@mazinngostoso · A month ago
Make a video listing the 5 best AI APIs that are completely free 👍
@BeastModeDR614 · A month ago
Akash Network has a free AI API.
@williamcase426 · A month ago
AI gonna kill us
@JackQuark · 4 days ago
So Qwen 32B is still not on the level of Sonnet, but good for local private development though.
@AK-ox3mv · A month ago
The Qwen 2.5 Coder 32B Q4_K_M GGUF, which has ~99% accuracy compared to the fp16 version, is just 20GB, and any graphics card with over 20GB of VRAM, like the Nvidia 3090 from 2020, can run it at about 20-30 tokens per second, which is like online models.
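The ~20GB figure checks out with simple arithmetic: GGUF file size is roughly parameters times effective bits per weight. The bits-per-weight values below are approximate community figures for these quant formats, not exact numbers pulled from the Qwen GGUF files.

```python
# Back-of-the-envelope file size of a 32B model under common GGUF quants.
PARAMS = 32e9  # 32 billion parameters

QUANT_BITS = {          # approx effective bits per weight (assumed)
    "fp16":   16.0,
    "Q8_0":    8.5,     # 8-bit blocks plus per-block scale overhead
    "Q4_K_M":  4.8,     # mixed 4/6-bit "K" quant
}

def file_size_gb(bits_per_weight: float) -> float:
    return PARAMS * bits_per_weight / 8 / 1e9

for name, bits in QUANT_BITS.items():
    print(f"{name}: ~{file_size_gb(bits):.0f} GB")
```

Q4_K_M lands around 19GB under these assumptions, which is why it just squeezes into a 24GB 3090 with room left for the KV cache.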
@intheworldofai · A month ago
Deepseek-R1-Lite: BEST Opensource LLM EVER! Beats Claude 3.5 Sonnet + O1! - (Fully Tested): kzbin.info/www/bejne/l2eUg2N-iLiqi7c
@intheworldofai · A month ago
Gemini Exp 1114: The BEST LLM Ever! Beats o1-Preview + Claude 3.5 Sonnet!: kzbin.info/www/bejne/fpTbfqqDZc2qpZI
[Must Watch]: Qwen-2.5: The BEST Opensource LLM EVER! (Beats Llama 3.1-405B + On Par With GPT-4o): kzbin.info/www/bejne/r5WTnJp6rNCZsJIsi=Uh2eCpIWYpcY54Hq
DeepSeek-v2.5: BEST Opensource LLM! (Beats Claude, GPT-4o, & Gemini) - Full Test: kzbin.info/www/bejne/o6fTnI1nrqusbdEsi=NR9ChO50-HKJW9Cb
Bolt.New + Ollama: AI Coding Agent BEATS v0, Cursor, Bolt.New, & Cline! - 100% Local + FREE!: kzbin.info/www/bejne/kKDSoJ2Mab93g9k
@null-db6or · A month ago
You're using such an old LM Studio version.
@chucky_genz · A month ago
Ollama is still the king 😊
@intheworldofai · A month ago
Aider UPDATE: The BEST AI Coding Agent BEATS v0, Cursor, Bolt.New, & Cline!: kzbin.info/www/bejne/r6iuhat3bquIlZY
@francoisjunior. · A month ago
Mine runs slowly after installing. What does that mean?
@yzw8473 · A month ago
Task #1 is way too easy. Even qwen2.5-coder-0.5B-instruct can handle it correctly.
@Matelight_IT · A month ago
32B on 24GB VRAM works like magic, because the default quantization weighs only 20GB, so you can also add something like ~30 minutes of video subtitles (8k? tokens).
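That "8k? tokens for 30 minutes of subtitles" guess is easy to estimate: speech runs at a fairly stable words-per-minute rate, and English text averages a bit over one token per word. Both constants below are assumed averages, not measurements of any particular tokenizer or video.

```python
# Rough token budget for feeding video subtitles into spare context.
def subtitle_tokens(minutes: float, wpm: float = 150,
                    tokens_per_word: float = 1.3) -> int:
    """Estimated tokens: speaking rate (wpm) x tokens per word, both assumed."""
    return int(minutes * wpm * tokens_per_word)

print(subtitle_tokens(30))  # ~5,850 tokens for 30 minutes of speech
```

So 30 minutes of speech lands in the 6-8k token range depending on speaking rate and tokenizer, consistent with the comment's ballpark.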
@climateireland7546 · A month ago
Would 16GB cut it?
@gog2462 · A month ago
It is not working with 99% of coding apps, because it isn't instructed to use tools to do stuff and has no prompts for code checking, etc. It basically also doesn't know much about diffs and the normal stuff that's needed when you try to code with it.
@gog2462 · A month ago
Without retraining for coding with coding apps like VS Code, it is useless :) Maybe if you have a "developer" who doesn't know anything about coding, then it can be used for a snake game and nothing more.
@latlov · A month ago
How about WordPress plugin development?
@cstephen4695 · A month ago
I was able to make it draw a butterfly SVG, but it still looks ugly. 🤣🤣
@Windswept7 · A month ago
This is concerning, a warning should be given regarding the ties to the Chinese Communist Party.
@neoreign · A month ago
Whoa whoa whoa, dude, go slow man lol, not all of us are coders. You speak so fast!