Qwen-2.5 Coder 32B: BEST Opensource Coding LLM EVER! (Beats GPT-4o + On Par With Claude 3.5 Sonnet!)

  28,429 views

WorldofAI


Days ago

Comments: 64
@intheworldofai A month ago
Want to HIRE us to implement AI into your Business or Workflow? Fill out this work form: td730kenue7.typeform.com/to/WndMD5l7 💗 Thank you so much for watching, guys! I would highly appreciate it if you subscribe (turn on the notification bell), like, and comment on what else you want to see! 📆 Book a 1-On-1 Consulting Call With Me: calendly.com/worldzofai/ai-consulting-call-1 🔥 Become a Patron (Private Discord): patreon.com/WorldofAi 🧠 Follow me on Twitter: twitter.com/intheworldofai Love y'all and have an amazing day, fellas. Thank you so much, guys!
@alals6794 A month ago
Prior to this I was locally running qwen2.5 coder 7B bf16 and it was great, for its size. Can't wait to locally run qwen2.5 coder 32B!
@intheworldofai A month ago
The 7B model was quite impressive for its size. This 32B model will surely blow your mind!
@lckillah A month ago
What kind of workstation are you running to be able to run 32B? I'm new to ML and just now learning, so I'm wondering if I'd need an upgrade from an M3 18GB Mac Pro.
@Kaalkian A month ago
@@lckillah M4 Pro/Max with at least 48GB, preferably 64GB. 64GB would let you run Qwen2.5 32B with a good quant and long context, and still have spare RAM to use the computer lol. At least as of Nov week 2; things are going berserk day by day.
@DickerehikariDuck A month ago
@@Kaalkian so basically to run this model, the system needs to have at least 32MB RAM?
@thisisashan A month ago
As a consistent viewer, I would really love to see a "best of" update for each category instead of the constant litany of clickbait. It's getting hard for me to want to watch every "Best ever!" video, but I would really love to know what competes best at what. Best LLM router. Best anime image gen. Best realistic image gen. Best logic LLM. Best programming LLM. Etc. People do these, but I seldom get any information about the specific LLMs and fine-tunes that you have on this channel. Just saying, since your $$$ is based on how much of each video I want to watch. Not meant as judgment. Thanks for what you do.
@intheworldofai A month ago
Thanks for the feedback! I really appreciate your input. I’ll definitely consider providing more details on the specific LLMs and fine-tunes featured in the videos. Your support means a lot, and I’m always looking to improve the content for you all!
@Foxy_proxy A month ago
What kinda specs do you need to run this locally?
@DanaRami93 A month ago
Follow
@investfoxy A month ago
What would you recommend between LM Studio and Pinokio for running LLMs?
@intheworldofai A month ago
Microsoft's AI Toolkit - VS Code: FREE AI Extension BEATS Cursor! (GPT-4o + Sonnet 3.5 FREE!): kzbin.info/www/bejne/gIrMnJiHad6Gm9U
@delta-gg A month ago
what hardware would you suggest for running the Qwen-2.5 Coder 32B? Like what graphics card minimum, and what system memory?
@paulyflynn A month ago
M4 Max 128GB
@johndaily9869 A month ago
4090
@antonivanov5782 A month ago
RTX 3090 24GB, 32GB RAM
@alals6794 A month ago
Actually, you can run it on an 8GB VRAM GPU, but you have to have a LOT of RAM: about 64GB of DDR4 for about $100 USD, or 64GB of DDR5 for less than $200. If you split the load between your low VRAM and massive RAM, you do need custom code to make it run. I might do a YT video on that and use it to launch my future AI channel.
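The split described above can be sketched with some back-of-the-envelope math. This is only an estimate, not custom code: it assumes Qwen2.5-Coder-32B has 64 transformer layers, a ~20GB Q4_K_M GGUF (the size mentioned elsewhere in this thread), and a guessed ~1.5GB of VRAM overhead for context and runtime buffers.

```python
# Rough GPU/CPU split estimate for running a ~20GB quantized 32B model
# on an 8GB GPU by offloading only some layers to VRAM.
MODEL_GB = 20.0          # assumed Q4_K_M GGUF size
NUM_LAYERS = 64          # assumed layer count for Qwen2.5-Coder-32B
VRAM_GB = 8.0            # the GPU in the comment above
VRAM_OVERHEAD_GB = 1.5   # guessed headroom for KV cache / runtime buffers

gb_per_layer = MODEL_GB / NUM_LAYERS
gpu_layers = int((VRAM_GB - VRAM_OVERHEAD_GB) / gb_per_layer)
ram_spill_gb = MODEL_GB - gpu_layers * gb_per_layer

print(f"~{gb_per_layer:.2f} GB per layer")
print(f"offload ~{gpu_layers} layers to GPU, ~{ram_spill_gb:.2f} GB stays in system RAM")
```

In practice you would not need custom code for this: llama.cpp exposes the same idea through its `--n-gpu-layers` (`-ngl`) option, where the number above would be the value to pass.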
@andrepaes3908 A month ago
If you run it at max specs (32B parameters at fp32 + 32k context size), I estimate you need 192GB of VRAM. This means an 8x3090 Nvidia card config, which is not feasible on any consumer-grade hardware. But you can run it at 8-bit integer quant with small quality loss using 48GB of VRAM. A 2x3090 config would be enough, and I estimate speed at 15 tokens/sec. A recently launched Mac M4 Pro mini with 64GB RAM would also do it, but at half the speed of the 2x3090 config (7.5 tokens/sec).
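The weight-memory part of these estimates is easy to reproduce. A minimal sketch, assuming ~32.5B parameters for Qwen2.5-Coder-32B and counting weights only (KV cache and activations come on top, which is where the 192GB fp32 figure above gets its extra headroom):

```python
# Weight memory per precision for a ~32.5B-parameter model (weights only).
PARAMS = 32.5e9  # approximate parameter count, an assumption

def weight_gb(bits_per_param: float) -> float:
    """GB needed to hold the weights at a given bits-per-parameter."""
    return PARAMS * bits_per_param / 8 / 1e9

for name, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("Q4_K_M", 4.85)]:
    print(f"{name:>7}: ~{weight_gb(bits):.0f} GB")
```

This lines up with the thread: fp32 weights alone are ~130GB, int8 fits in ~33GB (hence 2x3090), and a ~4.85-bit quant lands near the 20GB GGUF mentioned below.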
@WolfCat787 A month ago
So, at 32B, how much memory is required to run this model? Can a single 4090 handle it?
@chucky_genz A month ago
For LLM 4090 is ok bro
@eado9440 A month ago
Epic, on par with DeepSeek 2.5 (which is amazing for the price, just a little slow, and has caching). If it's faster, and hopefully just as cheap, I might just switch over.
@intheworldofai A month ago
Hopefully the qwen 3 model series will improve on inference speeds
@abc_cba A month ago
Does anyone know how it compares to: 1) Nvidia/llama-3.1-nemotron-70b-instruct 2) Qwen-2.5-72B-Instruct
@djds4rce A month ago
Better than qwen 72b at coding
@avalagum7957 A month ago
Is it possible to use it with JetBrains IDEs? If yes, how?
@Dom-zy1qy A month ago
Might need to buy some Tesla M40s and put together a rig. Surprisingly, there seem to be reputable listings on eBay for cheap. 24GB VRAM for ~$100? Wonder how many tokens/sec a single card would get.
@Matthew-s5x7d A month ago
When will a local LLM that's worth it work with Cline? That's all that matters!!
@BeastModeDR614 A month ago
Does Ollama have it?
@johndaily9869 A month ago
Yes, as of 51 minutes ago.
@alals6794 A month ago
Don't get the quantized version. I think Ollama only offers quantized versions, but for coding you want max precision/accuracy, aka non-quantized. Hugging Face has them without quantization, I think, and might even offer it via a free API if you register on their site. OK, you need basic Python to use it via the API, true.
@latlov A month ago
2:15
@mazinngostoso A month ago
Make a video listing the 5 best AI APIs that are completely free 👍
@BeastModeDR614 A month ago
Akash network has a free AI API
@williamcase426 A month ago
AI gonna kill us
@JackQuark 4 days ago
So Qwen 32B is still not on the level of Sonnet, but it's good for local private development though.
@AK-ox3mv A month ago
Qwen 2.5 Coder 32B Q4_K_M GGUF, which has 99% accuracy compared to the fp16 version, is just 20GB, and any graphics card with over 20GB of VRAM, like the Nvidia 3090 from 2020, can run it at about 20~30 tokens per second, which is like the online models.
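The 20~30 tok/s figure can be sanity-checked with the common rule of thumb that single-stream decoding is memory-bandwidth-bound: every generated token has to read (roughly) all the weights once. A rough sketch, taking the RTX 3090's 936 GB/s spec-sheet bandwidth and the 20GB quant size, with a guessed ~50% real-world efficiency:

```python
# Bandwidth-bound decoding estimate: tokens/sec ≈ bandwidth / bytes per token.
RTX3090_BW_GBS = 936   # RTX 3090 memory bandwidth, GB/s (spec sheet)
MODEL_GB = 20          # Q4_K_M weights, read roughly once per generated token

theoretical_tps = RTX3090_BW_GBS / MODEL_GB
realistic_tps = theoretical_tps * 0.5  # assumed ~50% efficiency in practice

print(f"theoretical ceiling: ~{theoretical_tps:.0f} tok/s")
print(f"realistic estimate:  ~{realistic_tps:.0f} tok/s")
```

The realistic estimate lands inside the 20~30 tok/s range quoted above, which is why a quant that fits entirely in VRAM feels close to hosted-model speeds.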
@intheworldofai A month ago
Deepseek-R1-Lite: BEST Opensource LLM EVER! Beats Claude 3.5 Sonnet + O1! - (Fully Tested): kzbin.info/www/bejne/l2eUg2N-iLiqi7c
@intheworldofai A month ago
Gemini Exp 1114: The BEST LLM Ever! Beats o1-Preview + Claude 3.5 Sonnet!: kzbin.info/www/bejne/fpTbfqqDZc2qpZI
@intheworldofai 29 days ago
Athene-v2 72B: NEW Opensource LLM Beats Sonnet & GPT-4o! (Free API): kzbin.info/www/bejne/sHWwf4Bnq8eAhbs
@intheworldofai A month ago
[Must Watch]:
Qwen-2.5: The BEST Opensource LLM EVER! (Beats Llama 3.1-405B + On Par With GPT-4o): kzbin.info/www/bejne/r5WTnJp6rNCZsJIsi=Uh2eCpIWYpcY54Hq
DeepSeek-v2.5: BEST Opensource LLM! (Beats Claude, GPT-4o, & Gemini) - Full Test: kzbin.info/www/bejne/o6fTnI1nrqusbdEsi=NR9ChO50-HKJW9Cb
Bolt.New + Ollama: AI Coding Agent BEATS v0, Cursor, Bolt.New, & Cline! - 100% Local + FREE!: kzbin.info/www/bejne/kKDSoJ2Mab93g9k
@null-db6or A month ago
You're using such an old LM Studio version.
@chucky_genz A month ago
Ollama is still the king 😊
@intheworldofai A month ago
Aider UPDATE: The BEST AI Coding Agent BEATS v0, Cursor, Bolt.New, & Cline!: kzbin.info/www/bejne/r6iuhat3bquIlZY
@francoisjunior. A month ago
Mine runs slowly after installing it. What does that mean?
@yzw8473 A month ago
Task #1 is way too easy. Even qwen2.5-coder-0.5B-instruct can get it right.
@Matelight_IT A month ago
32B on 24GB VRAM works like magic, because the default quantization weighs only 20GB, so you can also fit something like ~30 minutes of video subtitles (8k? tokens).
@climateireland7546 A month ago
Would 16GB cut it?
@gog2462 A month ago
It doesn't work with 99% of coding apps, because it isn't instructed to use tools and has no prompts for code checking, etc... Basically it also doesn't know much about diffs and the normal stuff that's needed when you try to code with it.
@gog2462 A month ago
Without retraining for coding with coding apps like VS Code... it's useless :) Or maybe if you have a "developer" who knows nothing about coding, then it can be used for a snake game and nothing more.
@latlov A month ago
How about WordPress plugins development?
@cstephen4695 A month ago
I was able to make it draw a butterfly SVG, but it still looks ugly. 🤣🤣
@Windswept7 A month ago
This is concerning, a warning should be given regarding the ties to the Chinese Communist Party.
@neoreign A month ago
Wow wow wow, dude, go slow man lol, not all of us are coders. You speak so fast!
@SantiagoAyalef-f6u A month ago
You can slow down the video, bro.