🔥Qwen2.5 Coder 32B Instruct - Best Coding Model To-Date - Install Locally - kzbin.info/www/bejne/qn7HYXmZhbiYn5Ysi=BET-0lYt68gUO25I
@mostafamostafa-fi7kr · 12 days ago
I downloaded qwen2.5-coder-32b-instruct-q4_0.gguf through LM Studio. How do I get Ollama to run it? I don't know whether I have to move the model somewhere else or just tell Ollama where it is; it's sitting in my C:\Users\user\.cache\lm-studio\models\Qwen\Qwen2.5-Coder-32B-Instruct-GGUF folder.
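(A sketch of one way to do this, assuming the standard Ollama GGUF-import workflow: you don't have to move the file, you point a Modelfile at it and register it under a local name. The name qwen2.5-coder-local below is just a placeholder, and the path is the LM Studio cache folder from the comment above, with "user" standing in for the real Windows username.)

```
# Modelfile — points Ollama at the GGUF that LM Studio already downloaded
# (swap "user" for your actual Windows username)
FROM C:\Users\user\.cache\lm-studio\models\Qwen\Qwen2.5-Coder-32B-Instruct-GGUF\qwen2.5-coder-32b-instruct-q4_0.gguf

# Then, in a terminal opened in the folder containing this Modelfile:
#   ollama create qwen2.5-coder-local -f Modelfile
#   ollama run qwen2.5-coder-local
```

Alternatively, if the model is published in the Ollama library, `ollama pull qwen2.5-coder:32b` should fetch its own copy directly, at the cost of downloading the weights a second time.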
@Vlknbstnci-oj5em · 5 days ago
Qwen2.5 Coder 32B is also fine-tuned for other tasks like translation. I don't know why, but the Coder model was better for me than Qwen2.5 32B — absolutely amazing. In general, whatever you suggest is at a very good level, because you install and try everything yourself. Thank you for your efforts.
@andrepaes3908 · 12 days ago
Thank you very much for this review! Good to know quantization hasn't affected the model's quality much. I will start using it for my coding endeavours :)
@DigitalDesignET · 12 days ago
Nice info!!! What context window are you using? Also, do you think the 14B version is as good?
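(Not sure what the video used, but for anyone running it through Ollama: the context window is controlled per-model by the num_ctx parameter and defaults to a fairly small value. A minimal sketch, assuming the qwen2.5-coder:32b tag from the Ollama library; the 32768 figure is just an example value, and larger windows need noticeably more VRAM.)

```
# Modelfile — same import idea as above, but with an explicit context window
FROM qwen2.5-coder:32b
PARAMETER num_ctx 32768

# Or set it on the fly inside an interactive `ollama run` session:
#   /set parameter num_ctx 32768
```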
@bamit1979 · 12 days ago
Hehe... it does run on my 3060, but only at 3-4 t/s.
@metaltech3944 · 12 days ago
Wow! That is very impressive. I wonder how much better the unquantized version is.