Btw QwQ can totally do multi-turn. Set it to 32k context and 16k output tokens so its thinking isn't cut off before it's done. llama.cpp has many more settings.
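For example, with llama.cpp's llama-cli (the model filename is just a placeholder for whatever GGUF you grabbed):

# -c sets the context window, -n the max tokens to generate, -cnv enables chat mode
llama-cli -m ./qwq-32b-preview-q4_k_m.gguf -c 32768 -n 16384 -cnv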
@volkovolko 9 days ago
Oh okay, I didn't know that. I thought it couldn't do multi-turn because it's single-turn only in the QwQ Space ^^ Thanks a lot for the clarification!
@UCs6ktlulE5BEeb3vBBOu6DQ 9 days ago
A Tetris game is often my coding test, and they all struggle with it.
@volkovolko 9 days ago
Yes, Tetris is quite difficult for LLMs. Only Claude 3.5 Sonnet and Qwen2.5 Coder 32B got it right in my tests. Even GPT-4o didn't get it in my test (but I think that's more down to luck).
@SoM3KiK 12 days ago
Hey! Would it work with a 3060 Ti and 32 GB of RAM?
@hatnis 11 days ago
I mean, you can't fit the required 24 GB of VRAM on your graphics card, but hey, there's only one way to find out if it works, right?
@SoM3KiK 11 days ago
@hatnis well, it was free to ask 😅
@volkovolko 10 days ago
Yes, but you will have to offload a lot to your CPU/RAM. It will run pretty slowly, but it will work 👍
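If you're on llama.cpp, the knob for that is --n-gpu-layers (-ngl); the right number depends on your card, so treat this as a rough sketch rather than a recommendation:

# keep roughly 25 layers on an 8 GB GPU, the rest goes to CPU/RAM (tune for your VRAM)
llama-cli -m ./qwq-32b-preview-q4_k_m.gguf -c 32768 -ngl 25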
@volkovolko 10 days ago
In the video, I ran it in my 24 GB of VRAM. I think it was the Q4_K_M quant.
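If anyone wants to grab that exact quant, something like this should work with huggingface-cli (the filename follows bartowski's usual naming, so double-check it on the repo page):

# download only the Q4_K_M file instead of the whole repo
huggingface-cli download bartowski/QwQ-32B-Preview-GGUF QwQ-32B-Preview-Q4_K_M.gguf --local-dir .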
@Timely-ud4rm 10 days ago
I was able to get it working on my new Mac mini, the base M4 Pro model, using the IQ3_XS quantization from bartowski's QwQ-32B-Preview-GGUF repo. It was the only one I could download, since it's 13.71 GB. Note that because I'm using a Mac mini, Apple's RAM is unified, so my 24 GB is shared between the GPU and CPU. If I had spent an extra $300 on top of the $1.4k I paid for the M4 Pro model, I could have loaded the max-quantization model, but I don't really do AI locally since I mostly use online AI services. I hope this helps!
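One trick worth mentioning for unified memory: by default macOS caps how much of it the GPU can wire, and that cap can reportedly be raised with a sysctl. Treat this as an unverified sketch from community reports (the setting resets on reboot):

# let the GPU wire up to ~20 GB of the 24 GB unified memory (resets on reboot)
sudo sysctl iogpu.wired_limit_mb=20480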