Thanks brother, I also have a 6750 XT. I installed Llama 3.2 11B today and it was very slow, and I realized it was using 100% CPU. After this fix it's around 50% for both CPU and GPU. Thanks a lot.
@TigerTriangleTech 4 days ago
Good deal! Glad to hear it!
@Karmezz a month ago
This video was easy to understand and professional. Worked on my RX 6600 with no problems. Keep it up!
@TigerTriangleTech a month ago
Good deal! Thanks for your feedback!
@nome_de_usuario 12 days ago
You are awesome, you saved my life and my graduation project and a lot of money LOL
@TigerTriangleTech 12 days ago
Glad to hear it! Best of luck with your project!
@Senorkawai 2 months ago
Thank you, I followed the steps in the video and finally Ollama ran on my RX 6600 GPU. I just had to restart for it to work (before the restart, it was using the CPU). Now I will check whether I can use Docker and WebUI for a better interface. Thanks again!
@TigerTriangleTech 2 months ago
Great! I'm glad it worked for you. I also have videos on Docker and Web UI if you need help with that. Thanks for watching!
@MichelBertrand 2 months ago
I have been wanting to run Ollama on my 6750 XT ever since AMD announced support for their GPUs. THANK YOU! Just installed it and it seems to work perfectly. So much faster than running on my 13700K!
@TigerTriangleTech 2 months ago
I'm glad to hear it. Thanks for watching!
@anjinho_ruindade_pura a month ago
Great video! It worked perfectly
@TigerTriangleTech a month ago
Glad it helped! Thanks for the feedback!
@abdullahal-jauni3013 a month ago
Perfect explanation video; it worked on my AMD 5700 XT. Thank you!
@TigerTriangleTech a month ago
You're welcome! Thanks for the feedback!
@iamnobody-001 2 months ago
Hi, can you make a similar video for a discrete NVIDIA GPU? I have an old GTX 1050 and Ollama is using the CPU only. I want to know how to enable the GPU, and also how to disable it to see the difference. Thanks.
@TigerTriangleTech 2 months ago
Hi there! Actually, the video I made was for a discrete GPU, but an AMD one rather than NVIDIA. You shouldn't have to install any workaround: Ollama supports NVIDIA GPUs with compute capability 5.0+, and the GTX 1050 has compute capability 6.1, so you should be in good shape. If it's not working correctly, you might want to update your drivers. To force CPU usage, set the environment variable CUDA_VISIBLE_DEVICES to -1; to enable GPU usage again, just remove that variable and it should use your GPU. For more info see this page: github.com/ollama/ollama/blob/main/docs/gpu.md
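The CPU/GPU toggle described above can be sketched as a short shell session (assumption: you restart the Ollama server after changing the variable so it re-detects devices):

```shell
# Hide all NVIDIA devices so Ollama falls back to the CPU.
export CUDA_VISIBLE_DEVICES=-1
echo "GPU hidden: CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
# ...restart the server here (e.g. `ollama serve`) and run your benchmark...

# Remove the variable to let Ollama see the GPU again.
unset CUDA_VISIBLE_DEVICES
echo "GPU visible: CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES:-unset}"
```

Comparing tokens-per-second between the two runs makes the difference easy to see.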
@ThermoIdiot a month ago
Can it run on an RX 6500M laptop GPU?
@TigerTriangleTech 29 days ago
Hi, you will need to determine the LLVM target of your GPU and then use the matching ROCBLAS packages. I would recommend looking in your log file for that target, which should start with gfx. In your case it might be gfx1033, but I'm not 100% sure; maybe someone else can chime in here.
Here is the place in the video that shows it in the Ollama log file: kzbin.info/www/bejne/fV7Ooamiq7CAZ8k
Here is the latest release (currently) of the ROCBLAS packages: github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU/releases/tag/v0.6.1.2
Here is a video I made showing how to look at the log file: kzbin.info/www/bejne/pZ-Wo6evhbyGnaM
Hope that helps!
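A quick way to pull that gfx target out of the log is a one-line grep. The snippet below uses a sample log line so it is self-contained; the real file is typically %LOCALAPPDATA%\Ollama\server.log on Windows (my assumption for a default install), and the exact message wording varies by Ollama version:

```shell
# Find the LLVM target (gfx...) that Ollama detected for the GPU.
# 'sample' stands in for a line from server.log; pipe the real file instead.
sample='level=INFO msg="amdgpu is supported" gpu=0 gpu_type=gfx1033'
echo "$sample" | grep -Eo 'gfx[0-9a-f]+'
```

Against the real log you would run something like `grep -Eo 'gfx[0-9a-f]+' server.log | sort -u`.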
@Byrex_Lorence 2 months ago
It works! Thank you so much. You made a very good video.
@TigerTriangleTech 2 months ago
Great! Thank you for watching and for your feedback!
@coolcha 14 days ago
Can this work with integrated graphics?
@TigerTriangleTech 12 days ago
Like the AMD APUs? I'm not sure. Even if it were supported, I doubt it could perform very well. Too many limitations.
@DiegoThoms 16 days ago
Thanks! I made it work with my 6600!
@TigerTriangleTech 16 days ago
Very cool! Thanks for the feedback.
@louisgamercool2324 a month ago
Hi, I checked your other video on how to tell whether the GPU is being used, and Ollama says it is. Moreover, Adrenalin shows my 5700 XT at 99% usage. However, I am getting slower output compared to the CPU. Is there a fix for this? Thanks
@louisgamercool2324 a month ago
Otherwise, great video, thanks a lot.
@TigerTriangleTech a month ago
I have run into this as well. You might try a smaller model (3B parameters, like Llama 3.2, or maybe Phi). I'm wondering if it's a bottleneck transferring data between the CPU and GPU, and it could be a matter of needing more RAM. I address this in my video on performance: kzbin.info/www/bejne/omHXlGWKiN2ehZo
@TigerTriangleTech a month ago
Thanks for your feedback!
@vikoscharger2427 20 days ago
Thank you! It works with my 5700 XT! Night and day compared to running on my CPU!
@TigerTriangleTech 19 days ago
Great! Thanks for the feedback!
@kevinmiole a month ago
But I have an RX 6600; can I use Ollama?
@TigerTriangleTech a month ago
It should work. That GPU has an LLVM target of gfx1032.
rocm.docs.amd.com/projects/install-on-windows/en/latest/reference/system-requirements.html#supported-gpus-win
github.com/likelovewant/ollama-for-amd#windows
@varunaeeriyaulla a month ago
Yes, you can. Follow the steps.
@joshuaosei5628 26 days ago
This worked on RX 5500M! Thanks
@TigerTriangleTech 25 days ago
Good deal. You're welcome!
@RomanMondragon-lh5fh 21 days ago
Worked on the RX 6700 XT, thank you so much!
@TigerTriangleTech 19 days ago
You're welcome! Glad it worked!
@iljaruppel3047 a month ago
This is the first video that is really simple, well explained, and actually works. Thank you👌 But I have the problem that Mistral is much slower with the GPU than with the CPU. I use an RX 6600 XT. I don't know what the problem is.
@TigerTriangleTech a month ago
Thanks, I'm glad it was helpful. The problem you are having usually points to a resource shortage. Switching to smaller models like Llama 3.2 3B (or models around that size) might work better; just keep in mind it's a trade-off, because you do lose some quality. If you haven't yet watched my video on performance, you might want to check it out, as it demonstrates a similar problem I had when using Llama 3 on one of my systems: kzbin.info/www/bejne/omHXlGWKiN2ehZo
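To see why the smaller model helps, here is a rough back-of-the-envelope check of whether a quantized model fits in VRAM. The ~0.6 GB per billion parameters figure (Q4 quantization plus some KV-cache overhead) is my own approximation, not an official Ollama number:

```shell
# Rough VRAM estimate for a Q4-quantized model (assumed ~0.6 GB per billion params).
estimate() {
  awk -v p="$1" 'BEGIN { printf "%.1f GB\n", p * 0.6 }'
}
echo "Llama 3.2 3B: about $(estimate 3)"   # comfortable fit on an 8 GB card
echo "Mistral 7B:   about $(estimate 7)"   # much tighter on an 8 GB RX 6600 XT
```

When the model plus context overflows VRAM, layers spill to system RAM and the GPU spends its time waiting on transfers, which matches the "busy GPU but slow output" symptom described above.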