Thanks for sharing this. How does the performance compare to GPU-based machines?
@EduardoRodriguezRocks 21 days ago
Isn't this the default behaviour when using llama.cpp?
@armanshirzad4409 6 months ago
Great, thanks! Quick question: can I use this to run a downloaded LLM instead of accessing Hugging Face?
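For anyone wondering the same thing, here is a minimal sketch of pointing an inference library at a model file you already downloaded. It assumes the `llama-cpp-python` package and a GGUF file at a hypothetical local path; neither is shown in the video, and the model filename is just an example.

```python
from pathlib import Path

# Hypothetical path to a GGUF model you downloaded yourself.
model_path = Path("./models/mistral-7b-instruct.Q4_K_M.gguf")

if model_path.exists():
    # llama-cpp-python loads a local GGUF file directly;
    # no Hugging Face access is needed at inference time.
    from llama_cpp import Llama

    llm = Llama(model_path=str(model_path))
    out = llm("Q: Name a planet. A:", max_tokens=8)
    print(out["choices"][0]["text"])
else:
    print(f"Model not found at {model_path}; download a GGUF file there first.")
```

The key point is that the model path is a plain local file, so the same approach works fully offline once the weights are on disk.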
@rakeshreddy2791 8 months ago
Hey Rishab, great video. Can we fine-tune the model using local-llm?
@Heet10 8 months ago
Hello sir, thank you for reading my message. I just finished my undergraduate degree (BSc IT) and I'm interested in the cloud computing field. As a fresher, should I start preparing for DevOps or cloud engineering to land a cloud computing job as soon as possible? Any career-growth advice would also help.
@rishabhjain5100 7 months ago
Is it only for inferencing?
@armantech5926 6 months ago
Yes, I thought the same: this isn't really a full local LLM workflow, it's just inference.
@bradkeane1246 3 months ago
Can you talk to the models in English and have them answer you? You didn't even demonstrate this.