It's a cool setup for running RAG locally as well - nice going.
@oscarandresdiazmorales7180 · 11 days ago
Excellent video! I loved how you explained the process of deploying Ollama on Kubernetes. Thanks for sharing your knowledge!
@Techonsapevole · 12 days ago
I use Docker Compose, but I was curious about k8s.
@zulhilmizainudin · 9 days ago
Looking forward to the next video!
@mathisve · 3 days ago
You can find the video here: kzbin.info/www/bejne/j6DQoGV6o7FshKM
@zulhilmizainudin · 2 days ago
@@mathisve thanks!
@MuhammadRehanAbbasi-j5w · 10 days ago
Would really like a video on how to add a GPU to this, both locally and in the cloud.
@mathisve · 9 days ago
Stay tuned for that video! I'm working on it as we speak; it should be out later this week!
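In the meantime, here's a rough sketch of what the Kubernetes side looks like. This is my own example, not from the upcoming video: it assumes the NVIDIA device plugin is installed on the cluster (so nvidia.com/gpu shows up as a schedulable resource), and the pod name and image tag are just placeholders.

# A rough sketch, not from the video: requesting one NVIDIA GPU for the
# Ollama container. Assumes the NVIDIA device plugin is running on the
# cluster so nvidia.com/gpu is a schedulable resource.
apiVersion: v1
kind: Pod
metadata:
  name: ollama-gpu
spec:
  containers:
    - name: ollama
      image: ollama/ollama:latest
      ports:
        - containerPort: 11434
      resources:
        limits:
          # GPUs can only be set in limits; Kubernetes doesn't overcommit them.
          nvidia.com/gpu: 1

Note that the pod will simply stay Pending if no node can satisfy the GPU request.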
@HosseinOjvar · 11 days ago
Helpful tutorial, thank you!
@samson-olusegun · 13 days ago
Would using a k8s job to make the pull API call suffice?
@mathisve · 12 days ago
Yes and no! On paper, if you only had one pod, this could work. But the pull API call needs to be made every time a new Ollama pod is scheduled (unless you're using a PVC mounted to the pod to store the model). As far as I'm aware, it's not possible to trigger a Kubernetes job on the creation of a new pod without using an operator. There's a sketch of an alternative below.
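One way around this, sketched roughly here (my own example, not from the video): make the pull call from inside each pod with a postStart lifecycle hook, so it runs on every new replica without a separate Job or operator. The model name and the sleep duration are placeholder assumptions.

# A rough sketch, not from the video: a postStart hook pulls the model
# from inside every new Ollama pod, so no separate Job or operator is
# needed. Model name and sleep duration are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
        - name: ollama
          image: ollama/ollama:latest
          ports:
            - containerPort: 11434
          lifecycle:
            postStart:
              exec:
                # Give the server a moment to start, then pull the model.
                command: ["/bin/sh", "-c", "sleep 5 && ollama pull llama3"]

And if you mount a PVC at /root/.ollama (where Ollama caches its models), the pull becomes a no-op after the first download.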
@Sentientforce · 12 days ago
Can you please advise how to run Ollama in a k3d cluster on WSL2 (Windows 11) with Docker Desktop? The issue I'm not able to solve is making the GPU visible in a node.
@unclesam007 · 6 days ago
And here I can't even deploy a simple Laravel app on k8s 🤒
@mathisve · 2 days ago
Do you need help with deploying Laravel on Kubernetes?