In this episode we learn how to scale pods with the Horizontal Pod Autoscaler. To scale your cluster nodes, check out the Cluster Autoscaler 👉🏽 kzbin.info/www/bejne/oH6WZ4BpbrJ0aas
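A Horizontal Pod Autoscaler like the one in the video can be declared with a manifest along these lines (a minimal sketch, not the exact manifest from the video; the Deployment name `my-app` and the targets are placeholders):

```yaml
# Minimal HorizontalPodAutoscaler sketch (autoscaling/v2).
# "my-app" and the replica/CPU targets are illustrative placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80  # scale out when average CPU exceeds 80% of the pods' CPU *requests*
```

Note that the utilization percentage is measured against the container's CPU request, which is why setting sensible requests matters for the HPA to work at all.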
@ibrahemazad a year ago
the best video I ever watched on the internet explaining HPA
@tiagomedeiros7935 3 years ago
I read many articles on many sites and watched many videos to understand the pod autoscaler, but all this time, I just needed to watch this video. Thank you.
@yovangrbovich3577 4 years ago
Great content as usual, and the production quality is constantly getting better too! Awesome
@emergirie 2 years ago
Nice discovery, I like the way you explain things, dude, thanks for the effort. I subscribed and will let other people know about you.
@prabhatnagpal a year ago
Thank you so much for making this concept easy to understand. Actually, I was also struggling to set the values of CPU requests and limits in the deployment, because in my Kubernetes cluster, even when the replicas increase, all pods keep running with the same load: it doesn't distribute the load evenly among the pods so that it comes back down, and I have seen bad scaling behaviour in my cluster. I have no clue what is happening.
@DevsLikeUs 4 years ago
Not having to provision infrastructure is awesome, thank you for the great video.
@5happy1 4 years ago
Such a well-done video! Can't believe you haven't gone huge yet. I don't usually comment on YouTube but I felt compelled this time. Looking forward to going through more of your library of content as I get more into Kubernetes and DevOps in general.
@Gandolfof 4 years ago
Thank you very much! Please make a video on kubernetes e2e testing.
@vuhaiang2077 2 years ago
Congrats on the excellent and well-explained video. However, in your example at 7:39, the only resource scaled is CPU, not memory (after scaling up to 4 replicas, the memory of each pod remains unchanged). Is this expected? And if so, how can we actually scale based on memory consumed?
@MarcelDempers 2 years ago
The Kubernetes HPA supports memory as well. In the demo I used CPU as it's the most common one
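For reference, a memory target uses the same `autoscaling/v2` metrics list as the CPU example; a sketch with placeholder names:

```yaml
# Memory-based HPA sketch (autoscaling/v2); "my-app" and the targets are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-memory-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 75  # percent of the pods' memory *requests*
```

One caveat worth knowing: many runtimes hold on to memory once allocated, so memory utilization may never drop after a spike and the HPA may never scale back down, which is another reason CPU is the more common signal.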
@vuhaiang2077 2 years ago
@@MarcelDempers I understand. Thank you very much
@dangvu534 2 years ago
Clearly explained and really useful for beginners, excellent work! Could you kindly answer my small question: how can we estimate the resource requests and limits for specific pods?
@MarcelDempers 2 years ago
The Vertical Pod Autoscaler in recommendation mode can make recommendations on request values. There's a video on that on the channel. The latest monitoring video will also help 💪🏽
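The recommendation-only setup mentioned here looks roughly like this (a sketch; it assumes the VPA components are installed in the cluster, and the target name `my-app` is a placeholder):

```yaml
# VerticalPodAutoscaler in recommendation-only mode (sketch).
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Off"  # only compute recommendations; never evict or resize pods
```

With `updateMode: "Off"`, `kubectl describe vpa my-app-vpa` shows the recommended CPU and memory requests without the VPA ever touching the running pods, which makes it a safe way to estimate request values.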
@torbendury4374 4 years ago
Again, great content delivered in an easy way and also easy to reproduce. Thanks!
@AmjadW. 3 years ago
You're awesome! kudos to your efforts
@janco333 3 years ago
How do you select a good minimum pod count for the HPA? I see this constant oscillation of it scaling up and down. Should I set my minimum above my normal load?
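Oscillation like this can usually be damped without raising the minimum, via the HPA's `behavior` field (available in `autoscaling/v2`); a sketch of the relevant spec fragment, with illustrative values:

```yaml
# HPA spec fragment: slow down scale-down to reduce flapping (values are illustrative).
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300  # act on the highest recommendation of the last 5 min
    policies:
    - type: Pods
      value: 1           # remove at most one pod...
      periodSeconds: 60  # ...per minute
```

The stabilization window makes the HPA ignore short dips in load, so replicas are only removed once the lower demand has persisted for the whole window.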
@elmeroranchero 4 years ago
Amazing, thank you very much, loved the editing and the concise way of explaining
@parasprince2001 4 years ago
Can you provide some sort of breakdown of which autoscaling API is supported in which k8s version?
@MarcelDempers 4 years ago
I don't see this formally documented anywhere, however, you can run `kubectl api-versions` in your cluster to see what API versions it supports. I would also recommend looking at the HPA documentation of Kubernetes to see features of what's coming in future versions
@nikoladacic9800 4 years ago
Good lecture. Good presentation. Interesting, fast, and to the point. Good job man!!! Keep it coming and thanks. Deserved my SUB definitely. :)
@inf222 3 years ago
Such great work deserves a like and a comment :)
@ankitguhe5015 3 years ago
Absolutely useful video, you saved my job 🤣 thanks a ton mate!
@vinayaknawale1015 4 years ago
Nicely explained. Can you make a video on EKS with cluster autoscaler + HPA + ingress?
@MarcelDempers 4 years ago
Thank you 💪🏽 You should be able to follow the sequence: EKS 👉🏽kzbin.info/www/bejne/h4XLkpeJaLiin8k CA 👉🏽kzbin.info/www/bejne/oH6WZ4BpbrJ0aas HPA 👉🏽kzbin.info/www/bejne/fJenemNuqMylj7s Ingress 👉🏽kzbin.info/www/bejne/q2qXaXaLh7F3gKM
@gouterelo 4 years ago
In an HA cluster, the metrics server needs another modification... but I don't remember where...
@creative-commons-videos 3 years ago
Hey there, can you please tell me how I can use the NGINX ingress in my cluster? I am using IBM Cloud for the cluster, but the problem is that I am currently on the Lite plan, which does not allow creating a LoadBalancer, so how can I deploy my website with a domain name on IBM?
@MarcelDempers 3 years ago
It will be an issue I'm afraid. Kubernetes allows NodePort, but it's not recommended for production workloads and will give you other issues, like a restricted port range that does not include 80 or 443. It also makes it hard to run your pods behind multiple nodes. If it's your personal website, I would highly recommend Linode or DigitalOcean. Kubernetes is cheap to run there, their UI and UX are brilliant, and an LB is around $10 a month too.
@creative-commons-videos 3 years ago
@@MarcelDempers thanks buddy
@derekreed6798 a year ago
Nice vid
@sachin-sachdeva 4 years ago
Thanks Marcel. This is all load based - is there a way to define it time based, e.g. if there is a heavy-lifting job that runs on my cluster between 2-4 AM and that I cannot afford to miss?
@MarcelDempers 4 years ago
Maybe check out a project called KEDA. It may support exactly what you need 💪🏽
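KEDA does ship a cron scaler that fits the 2-4 AM scenario above; a hedged sketch (assumes KEDA is installed, and the deployment name, timezone, and replica counts are placeholders):

```yaml
# KEDA ScaledObject using the cron scaler: hold extra replicas during a nightly window.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: nightly-job-scaler
spec:
  scaleTargetRef:
    name: my-app                    # Deployment to scale (placeholder)
  minReplicaCount: 1                # baseline outside the window
  triggers:
  - type: cron
    metadata:
      timezone: Australia/Sydney    # any tz database name
      start: 0 2 * * *              # scale up at 02:00
      end: 0 4 * * *                # scale back down at 04:00
      desiredReplicas: "5"
```

Between `start` and `end`, KEDA keeps at least `desiredReplicas` running, so the nightly job always has capacity regardless of measured load.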
@maratbiriushev7870 3 years ago
Thank you!
@imranarshad221 4 years ago
Thanks for the great demo. Quick question: how come the pod could go to 1493m CPU when we allocated 500m? Isn't that a hard limit?
@MarcelDempers 4 years ago
Thank you 💪🏽, no, the 500m is the requested value, which is used for scheduling
@imranarshad221 4 years ago
@@MarcelDempers Thank you, makes sense. If I only need one pod, is there a way to put a hard limit so a single pod doesn't eat up all the memory?
@MarcelDempers 4 years ago
@@imranarshad221 Sure there is. Just remember that if that pod hits its limit, it will be terminated by the OOM killer and restarted: kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits
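The request/limit distinction discussed in this thread looks like this in a container spec (a sketch; the values are illustrative):

```yaml
# Container "resources" fragment (values are illustrative).
resources:
  requests:
    cpu: 500m        # used by the scheduler, and the 100% baseline for HPA CPU utilization
    memory: 256Mi
  limits:
    cpu: "2"         # CPU over the limit is throttled, not killed
    memory: 512Mi    # exceeding the memory limit gets the container OOM-killed
```

This is why a pod with a 500m CPU request can burst well past 500m: only the `limits` values are enforced at runtime, and CPU and memory are enforced differently (throttling vs termination).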
@yuryzinovyev6186 3 years ago
Thank you so much!
@martinzen 4 years ago
Absolutely killer video my man, much appreciated. Noob question: does the metrics server require a separate node for a production deployment? Or does it just run in the same k8s service process, the way a plugin would? It would be useful to have a better idea of how this maps to actual cloud infra in terms of VMs/nodes, etc.
@MarcelDempers 4 years ago
Thanks for the kind words 💪🏽 For production, the metrics server can run on any node where it can be scheduled. Many cloud providers have the metrics server already installed in the kube-system namespace