That was the cluster autoscaler. Check out the Pod autoscaler 👉🏽 kzbin.info/www/bejne/fJenemNuqMylj7s
@김도형-g2i • 4 years ago
This is just pure gold on YouTube. I feel like I have found a goldmine.
@carloslfu • 4 years ago
Me too
@jaymo107 • 4 years ago
This couldn't have come at a better time. We had pods getting evicted due to insufficient memory and couldn't figure out why; this helped a lot. Thank you!
@remus-tomsa • 9 months ago
Great video man, very easy to understand and follow! Congrats!
@testuserselvaraj • 4 years ago
I like the way you present it; you make it simple to grasp and understand :)
@CareerDelTorro • 4 years ago
Sweet stuff! Awesome editing, very pleasant to watch :)
@rahulmarkonda • 2 years ago
Holy smokes… I learnt a lot in 12 mins.
@minhthinhhuynhle9103 • 2 years ago
As usual, damn good content, Mr Dempers.
@partykingduh • 4 years ago
I’ve been looking for someone like you for months. Loved the presentation!
@rudypieplenbosch6752 • 8 months ago
This is great info, very well explained, thank you.
@cristinasanchezjusticia1006 • 19 days ago
Great video
@rajan8dec • 1 month ago
Thanks for the great video; it was really helpful for a quick refresh. Correct me if I am wrong, but the Cluster Autoscaler does not use the Metrics Server, not even through the API server. The Metrics Server is used by the HPA, the VPA, and the top command, whereas the Cluster Autoscaler focuses on the cluster's scheduling state, specifically pods stuck in Pending due to insufficient resources. It relies on scheduling information instead of actual usage (the Metrics Server is real time). If a pod is pending because of insufficient resources, the Cluster Autoscaler determines whether adding a new node would resolve the issue, using the autoscaling group configuration. Actual CPU and memory usage are irrelevant because the decision is based on requested resources (as defined in the pod spec) and the scheduling state. That means the Cluster Autoscaler will work regardless of whether the Metrics Server is deployed in the control plane or not.
@MarcelDempers • 1 month ago
Yes you are correct, cluster autoscaler purely adds nodes to satisfy resource requests during scheduling 💪🏽
@rajan8dec • 1 month ago
@@MarcelDempers Thanks for the clarification.
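The point made in this thread — that the cluster autoscaler reacts to *requested* resources, not live metrics — can be illustrated with a minimal pod spec (the name, image, and values here are illustrative, not from the video). The scheduler and the cluster autoscaler act on the `resources.requests` block below:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx        # example image
    resources:
      requests:
        cpu: "500m"     # the scheduler sums these across pods on a node;
        memory: "256Mi" # if no node can fit them, the pod goes Pending
                        # and the cluster autoscaler may add a node
```

If this pod stays Pending for lack of capacity, the autoscaler checks whether a new node from the node group would fit it — regardless of how much CPU the existing pods are actually consuming.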
@harikrishna3258 • 2 years ago
Superb. Very concise and helpful. Thank you for sharing these insights
@ericansah525 • 1 year ago
Amazing video with great practical examples.
@vmalj89 • 4 years ago
Excellent explanation. Quick, crisp and neat.
@narigina6414 • 1 year ago
Great explanation, thank you
@ThotaMadhuSudhanRao • 4 years ago
Good one. Thanks for your effort to make a quality tutorial.
@exit-zero • 4 years ago
Awesome video as always
@cristiancontreras2924 • 4 years ago
Awesome video, greetings from Chile.
@flenoir34 • 3 years ago
This is really interesting. I also found that pods can be evicted due to a lack of ephemeral storage. This is related to the OS disk of my node instance, which only has 30GB. I was wondering if there's a way to handle the storage parameter to avoid pods being evicted? Thanks for these videos and a very nice channel.
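For the ephemeral-storage question above: Kubernetes does support `ephemeral-storage` as a schedulable resource, so a container can declare requests and limits for node disk the same way it does for CPU and memory. A sketch, with illustrative values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: storage-demo              # hypothetical name
spec:
  containers:
  - name: app
    image: nginx                  # example image
    resources:
      requests:
        ephemeral-storage: "1Gi"  # scheduler accounts for this on the node's disk
      limits:
        ephemeral-storage: "2Gi"  # exceeding this gets the pod evicted
```

Setting the request lets the scheduler avoid packing disk-hungry pods onto a small OS disk; the limit makes eviction predictable per pod rather than triggered by overall node disk pressure.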
@saransabarishs4382 • 3 years ago
Beautifully explained. Thanks, bro!!
@XEQTIONRZ • 4 years ago
Great video Sir. Very informative.
@BobWaist • 1 year ago
Excellent video; you really do a good job of explaining things in a crisp and concise way. One question remains, however: you describe that a certain part of the CPU gets allocated for each of the pods, although it isn't necessarily in use. Doesn't this totally break the idea of scalability, because now each pod has completely overprovisioned resources (i.e., they are allocated but idle)? I somehow assumed that this would be part of the autoscaling, which vertically scales the containers depending on the load — or was this part of your video and I missed it?
@bhdr111 • 2 years ago
Great tutorial, thank you. The music/ambiance is sometimes distracting, but still okay.
@Jadeish01 • 4 years ago
This is beyond helpful. Thank you!
@yovangrbovich3577 • 4 years ago
Keen for the next vid! Thanks Marcel
@f.5528 • 7 months ago
Very interesting video. Thank you.
@martinzen • 4 years ago
Excellent video my man, thanks a lot
@vishnukr6375 • 2 years ago
You are really great :), and thanks for the information. Please keep going!
@karthikrajashekaran • 2 years ago
I have K8s on EKS. Do you have steps to implement autoscaling in a Kubernetes cluster?
@robinranabhat3125 • 1 year ago
Great video :) I was just curious. The typical use case I imagine is for nodes to scale up or down fully automatically based on the number of requests. But here, we are manually changing the number of pods.
@mayureshpatilvlogs • 3 years ago
Excellent explanation, keep it up. My doubt is: if the autoscaler has to scale down a node once we have sufficient resources, what will happen to the pods which are already in a running state? Thanks
@MarcelDempers • 3 years ago
Thanks for the kind words. The cluster autoscaler will only scale down if the node is not utilised. Kubernetes will not interrupt pods to scale nodes down.
@mayureshpatilvlogs • 3 years ago
@@MarcelDempers Thanks for the reply, it really means a lot. But let's say we have one pod which is serving a request and the controller knows that the node is underutilized. Will that pod die, or will it wait until it finishes the request?
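On the scale-down question: the cluster autoscaler drains nodes via the eviction API, which honours PodDisruptionBudgets and each pod's graceful-termination period, so one way to protect serving pods during a drain is a PDB like the sketch below (names and labels are illustrative):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb           # hypothetical name
spec:
  minAvailable: 1         # never evict below one ready replica
  selector:
    matchLabels:
      app: demo-app       # must match the pods you want protected
```

Evicted pods still receive SIGTERM and get their `terminationGracePeriodSeconds` to finish in-flight requests before being killed.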
@szymonf5554 • 4 years ago
Thanks for another awesome video
@devops_scholar • 4 years ago
Hi Marcel, firstly, thanks for the great content. I have seen your video twice but am confused about one thing: when you scaled to 12 pods, you mentioned that your computer has 4 cores, all exhausted with almost 8 pods running. So how come the autoscaler would add 1 more node to your K8s cluster when the machine has no CPU left?
@MarcelDempers • 4 years ago
A cluster autoscaler will only add a node when the total requested CPU exceeds the available CPU in that node. Pods would usually wait in a 'Pending' state until either CPU is freed up or a node becomes available that satisfies the CPU request for that pod. Hope that helps 💪🏽
@hobbes6832 • 3 years ago
I was wondering why Kubernetes didn't go the CNI/CSI route of abstracting away the platform-specific aspects of node addition... Also, it seems there's no support in Kubernetes for Intel RSD PODM-based dynamic node composition. Great vid!
@mateustanaka682 • 4 years ago
Congrats, excellent video. I have a question about CPU units: in your example you said 4 cores equals 4096m. Shouldn't it be 4000m? Do we measure millicpu the same way as memory?
@MarcelDempers • 4 years ago
Yes you're right, my bad. It should be 4 cores = 4000m :)
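To make the correction concrete: CPU is measured in decimal millicores (1000m = 1 core, so 4 cores = 4000m, not 4096m), while memory is measured in bytes, typically with binary suffixes. A resources block with illustrative values:

```yaml
    resources:
      requests:
        cpu: "500m"      # 500 millicores = half a core; "0.5" is equivalent
        memory: "128Mi"  # memory uses byte quantities (Ki, Mi, Gi), not millis
      limits:
        cpu: "1"         # 1 full core = 1000m
        memory: "256Mi"
```

The 4096 figure would only apply to memory-style binary units (4Gi = 4096Mi); CPU quantities are plain decimal.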
@flenoir34 • 4 years ago
Very interesting. As I try to use memory limits, I sometimes get "Out of memory" and my pod process is killed. I thought it would trigger another node instead. Should I remove the limits?
@MarcelDempers • 4 years ago
Limits are more of a last-resort protection. You should ideally use resource request values and set the pod autoscaler around those. Check out my pod autoscaler video for more info about scaling pods. The node autoscaler is only triggered when there is no space to schedule another pod.
@flenoir34 • 4 years ago
That DevOps Guy, yes, will check this. Thanks a lot. Love your YouTube channel, all my support!
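The advice in this thread — size the requests and scale pods around them — might look like the `autoscaling/v2` HorizontalPodAutoscaler sketch below (names and numbers are illustrative). Note that `averageUtilization` is a percentage of the pods' CPU *request*, which is why getting the request right matters:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-hpa             # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app           # hypothetical deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when pods average >70% of their CPU request
```

The HPA adds pods under load; once pods can no longer be scheduled, the cluster autoscaler adds nodes — the two work in tandem, and memory limits remain only the OOM backstop.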
@jameeappiskey5830 • 3 years ago
You are a legend and you must know it
@ict7334 • 2 years ago
Hi there. This is a very informative and comprehensive video, thanks for that. I was wondering something you probably have an answer to: for cluster autoscaling, how much time do you reckon it would take from the point where you ran out of space on your node to the point where the new node is operational? And for pod autoscaling, from running out of space to a new operational pod?
@MarcelDempers • 2 years ago
This depends; there's a thing called scaling lag. 1) Metrics are delayed by 30 seconds. 2) The time for the scheduler to determine a new node is needed can take a few minutes. 3) New node scaling can take 3-5 minutes. 4) Pod creation time depends on what you are running.
@darkwhite2247 • 4 years ago
Why are you assigning 1 core to two processes? Does assigning 500 millicores to a process have some advantage over allowing the process to use the entire core?
@MarcelDempers • 4 years ago
There is no advantage to using an entire core over splitting them unless you know the required consumption of your workload. It really depends on understanding how much CPU your pod needs. If you don't know, I recommend you start with as little as possible and use your monitoring to figure out the best recommended CPU. There is a good app called Goldilocks which is great at finding recommended CPU settings based on actual consumption over time: github.com/FairwindsOps/goldilocks
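Goldilocks builds on the Vertical Pod Autoscaler's recommender. Assuming the VPA components are installed in the cluster, a recommendation-only VPA object could look like this sketch (names are illustrative):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: demo-vpa          # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app        # hypothetical workload to observe
  updatePolicy:
    updateMode: "Off"     # recommend only; never evict or resize pods
```

With `updateMode: "Off"` the recommender watches actual usage and publishes suggested request values in the object's status, which you can then copy into your manifests by hand.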
@fdghjvgf • 4 years ago
Loved it! :)
@AllanBallan • 4 years ago
Awesome topic Marcel! Keep 'em comin'. Have you tried K9s to visualize stuff in the cluster? Was thinking of giving it a spin myself...
@szymonf5554 • 4 years ago
I can't imagine working without K9s, but it's still useful to get a grasp of the kubectl commands.
@raheelmasood8656 • 9 months ago
If I have to understand what is really happening behind the scenes, this is the channel I have to come to.
@georgezviadgoglodze7810 • 3 years ago
Awesome
@TrueTravellingCoder • 4 years ago
I am the first one to like the video :)
@whooo71 • 4 years ago
If you're talking about scaling, then you should also talk about billing usage. Scaling is good, but it can also be bad for your wallet.
@IoneHouten • 4 years ago
I am using Kubernetes 1.19 and I get an error like this: "unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)". How do I solve it?