How do you scale your apps and #Kubernetes clusters?
@adikztv6371 11 months ago
I don't
@mateuszszczecinski8241 2 years ago
Thank you.
@DevOpsToolkit 2 years ago
Thanks a ton.
@viniciosantos 2 years ago
Great video as usual! This channel is very underrated
@DevOpsToolkit 2 years ago
I'm terrible at marketing :(
@kemibrianolimba682 1 year ago
Brilliant... That was a great explanation. Keep up the great work!
@romankrut7038 1 year ago
Hey, I want to leave my feedback. Your videos are very useful and the explanations are very good. Keep going, man!
@DevOpsToolkit 1 year ago
Thanks
@akhil-ph 2 years ago
Thank you for this awesome video 👍, we all would like to see a video of HPA combined with Prometheus.
@hamstar7031 2 years ago
Great video, both as a teaching resource and as a refresher for me on HPA and VPA! I would like to learn and understand how to utilize metrics from Prometheus as another means for the autoscaling use case.
@DevOpsToolkit 2 years ago
It's coming... :)
@DrorNir 2 years ago
@@DevOpsToolkit can't wait! I need it for a project like right now
@DevOpsToolkit 2 years ago
@@DrorNir If everything goes as planned, that one should go live the third Monday from now.
@hiteshsmit 8 months ago
Is the video made/available yet, the one on using Prometheus for custom metric monitoring and using it for HPA?
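For everyone asking about HPA with Prometheus: below is a rough sketch (not from the video) of what such an HPA could look like once something like prometheus-adapter exposes the metric through the custom metrics API. The metric name, target value, and Deployment name are illustrative assumptions.
```yaml
# Illustrative only: an HPA scaling on a custom metric, assuming
# prometheus-adapter (or a similar component) exposes
# "http_requests_per_second" via the custom metrics API.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second
        target:
          type: AverageValue
          averageValue: "100"   # scale so each Pod averages ~100 req/s
```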
@adiavanth369 2 years ago
Very nice presentation as always. Looking forward to learning about HPA using custom metrics from Prometheus.
@Levyy1988 2 years ago
Great video as always! I think it would also be useful to introduce the KEDA autoscaler along with Prometheus-based HPA. I am using KEDA and it is working great (in my case with RabbitMQ), since I can scale from zero pods, which is a huge cost saving.
@arns9006 2 years ago
We do KEDA + Karpenter... Magic
@DevOpsToolkit 2 years ago
Yeah! KEDA is awesome.
@johnw.8782 2 years ago
Can I ask if you're using KEDA with GKE? I've had issues with intermittent metrics server availability. I love KEDA and want to use it, but it's def a blocker.
@DevOpsToolkit 2 years ago
@@johnw.8782 I haven't used it in GKE just yet. So far, most of my experience with KEDA is on other providers.
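As a rough illustration of the scale-from-zero setup mentioned above, here is a minimal KEDA ScaledObject sketch using a Prometheus trigger (the RabbitMQ case would use KEDA's RabbitMQ scaler instead). The Deployment name, Prometheus address, query, and threshold are assumptions, not values from the video.
```yaml
# Minimal KEDA sketch: scale a Deployment between 0 and 10 replicas
# based on a Prometheus query. All names and numbers are illustrative.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app
spec:
  scaleTargetRef:
    name: my-app            # the Deployment to scale
  minReplicaCount: 0        # scale to zero when there is no load
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus-server.monitoring:9090
        query: sum(rate(http_requests_total{app="my-app"}[2m]))
        threshold: "100"
```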
@iposipos9342 7 months ago
Thanks for your videos. Yes, I would like to know how to scale pods with HPA based on metrics in Prometheus. Thank you very much.
@DevOpsToolkit 7 months ago
I'm planning to release a video that explores different types of scaling on July 8.
@javisartdesign 2 years ago
Many thanks! I had never even heard of the VerticalPodAutoscaler! There are many ways to describe scaling for applications; I also like the Scale Cube, which approaches it more from the point of view of how microservices can be scaled.
@ioannisgko 2 years ago
Thank you for the video!!! Question: how do we horizontally autoscale databases in Kubernetes? What are the challenges and what would be the proper way to overcome them? (Maybe an idea for a future video)
@DevOpsToolkit 2 years ago
Adding it to the TODO list for a future video... :) Until then... If designed well, a DB should come with an operator that takes care of common operations, including scaling, and all you really have to do is change the number of replicas (unless you enable autoscaling, which is still not a common option).
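As a purely hypothetical illustration of the point above: scaling a database managed by an operator usually comes down to changing one field on the operator's custom resource. The kind and field names below are made up; the actual names depend on the operator (CloudNativePG, for example, uses an "instances" field).
```yaml
# Hypothetical operator-managed database resource; not a real API.
apiVersion: example.com/v1
kind: Database
metadata:
  name: my-db
spec:
  replicas: 3   # change this number, or let the operator's autoscaling (if any) manage it
```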
@bules12 10 months ago
Gist is not well documented in the description! Can you fix it please?
@DevOpsToolkit 10 months ago
Sorry for that, and thanks for letting me know. It should be fixed now.
@bules12 10 months ago
@@DevOpsToolkit thanks for the quick response, ur the best!
@acartag7 2 years ago
I started using Jsonnet and it has been a pain to use, with a steep learning curve. A few months later we moved to ytt as it was easier to manage, but now we are going with Kustomize for all new projects. Jsonnet is really powerful, but when you bring someone new to the team and show them Jsonnet, they can easily feel overwhelmed.
@DevOpsToolkit 2 years ago
That's my main issue with Jsonnet. It's too easy to over-complicate it and confuse everyone.
@salborough2 10 months ago
Hi Viktor, thanks for a great video :) Just a question from my side - do you know how GitOps (i.e. with Argo CD) handles autoscaling? I assume the replica count in the deployment YAML will no longer conform to the declared YAML in an autoscaling setup.
@DevOpsToolkit 10 months ago
Yeah. You should remove hard-coded replicas or nodes when using scalers. That's not directly related to GitOps. Argo CD and similar tools only sync manifests into clusters. If you do specify both replicas and a scaler, the former will be overwritten by the latter.
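A minimal sketch of that pattern, with illustrative names: the Deployment omits spec.replicas entirely, and the HPA owns the replica count, so Argo CD has nothing to fight over.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  # no "replicas" field here; the HPA below manages it
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: ghcr.io/example/my-app:1.0.0
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```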
@salborough2 10 months ago
@@DevOpsToolkit thanks so much Viktor - ahh ok, gotcha, I didn't realise I could leave out the replica count in the deployment manifest - thanks :) I'm going to look into this more. I'm also going to check out your videos on Argo Events and Argo Rollouts to see how to deal with progressing a release through different environments while still using GitOps.
@VinothKumar-ej2jc 2 years ago
When scale-in/down happens, how does k8s make sure there is no traffic being served by those pods? Is there a chance that users experience interruption due to the scale-in of pods?
@DevOpsToolkit 2 years ago
When Kubernetes decides to kill a Pod, among other things it does the following:
1. Stop all new incoming traffic from going to that Pod.
2. Send the SIGTERM signal to the processes inside the containers in that Pod.
3. Wait until the processes shut down in response to SIGTERM or it times out (the timeout is configurable).
4. Destroy the Pod.
Assuming that SIGTERM handling is implemented in the app, all existing requests will be processed before the Pod is shut down. SIGTERM itself is not specific to Kubernetes but a mechanism that applies to any Linux process (it might work on Windows as well, but I'm not familiar with it enough to confirm that). That means that if an app implements "best practices" that are independent of Kubernetes, there should be no issues when shutting down Pods. As a side note, the same process is used when upgrading the app (spin up new Pods and shut down the old ones), so you need to think about those things even if you never scale down.
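For reference, here is a sketch (with illustrative values) of where the timeout from step 3 is configured, and how an optional preStop hook fits into the same sequence.
```yaml
# Kubernetes sends SIGTERM, waits up to terminationGracePeriodSeconds,
# and only then sends SIGKILL. A preStop hook runs before SIGTERM.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      terminationGracePeriodSeconds: 60   # default is 30 seconds
      containers:
        - name: my-app
          image: ghcr.io/example/my-app:1.0.0
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 5"]  # give endpoint removal time to propagate
```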
@VinothKumar-ej2jc 2 years ago
May I know why you have deployment.yaml and ingress.yaml in the overlay directory even though you don't have any changes/patches to them? You could keep them in the base directory itself, right?
@VinothKumar-ej2jc 2 years ago
Also, how is a ReplicaSet different from HPA?
@DevOpsToolkit 2 years ago
You're right. I should have placed those inside the base directory. I copied those files from another demo and failed to adapt them for this one.
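A sketch of the layout being described, with illustrative file names: shared manifests live in the base, and an overlay only references the base plus whatever is specific to that environment.
```yaml
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - ingress.yaml
---
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
  - hpa.yaml                     # resources that exist only in this overlay
patches:
  - path: deployment-patch.yaml  # environment-specific changes to base resources
```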
@jarodmoser5588 2 years ago
Great video. Would it be possible to run the VPA in recommendation-only mode while relying on the HPA to ensure scaling of Pods? Can that combination be used to fine-tune the autoscaling policies?
@DevOpsToolkit 2 years ago
It could, but I would not rely on that. VPA recommendations might easily be incorrect due to HPA activities. I recommend using Prometheus instead.
@PhilLee1969 10 months ago
Great video - as a complete beginner to Kubernetes it has helped me to understand what I want to do with a particular project that I'm working on. I currently have a long-running process that runs under Python but in a single thread. Up until now I've scaled vertically by moving to more powerful machines, but also horizontally by running additional copies of the process on different processor cores and then dividing the clients up geographically. If I've understood correctly, with Kubernetes it looks like I could run one copy but get it to spread across multiple cores or even multiple servers as required, whilst to my clients it just looks like one machine? Do I need to do anything to my process to ready it for deployment on Kubernetes, or is it just a case of setting the resource limits and scaling parameters?
@DevOpsToolkit 10 months ago
Assuming that it is a stateless application, all you have to do is define an HPA that will scale it for you or, if scaling is not frequent, manually set the number of replicas in the Deployment.
@PhilLee1969 10 months ago
It's stateless (I think), as nothing is left once the application exits other than some log files. I'm definitely going to have to put together a cluster and have a go. Thanks again!
@CrashTheGooner 2 years ago
Master ❤️
@naresing 2 years ago
Hey Viktor, this video is very helpful. Please make a video on HPA with the Prometheus monitoring solution.
@DevOpsToolkit 2 years ago
Already added to my TODO list :)
@sahilbhawke605 2 years ago
Hey, you're doing a great job. I'm always waiting for your videos and for the notification bell to buzz ❤️ Just a question: for HPA with respect to memory, is there any information we can use for reference? That would be helpful. Also, can we use both (CPU and memory) simultaneously in our HPA manifest?
@DevOpsToolkit 2 years ago
Don't use VPA together with HPA. They are not aware of each other and might perform conflicting actions. If you're wondering how to deduce how much memory to assign to a Deployment managed by HPA, explore Prometheus. It should give you the info about memory utilization or anything else.
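To the "both simultaneously" part of the question above: an HPA manifest can list several metrics, and it scales to the highest replica count any of them proposes. A sketch with illustrative names and values:
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
    - type: Resource          # CPU and memory can be combined in one HPA
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```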
@sahilbhawke605 2 years ago
@@DevOpsToolkit Sure, thanks for the information 💯 Can you please come up with a more detailed video on cluster autoscaling in a GKE cluster and how it works, like PodDisruptionBudget and the safe-to-evict annotation and how to use them the correct way? That would be a great help 💯
@DevOpsToolkit 2 years ago
@@sahilbhawke605 Adding it to my TODO list... :)
@sahilbhawke605 2 years ago
@@DevOpsToolkit Sure, I'll be eagerly waiting ;)... Thanks for being such a great sport by sharing your valuable 💯 knowledge with us through your videos. Always waiting for your new videos #devops 💯
@allengooch7 2 years ago
Good stuff. I believe the units for describing CPU limits should be called millicores instead of milliseconds, however.
@arns9006 2 years ago
whatever you say, based on your avatar, you're right
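For reference, a sketch with illustrative values showing how millicores appear in a manifest: "500m" means 500 millicores, i.e. half a core, while memory is expressed in bytes (Mi, Gi) rather than time units.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: ghcr.io/example/my-app:1.0.0
      resources:
        requests:
          cpu: 250m        # 250 millicores, a quarter of a core
          memory: 256Mi
        limits:
          cpu: 500m        # 500 millicores, half a core
          memory: 512Mi
```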
@Martin-sr8yb 2 years ago
I would like to see a future video talking about metrics for auto-scaling like what you mentioned in the video (Prometheus, Kibana).
@DevOpsToolkit 2 years ago
It's coming... :)
@unixbashscript9586 2 years ago
Hi Viktor, thanks for this! I'd also really appreciate a video on how to do HPA based on metrics from Prometheus. Edit: I also have a question about Karpenter. Does it scale both horizontally and vertically?
@DevOpsToolkit 2 years ago
Great! Adding it to the TODO list... :)
@Levyy1988 2 years ago
Karpenter scales horizontally, but it has the advantage that it will add a node that can handle all of your Pods in a pending state, rather than just randomly adding a node to one of your autoscaling groups that might be too big for your current needs.
@unixbashscript9586 2 years ago
@@Levyy1988 hey, thanks
@DevOpsToolkit 2 years ago
@@Levyy1988 Exactly. That's why I said in the video that vertical scaling of nodes is typically combined with horizontal scaling (new node, new size). Karpenter is a much better option than the "original" Cluster Autoscaler used in EKS. It provides functionality similar to GKE Autopilot.
@snehotoshbanerjee1938 9 months ago
Does Kubernetes support scaling to zero?
@DevOpsToolkit 9 months ago
It does, but that is rarely what you want. There's almost always something you need to run.
@snehotoshbanerjee1938 9 months ago
@@DevOpsToolkit The question is about running an LLM app, which is costly to run 24/7.
@DevOpsToolkit 9 months ago
If that is the only thing you're running in that cluster, the answer is yes. You can scale down worker nodes. However, control plane nodes will have to keep running. Actually, now that I think of it, why don't you just create a cluster when you need it and destroy it when you don't?
@swapnilshingote8773 2 years ago
First to comment...yooo
@owenzmortgage8273 1 year ago
Do a demo, don't just talk about it; everybody can google 100 answers on this topic. Show people what you did in an enterprise environment, what you did in the real world. Don't just read the white paper.
@DevOpsToolkit 1 year ago
Have you seen any other video on this channel? Almost all of them include demos, with a small percentage being about how something works (like this one). If anything, I might need to do fewer demos.