[ Kube 35 ] Using Horizontal Pod Autoscaler in Kubernetes

38,340 views

Just me and Opensource

5 years ago

In this video, I will show you how to set up and use horizontal pod autoscaling in your Kubernetes cluster.
You will have to deploy a metrics server first, without which autoscaling won't work. Then you have to set resource requests and limits in your pod definition.
In the demo, I will set up a pod autoscaler that automatically scales out when CPU utilization rises beyond a set threshold and scales back in when CPU utilization drops.
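A minimal sketch of the kind of HPA used in the demo. The nginx target name, the 20% CPU threshold and the 1-5 replica range match what is discussed in the comments below, but treat the exact values as illustrative:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx          # the deployment being autoscaled
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 20

The same thing can be created imperatively with: kubectl autoscale deployment nginx --min=1 --max=5 --cpu-percent=20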
Github: github.com/justmeandopensourc...
Metrics server
github.com/kubernetes-incubat...
Kubernetes Docs:
kubernetes.io/docs/tasks/run-...
For any questions/issues/feedback, please leave me a comment and I will get back to you at the earliest. If you liked the video, please share it with your friends and don't forget to subscribe to my channel.
If you wish to support me:
www.paypal.com/cgi-bin/webscr...
Hope you found this video informative and useful. Thanks for watching this video.

Comments: 315
@omarsh5269 1 year ago
Your videos are awesome and to the point, thank you so much!
@justmeandopensource 1 year ago
Hi Omar, thanks for watching.
@HeyMani92 3 years ago
Great Session Venkat bro
@justmeandopensource 3 years ago
Thanks for watching.
@rampradyumnarampalli 3 years ago
Excellent Demo Venkat
@justmeandopensource 3 years ago
Thanks for watching Ram
@sitapriyanka5452 3 years ago
Hi sir, nice video. It gave me a very good idea about HPA. A minor thing I noticed is that you were mentioning scaling out and scaling down. I think it is scaling in.
@justmeandopensource 3 years ago
Hi Sita, thanks for watching. You are right. Either scaling up/down or scaling out/in. Just language but you got the point. Cheers.
@milindchavan007 4 years ago
Awesome!!! Really this is very helpful.
@justmeandopensource 4 years ago
Hi Milind, thanks for watching.
@robertocalderon3376 3 years ago
Well done...well explained...good job!
@justmeandopensource 3 years ago
Hi Roberto, thanks for watching. Cheers.
@gazellemanavipour5595 1 year ago
Very well explained. Thank you!
@justmeandopensource 1 year ago
Hi Gazelle, Thanks for watching.
@anandjoshi8331 3 years ago
Very cool. Thank you so much for this!
@justmeandopensource 3 years ago
Hi Anand, thanks for watching.
@harishdara7600 4 years ago
Awesome video!!!! Helped a lot. Could you please do a video on affinity and anti-affinity?
@justmeandopensource 4 years ago
Hi Harish, thanks for watching. Affinity and anti-affinity is on my list. I will get that done soon. Cheers.
@RaviPrakash-rt9vx 3 years ago
Great video!!!!
@justmeandopensource 3 years ago
Hi Ravi, thanks for watching. Cheers.
@kuljeetkumar4657 3 years ago
Great video man
@justmeandopensource 3 years ago
Hi Kuljeet, thanks for watching.
@kuljeetkumar4657 3 years ago
Can we get a video on gRPC services? Or what should I explore for gRPC services, any idea?
@kuljeetkumar4657 3 years ago
For Kafka also.
@amineboumaraf4573 5 years ago
Good work, keep it up. I'm just waiting for you to make that video about installing GitLab and a runner on a Kubernetes cluster using persistent storage, like you did with Jenkins.
@justmeandopensource 5 years ago
Hi Amine, thanks for watching this video. I will definitely do a video as you requested. Currently I am enjoying my vacation and on my return I will make arrangements for this video. Thanks for your interest in these topics.
@amineboumaraf4573 5 years ago
@@justmeandopensource have a good time
@justmeandopensource 4 years ago
@@amineboumaraf4573 Thanks.
@muhamadalfatih7375 3 years ago
Hi, I couldn't find the deploy directory in the metrics-server repo. What is the path?
@deekshant56 3 years ago
Nice explanation, thanks.
@justmeandopensource 3 years ago
Hi Nishant, thanks for watching. Cheers.
@anamikakhanna8069 4 years ago
Very helpful video. Could you please help me understand: do we need to apply affinity or anti-affinity for HPA?
@chaitanyayadav5097 3 years ago
So informative! Can you do a video on Cluster Autoscaler as well?
@thesamuelseggs 2 years ago
Your video was very helpful and insightful in understanding HPA. In my own scenario, I have Prometheus already running for metrics and alerts. I'm not sure whether installing metrics server (as shown in this video) will break my current installation. I have looked up using Prometheus as the metrics server by using KEDA, but I cannot seem to get anywhere with it. Could you shed more light on using Prometheus as the metrics source for Kubernetes HPA?
@ramesh150585 3 years ago
Hi, thank you for this wonderful video. I am using a single-node cluster with microk8s. Is it possible to try this HPA exercise on a single-node cluster?
@hrvojetonkovac6519 5 years ago
Amazing channel.
@justmeandopensource 5 years ago
Hi Hrvoje, thanks for watching this video and taking time to comment. Cheers.
@agrawalbansal88 4 years ago
Hi, thanks for the video. I could not see the cooling period taking effect, as all replicas start together instead of waiting 3 minutes.
@samratvlogs9098 4 years ago
Good explanation.
@justmeandopensource 4 years ago
Hi Samrat, thanks for watching.
@sandeepmishal2376 3 years ago
Hi Venkat, thanks for sharing knowledge. Could you revisit HPA with custom and external metrics?
@naveengogu6539 3 years ago
Very helpful video. Can you please record a video on Cluster Autoscaling for a kubeadm cluster on AWS EC2?
@sergeibatiuk3468 8 months ago
Hi, at 4:50 you mention the cooling periods of 15 seconds, 3 minutes and 5 minutes. But later in the video, when you demo the live system, pods scale up/down much faster. So what are these 15 seconds / 3 minutes / 5 minutes for?
@sachasmart7139 1 year ago
Bit of a random question: I'm finding that when demand goes up on a service (say CPU) and the HPA deploys the additional pods, Traefik will sometimes route to those pods that are still being created, thereby giving me 404s or errors. Any suggestions on how to handle this? I'm assuming that the configuration should actually be on the ingress route, not the HPA, but I could be wrong. Appreciate all the help, Venkat!
@speakspectre 3 years ago
Great video, thank you! I have a question: Why do you use Linux AND Chrome?
@Praveen-xx7iy 4 years ago
Hi Venkat, thanks for the HPA in Kubernetes video. While executing the siege command you showed all 3 terminals on a single screen. May I know how that is possible in Windows?
@justmeandopensource 4 years ago
Hi Praveen, I used tmux and opened multiple panes. Most people use putty to ssh into a Linux System and then use tmux. Cheers.
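For reference, a rough sketch of that tmux workflow, using tmux's default key bindings:

tmux new -s demo        # start a named tmux session
# inside the session, with default bindings:
#   Ctrl-b %    split the current pane vertically
#   Ctrl-b "    split the current pane horizontally
#   Ctrl-b o    cycle focus between panes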
@anamikakhanna8069 4 years ago
Sometimes I find that as the load increases, the number of pods does not increase; instead the 2 pods mentioned earlier get restarted. I could see the age as 1 hour, 2 hours... it seems to be restarting.
@Siva-ur4md 5 years ago
Hi Venkat, thanks for the video, nice information. How do you use HPA with memory, connections, or request-count cases? Could you please make a video on it? It might not always be an issue with CPU; sometimes it could be memory, too many requests, or too many connections.
@justmeandopensource 5 years ago
Hi Siva, thanks for watching this video. I will surely play with other resources and make videos. Currently I am enjoying my vacation, and on my return I will pick this up. Thanks, Venkat
@sidwar 3 years ago
Getting an error when editing the deployment file and adding the resource limits and requests. I am not able to save the file, and it gives the error "[yaml: line 44: found a tab character that violates indentation, invalid character 'a' looking for beginning of value]". I did exactly what you did.
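That error means a literal tab character ended up in the manifest; YAML only allows spaces for indentation. A sketch of the resources section indented with spaces only (the 100m request matches the value used in the video; the limit is illustrative):

# under spec.template.spec.containers[] in the deployment
resources:
  requests:
    cpu: 100m        # value used in the video
  limits:
    cpu: 200m        # illustrative limit, adjust as needed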
@bharathquest 1 year ago
That was very informative! I have a question: I have a horizontal autoscaler with max 4 pods, and when there is more than 1 pod running I see the same application server log in all the pods. My scenario: my application server is JBoss 5.1. When there are 3 pods (pod1, pod2, pod3) and I access the JBoss server log file through the terminal in each of these pods, server.log is an exact replica in all 3 pods. Is this how the server log behaves when it comes to horizontal autoscaling of pods?
@casimirrex 4 years ago
Hi Venkat, have you created a custom metrics server for HPA in k8s?
@justmeandopensource 4 years ago
Hi Antony, thanks for watching this video. I have only tried CPU and memory based autoscaling. Haven't explored any custom metrics. Maybe I should do it. Let me play with it and record a video if I get anywhere with it. Thanks.
@Praveen-xx7iy 4 years ago
Hi Venkat, HPA is working fine if the pod contains a single container, but if it contains 2 containers HPA is not working. I am creating a sidecar container for log aggregation. Any idea how to make it work with 2 containers in a pod?
@justmeandopensource 4 years ago
Hi Praveen, thanks for watching this video. I didn't know that it is not working when more than one container is deployed. I will have to test this. Thanks for pointing this out.
@rossc5140 3 years ago
What Linux distro are you using? And what is that stats thing on the right for system performance?
@justmeandopensource 3 years ago
Hi Ross, the status thing on the right is the conky tool. You have a conkyrc which is the config file for conky program. You configure what needs to be displayed and start conky. I distro hop every now and then. Currently I am on Gentoo. This one in this video is probably Manjaro Linux. Cheers.
@kuljeetkumar4657 3 years ago
Hi Venkat, the pods autoscale, but requests do not get sent to the newer ones. Any idea?
@justmeandopensource 3 years ago
That's strange. So your pods are autoscaling, I mean more replicas of your pods are getting created as the load increases, and the new ones are not taking any requests? That won't be possible. All of them should be behind a service which load balances the incoming requests among those replicas. How do you know that they are not taking requests?
@HeyMani92 3 years ago
Thanks a lot
@justmeandopensource 3 years ago
Most welcome
@sureshuppapalli 2 years ago
How do I bring down HPA? Please tell me. Thank you.
@windowsrefund 3 years ago
Awesome job. FYI, kubectl run is not going to create a deployment on recent versions of Kubernetes. Instead, you need to create or apply with a valid yaml spec.
@justmeandopensource 3 years ago
Hi, thanks for watching and for the tip on kubectl run. Yes, I am aware of it, but this video was done a very long time ago. Cheers.
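For newer Kubernetes versions, the equivalent imperative command is:

kubectl create deployment nginx --image=nginx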
@prasadshivan79 4 years ago
Hi Venkat, I just came across this video while searching for autoscaling of nodes. Is it possible to have autoscaling of nodes in an on-premises cluster?
@justmeandopensource 4 years ago
Hi, thanks for watching. There is no ready-to-use solution, but you can achieve it through scripts and automation. I have seen it as a GitHub project, but can't remember it now.
@prasadshivan79 4 years ago
@@justmeandopensource Thank you for the quick response.
@justmeandopensource 4 years ago
@@prasadshivan79 you are welcome.
@manmohanmirkar1 2 years ago
Hello Venkat, Will it be possible for you to demo on Karpenter autoscaler in EKS?
@justmeandopensource 2 years ago
Hi, thanks for watching. I will add it to my list, but honestly I don't think I can get to it anytime soon.
@hesammohammadi2960 2 years ago
You're awesome bro. Would you please make a video about horizontal autoscaling with custom metrics and the Prometheus adapter?
@justmeandopensource 2 years ago
Hi Hesam, thanks for watching. I have that (autoscaling on custom metrics) in my list. I will get to it at some point though. Cheers.
@hesammohammadi2960 2 years ago
@@justmeandopensource thanks a lot bro
@justmeandopensource 2 years ago
@@hesammohammadi2960 no worries.
@zulhilmizainudin 4 years ago
Hi Venkat. Questions: 1. How do I determine the CPU limit and request correctly? Why did you pick 100m in your example? Does 100 millicpu mean 0.1% of cluster CPU (0.1% of the combined CPU from the worker 1 and worker 2 nodes)? Would be great if you could make a video about this. 2. From my understanding, this HPA will only autoscale the number of pods on a limited number of physical nodes. How about autoscaling the physical nodes themselves when the pods need more host resources? Will HPA be able to handle this situation as well? Thanks.
@justmeandopensource 4 years ago
Hi Zulhilmi, thanks for watching. 1. As the application developer, you should have an idea of how much your application will need in terms of memory and CPU. You would also have some form of monitoring to track the CPU and memory utilization of all your applications running in the Kubernetes cluster. Start with something low for CPU and memory, monitor the utilization, and adjust accordingly. 2. HPA is for pod autoscaling. As long as the underlying nodes have enough resources, the HPA can scale up the number of containers. What you want is node autoscaling. This can be set up easily if you are running your cluster in a cloud environment: you just set up an autoscaling policy saying, if the cluster reaches 80% CPU utilization or 80% memory utilization, add another worker node to the cluster.
@zulhilmizainudin 4 years ago
@@justmeandopensource thank you
@justmeandopensource 4 years ago
@@zulhilmizainudin no worries.
@palanisamy-dl9qe 3 years ago
Hi buddy, thanks for the video. Do you have the same kind of video for memory utilization?
@justmeandopensource 3 years ago
Thanks for watching. I can't recall if I actually did one for memory utilization or if I covered that as part of another video, but the concept should be pretty much identical. Cheers.
@palanisamy-dl9qe 3 years ago
@@justmeandopensource Just one quick question about memory limits. Let's say I have:
- type: Resource
  resource:
    name: memory
    targetAverageUtilization: 80
With the above scenario (with memory), will my pod count increase or decrease? And how can we test this in a real case? For CPU utilization we can increase the load using siege, but how can we test memory utilization?
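With a memory target of 80%, replicas are added while the average utilization stays above the target and removed when it falls back below it. One hedged way to generate memory load for such a test, assuming the polinux/stress image that the Kubernetes docs use for memory examples (the image and sizes are assumptions, not from the video):

kubectl run memhog --image=polinux/stress --restart=Never \
  --command -- stress --vm 1 --vm-bytes 200M --vm-hang 1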
@NomadicGujju 4 years ago
Hi, does HPA kill pods that are still serving requests during scale-down?
@justmeandopensource 4 years ago
Hi Nirav, thanks for watching. I believe the Kubernetes controllers will wait for any ongoing requests to complete/drain before taking a pod down. Cheers.
@NomadicGujju 4 years ago
@@justmeandopensource Thanks for replying. And thanks for sharing great videos.
@justmeandopensource 4 years ago
@@NomadicGujju You are welcome. Cheers.
@ameyamarathe6613 4 years ago
Great video!!!! How is the terminal in the background fetching live details of "kubectl get all"?
@justmeandopensource 4 years ago
Hi Ameya, thanks for watching. Not sure which terminal you are referring to. The background is my desktop, and in the right corner I am running a conky script that shows my system details like cpu/mem/disk. I use the watch command to continuously watch the output of any command. E.g. "watch -x kubectl get all" will run this command every 2 seconds. Cheers.
@ameyamarathe6613 4 years ago
@@justmeandopensource Awesome got it thanks.
@justmeandopensource 4 years ago
@@ameyamarathe6613 You are welcome.
@oyee3900 4 years ago
Hi Venkat, thanks a lot for your videos. Can you please do a video on vertical pod autoscaling (vpa) in Kubernetes...
@justmeandopensource 4 years ago
Hi Oyebimpe, thanks for watching this video. I haven't looked at VPA before but after you mentioned it, I had a look and it looks interesting to try. I will definitely explore that and make a video. I already have 5 videos waiting to be released and I will add this VPA video to the list. My videos are released once every Monday. So you can expect the VPA video in 5 weeks time. Unfortunately you will have to wait until then. But anyways many thanks for suggesting that topic. Thanks.
@oyee3900 4 years ago
OK Venkat, thanks in advance!
@justmeandopensource 4 years ago
No worries. You are welcome.
@himansusekharmishra5114 2 years ago
Thank you for this video. I want to configure HPA for another namespace; do I have to install metrics server for that namespace? If yes, how do I install metrics server for a namespace other than default?
@justmeandopensource 2 years ago
Hi, I believe you don't need a metrics server per namespace. Just one metrics server per cluster is all you need. Cheers.
@himansusekharmishra5114 2 years ago
@@justmeandopensource Thank you again. But after applying the same, I am getting an error of "failed to get cpu utilization: missing request for cpu", it is showing unknown for "resource cpu on pods (as a percentage of request): <unknown>/80%", and it's happening in another namespace.
@himansusekharmishra5114 2 years ago
Hi, I got the solution; actually there was an indentation error. Lastly, I want to ask how to maintain zero downtime using HPA, because rolling updates cannot work with HPA.
@mohammedsaif934 4 years ago
Hi Venkat, the metrics server is not working with the new repo. I am unable to see kubectl top nodes after applying the manifest from the GitHub repo.
@justmeandopensource 4 years ago
Hi Mohammed, thanks for watching. Let me know how you deployed metrics-server? Things might have changed and I will have to re-test it.
@romanvolovyk968 3 years ago
@@justmeandopensource github.com/kubernetes-incubator
@rameshbeee 5 years ago
Nice video. I was trying to install Spinnaker in my local cluster provisioned through Kubespray. If you have any experience with this, please share or make a video.
@justmeandopensource 5 years ago
Hi Ramesh, thanks for watching this video. I am currently on vacation and will surely make a video on Spinnaker on my return. Have you tried deploying Spinnaker using Helm?
@rameshbeee 5 years ago
@@justmeandopensource yes I tried with helm but the cluster is crashing for some reason
@justmeandopensource 5 years ago
@@rameshbeee It depends on how you set up your cluster. I too tried Kubespray on my laptop, which has 16G RAM. It deploys a production-grade Kubernetes cluster with multi-master and multi-etcd in a high availability setup, but it required lots of CPU and memory resources to run the cluster. See if you have got enough RAM and CPU. I guess your problem is insufficient RAM in your cluster. Thanks, Venkat
@VinuezaDario 4 years ago
Hi, can you help me with an error I am seeing in the workers' logs in the Kubernetes installation: http: superfluous response.WriteHeader call from k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader (httplog.go:197)
@justmeandopensource 4 years ago
Hi Darvin, can you give me more context around this error?
@shanmugamn7916 4 years ago
Does HPA control all namespaces within the cluster, or only the kube-system namespace?
@justmeandopensource 4 years ago
Hi Shanmugam, thanks for watching this video. May I know what you mean by "control all namespaces" please?
@justmeandopensource 4 years ago
HPA is cluster wide and can be used with deployments in any namespace.
@tamilselvan8343 4 years ago
Hi Venkat sir, I need a small clarification. 1. How does the communication happen from metrics-server to HPA? That is, how does HPA know which replicaset needs to scale out or scale in? 2. You say the cooling periods are 3 mins and 2 mins. My doubt is: if I set max-replicas=3, will it create pods one by one with a 3-minute interval and scale them down one by one with a 2-minute interval? Is that correct? 3. As per my understanding, metrics-server collects both node utilization and pod utilization, which is then sent to the kubelet. Is that correct? Please clarify my doubts.
@justmeandopensource 4 years ago
Hi Tamil, 1. The HPA resource will query the metrics for the given pods at a periodic interval, usually 10 or 15 seconds. The metrics for nodes/pods are collected by the metrics-server pod. 2. The cooling period is the period between two scaling operations. It doesn't mean one pod at a time. If it has scaled up by 2 pods and the threshold is still not within the limit, it won't immediately scale again; it will wait for the cooling period to finish before starting the next scaling operation. But in one scaling operation it can add as many replicas as necessary. 3. Metrics server collects node/pod metrics. Using kubectl you can query the API server to get the metrics (e.g. kubectl top pods, kubectl top nodes). Cheers.
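For reference, newer Kubernetes releases (autoscaling/v2beta2 and later, which postdate this video) expose these cooldowns directly on the HPA via the behavior field — a hedged sketch:

spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # wait 5 minutes before scaling down
    scaleUp:
      stabilizationWindowSeconds: 0     # scale up immediately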
@tamilselvan8343 4 years ago
@@justmeandopensource Hi sir, thanks for your quick response to my doubts. Can you make videos on readiness and liveness probes in Kubernetes? That's my request.
@justmeandopensource 4 years ago
@@tamilselvan8343 I have already recorded video for that and it will be released on 27th April. If you are not in the Slack workspace for this channel, you can join with below invitation. There are lots of like minded people there for any questions you may have. The calendar channel in this workspace contains details about upcoming videos. kzbin.info?redir_token=Jojwxt5qSKllmUaFwgR36c-XKb58MTU4NjM2MjY0OEAxNTg2Mjc2MjQ4&event=channel_banner&q=https%3A%2F%2Fjoin.slack.com%2Ft%2Fjustmeopensource%2Fshared_invite%2FenQtODg4NDcxMTg5Mjk2LTgyNWJkMzlmNzRlMDdhYzc2ZjA3NjA0N2ZkODg5NzAyMzNjMGY4OGJjZDkzNmRhNDU1ZjEzZjc1NzhiMjBlMmI
@teknoweuteuh7550 4 years ago
What operating system do you use?
@justmeandopensource 4 years ago
Manjaro Linux with I3 tiling window manager and zsh, oh-my-zsh, zsh-autosuggestions. Cheers.
@rebelmoon-aj 4 years ago
Good one.
@justmeandopensource 4 years ago
Hi Ajit, thanks for watching.
@nguyenanhnguyen7658 2 years ago
Cool! Thanks man. Can it scale a Deployment or even a Node?!
@justmeandopensource 2 years ago
No, it's a pod autoscaler.
@VinuezaDario 4 years ago
Hi, I execute kubectl top pods, but this is my response: Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
@justmeandopensource 4 years ago
Hi Darvin, thanks for watching. This method should work; I have tested it. Check the logs of the metrics-server pod. I have also recorded a video on installing metrics-server using Helm which will be released next week. Thanks.
@MuhammadAhmad-ok2gu 4 years ago
Hi Venkat, what is the tool/script you are using to auto-complete a pod name?
@justmeandopensource 4 years ago
Hi Ahmad, thanks for watching this video. I use ZSH shell and oh-my-zsh. I use kubectl plugin in my .zshrc configuration for kubectl command auto-completion. oh-my-zsh: github.com/ohmyzsh/ohmyzsh thorsten-hans.com/autocompletion-for-kubectl-and-aliases-using-oh-my-zsh Cheers.
@iammrchetan 4 years ago
Refer to this: kubernetes.io/docs/reference/kubectl/cheatsheet/ (kubectl autocomplete)
@justmeandopensource 4 years ago
Thanks for sharing.
@manikandans8808 5 years ago
Siege is not working for me; I couldn't install it on Ubuntu 19.04. Is there any tool other than siege?
@justmeandopensource 5 years ago
Can you be more specific please? What did you try and how is it not working? Thanks
@yuvraj-6560 3 years ago
Had a question: can we restrict the pods to just one request? I mean, one pod should handle just one request at a time. Can we incorporate that task in HPA? Thanks in advance.
@justmeandopensource 3 years ago
Hi Yuvraj, thanks for watching. I don't think that is possible. It depends on the application you are running inside the pod as containers. In any case, HPAs can't do what you expect. Limiting the number of requests that a pod can accept can be done using a service mesh framework, or simply, if you are just running a web server like Apache or Nginx in your pod container, you can configure the web server to serve one request at maximum.
@yuvraj-6560 3 years ago
@@justmeandopensource Thanks for the info, Venkat. Your video resources have been very helpful as this is what someone would need to understand the system better. I’m new to Devops community currently working to setup kubernetes for our product. I’m pretty sure I’ll be getting insight through your other videos going forward. Keep making em. Cheers 🥂
@justmeandopensource 3 years ago
@@yuvraj-6560 Cool. Wish you all the success in your adventurous Devops role.
@chetansogra8452 4 years ago
After rebooting my master node, I am getting this error: "The connection to the server 172.31.31.113:6443 was refused - did you specify the right host or port?" I have already disabled swap.
@justmeandopensource 4 years ago
Hi Chetan, thanks for watching. How is your cluster provisioned? Maybe it's not running the api-server on port 6443.
@chetansogra8452 4 years ago
@@justmeandopensource Exactly, this is the problem. When I reboot the master server, the API server stops working. How can we start the API server manually on port 6443? I saw no services running there.
@justmeandopensource 4 years ago
@@chetansogra8452 How is your cluster provisioned?
@multiview5520 1 year ago
Hi, I want to create a cron job for deleting completed and evicted pods. Can you please give me an idea about that?
@justmeandopensource 1 year ago
Hi, see if this video helps kzbin.info/www/bejne/nWHHnpqaZ5x3iMk
@multiview5520 1 year ago
@@justmeandopensource thanks for the reply 👍
@justmeandopensource 1 year ago
@@multiview5520 no worries.
@anandjoshi8331 3 years ago
I have one question for you: what dashboard app are you using on your desktop display that shows CPU and stats? Please let me know when you get a chance.
@justmeandopensource 3 years ago
Hi Anand, thanks for watching. The system status you are seeing on the right side of my desktop is from conky. I used to use conky and had a custom conkyrc config file. But I don't use it these days and unfortunately I didn't save that config file anywhere. You can install conky and hunt for conkyrc file online and you will find lots of cool configs. Cheers.
@anandjoshi8331 3 years ago
@@justmeandopensource perfect will try that :) thank you Sir!
@justmeandopensource 3 years ago
@@anandjoshi8331 You are welcome
@thatiparthi.madhusudhanrao3989 4 years ago
Hi, I have installed metrics server in the cluster. When I try to test kubectl top nodes, below is the error: Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
@justmeandopensource 4 years ago
Hi, thanks for watching. I am not entirely sure why you are getting that error. I just tried it exactly as shown in this video and it's working fine. Please try it on a different cluster if you have access to one.
@chinmaykulkarni3078 4 years ago
Hey, if you are working on minikube, make sure the minikube metrics-server addon is enabled and try again. Check this page for further issues: github.com/kubernetes/kubernetes/issues/67702
@andresdiaz1749 3 years ago
Can I do it with Docker containers?
@justmeandopensource 3 years ago
Hi Andres, do you mean if this can be done in kubernetes clusters running in docker containers? Yes you can.
@thannasip8001 4 years ago
How do we set limits and requests for CPU and memory in the case of Spring Boot?
@justmeandopensource 4 years ago
Hi, it's just a case of trial and error. In a real-world scenario, you will have different environments like dev, staging and production. You will have to know how much memory and CPU your app requires under normal load and at peak times. You will usually have this information through some form of monitoring infrastructure. Otherwise, start by setting these values low and adjust as needed.
@thannasikumar2096 4 years ago
@@justmeandopensource Thanks bro, but it takes higher CPU on startup; after that it comes down to a very low value. That's where I am struggling. Any help will be appreciated.
@justmeandopensource 4 years ago
@@thannasikumar2096 That will be tricky. During startup it requires more resources. So in the manifest, if you specify just the limits for CPU and memory without requests, it will be fine; if requests are not specified, the limits act as the requests as well. So for example, if you specify a memory limit of 1G, the pod will be launched only if a node has at least 1G free memory. Let's say your app requires 750MB during startup and then only 100MB for normal operation. In this case, you can specify a request of 750MB and a limit of 1G; beyond 1G the pod will be terminated. If you specify a request of 100MB, then it can be launched on a node that has at least 100MB free memory; it could land on a node with 200MB free memory, which hasn't got enough free memory for the initial startup. So go for requests 750MB and limit 1G (modify as per your need).
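That suggestion in manifest form, as a sketch (750Mi/1Gi are the example figures from the comment above, not measured values):

resources:
  requests:
    memory: 750Mi   # enough for the startup spike
  limits:
    memory: 1Gi     # pod is killed if it exceeds this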
@vinaykatkam 4 years ago
Hi Venkat, thanks for the detailed explanation. I have tried to install metrics server in my cluster, but it is crashing. As you said, I disabled TLS verification as well, but still no luck; the metrics server pod is still crashing. Can I have your suggestion here? Thanks.
@justmeandopensource 4 years ago
Hi Katukam, thanks for watching. I know a few people experiencing this problem; I too had it. So I decided to do a video on installing metrics-server using Helm. The video will be released this Thursday (Dec 26th). It was originally intended for the new year, but I have brought it forward as a few people are having issues deploying metrics-server. So stay tuned for the video on Thursday. If you have enabled notifications you will receive it; otherwise please search my playlist on Thursday. Cheers.
@vinaykatkam 4 years ago
@@justmeandopensource Good to hear that. Which pod network would you prefer in cluster environments, and which one is good? Suggestions please. Thank you.
@justmeandopensource 4 years ago
@@vinaykatkam I don't manage any production-grade clusters, just local dev clusters for YouTube videos. I use either Flannel (for simplicity) or Calico (for advanced features like pod security policies). There are others as well, like WeaveNet, which is also advanced in terms of the networking features it provides. So you will have to analyze your requirements and choose one appropriately. Bear in mind that if you provisioned your cluster with one overlay network and decide to move to another at a later time, it will be quite complex and require downtime. Cheers.
@justmeandopensource 4 years ago
Suggestion from another viewer: add the below to the metrics-server deployment YAML under the args section. That'll work if you don't have any networking/DNS problems in your cluster. - --kubelet-preferred-address-types=InternalIP
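In context, that flag sits alongside the metrics-server container's other args — a hedged sketch with flags typical of metrics-server deployments of that era (not taken verbatim from the video):

args:
  - --cert-dir=/tmp
  - --secure-port=4443
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP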
@jayeshthamke7860 4 years ago
Hi Venkat, nice tutorial. - Does HPA always scale up and down in a binary way, e.g. 1 or 5 pods in this case? Will the number of running pods fall between 1 and 5, and if yes, in what scenarios? - What other resources can I set a watch on for deployments with HPA? Can you please link the documentation? Thanks, Jayesh
@justmeandopensource 4 years ago
Hi Jayesh, thanks for watching. Not sure what you mean by binary way. It just scales up until the desired metrics (cpu/memory) falls under the set threshold. Also I don't get your second question. Can you try to put it in another way? Sorry.
@jayeshthamke7860 4 years ago
@@justmeandopensource Does HPA create 1 and 5 pods only, or also anything in between, i.e. 3 or 4, depending on load?
@justmeandopensource 4 years ago
@@jayeshthamke7860 It will create any number of pods (up to the maximum specified) in increasing order until the threshold falls under the limit.
@jayeshthamke7860 4 years ago
@@justmeandopensource ok it is clear now. So it all depends on load threshold.
@justmeandopensource 4 years ago
@@jayeshthamke7860 Yeah.
@anamikakhanna8069 4 years ago
I cannot find the deploy folder under this git repo; it says no such file or directory.
@justmeandopensource 4 years ago
Hi Anamika, thanks for watching. Is it during the metrics server installation? If so please follow one of the below videos to get metrics-server deployed. kzbin.info/www/bejne/jYfbfGShlMefhrM kzbin.info/www/bejne/hnbWY5aZpL9mpdk
@anamikakhanna8069 4 years ago
The issue with the metrics server is resolved, but another issue I am facing: when the 2nd pod is created and the load comes back to normal (less than 20%), the newly created pod is deleted and the application running on the pod stops working. I mean, it redirects me to the login/starting page.
@sahaniarunitm 3 years ago
Does the scaling increase in steps 1-2-3-4-5, or jump directly to 5 when it crosses the threshold?
@justmeandopensource 3 years ago
Hi Arun, thanks for watching, and that's a great question. HPA will autoscale the pods anywhere between the configured minimum and maximum. It uses an algorithm to determine the number of replicas, so it doesn't necessarily have to be linear like 1-2-3-4-5; it can be 1-4-5. The below documentation will give you more details. kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details
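The core of that algorithm, from the linked docs:

desiredReplicas = ceil( currentReplicas * currentMetricValue / desiredMetricValue )

For example, 2 replicas averaging 60% CPU against a 20% target gives ceil(2 * 60 / 20) = 6, which is then capped at the configured maximum of 5.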
@sahaniarunitm 3 years ago
@@justmeandopensource agree and clear now
@justmeandopensource 3 years ago
@@sahaniarunitm cool
@rahulmalgujar1110 5 years ago
Is it possible to apply custom values for the cooling period? I mean, I want to change the scale-up and scale-down periods from 3 min to 1 min and from 5 min to 3 min respectively. If yes, please reply in the comments ASAP.
@justmeandopensource 5 years ago
kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay
@rahulmalgujar1110 5 years ago
@@justmeandopensource Thanks for your reply, but I have already seen this page and I am not clear on how to use such flags. Can you please explain?
@justmeandopensource 5 years ago
Hi Rahul, I read through the documentation and it's a kube-controller-manager configuration. If you used the kubeadm method to provision your cluster, you can find the kube-controller-manager static manifest in the directory /etc/kubernetes/manifests. Update the manifest and add the flags to the command; you can configure both the downscale and upscale delay there. You may also find the below StackOverflow link useful. stackoverflow.com/questions/50469985/horizontal-pod-autoscaling-in-kubernetes Thanks.
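A hedged sketch of what that edit looked like on clusters of that era; these flags were deprecated in later Kubernetes releases in favour of the HPA behavior field, so verify against your version (the 1m/3m values are the ones requested above):

# /etc/kubernetes/manifests/kube-controller-manager.yaml
spec:
  containers:
  - command:
    - kube-controller-manager
    - --horizontal-pod-autoscaler-upscale-delay=1m
    - --horizontal-pod-autoscaler-downscale-delay=3m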
@justmeandopensource 5 years ago
I just tested it and its working as expected.
@justmeandopensource 5 years ago
Hi Rahul, did you get a chance to test the upscale/downscale delay configuration?
@VinuezaDario 4 years ago
Hi, I executed kubectl create -f metrics-server-deployment.yaml, but I have this error in the metrics log: Error: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot get resource "configmaps" in API group "" in the namespace "kube-system". I don't understand it.
@justmeandopensource 4 years ago
Hi Darvin, I will release an updated video next week on how to deploy metrics server. You can follow that for the metrics-server part and then continue with this video.
@VinuezaDario 4 years ago
@@justmeandopensource This was my error: I only executed kubectl create -f metrics-server-deployment.yaml. The correct way is kubectl create -f . Thanks!
@justmeandopensource 4 years ago
@@VinuezaDario You are welcome. Cheers.
@aprameyakatti6187 4 years ago
Hi Venkat, can you please tell me how to edit the deployment? i.e. kubectl edit deploy nginx
@justmeandopensource 4 years ago
Hi Aprameya, kubectl edit deploy nginx is the right command. What is it you want to edit in the deployment? Thanks.
@aprameyakatti6187 4 years ago
@@justmeandopensource Yes, I want to edit that.
@justmeandopensource 4 years ago
@@aprameyakatti6187 kubectl edit deploy nginx will open up an editor where you can make your desired changes and save it, your pod(s) will be re-created with updated configuration. Cheers.
@aprameyakatti6187 4 years ago
@@justmeandopensource Thanks Venkat, for your video and also for your immediate response.
@justmeandopensource 4 years ago
You are always welcome. Cheers.
@Nimitali 4 years ago
Hi, I tried downloading metrics-server from the same GitHub path as yours, but somehow I couldn't find the deploy folder under metrics-server or any of the YAML files related to 1.8+. Can you cross-check once again? Maybe this video was prepared long back, I guess.
@justmeandopensource 4 years ago
Hi Keenjal, thanks for watching. Please watch the next two videos in this series Kube 35.1 and Kube 35.2 where I have shown different methods of installing metrics server. This one is the recent one. kzbin.info/www/bejne/hnbWY5aZpL9mpdk
@Nimitali 4 years ago
@@justmeandopensource Hi, the Kube 35.2 procedure worked perfectly. The metrics-server pod is in the Running state, but kubectl top nodes and kubectl top pods -n kube-system give the error: Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io). My current setup is 1 master, 2 workers (using Vagrant/VBox). Any possible reason causing this error?
@Nimitali 4 years ago
Checked the kubectl logs:
$ kubectl logs -n kube-system metrics-server-7f6d95d688-gnw8z
I0525 10:39:16.797054 1 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0525 10:39:19.139041 1 secure_serving.go:116] Serving securely on [::]:4443
@justmeandopensource 4 years ago
Hmm. Have you checked the metrics server logs?
@justmeandopensource 4 years ago
See if it works after a while. It usually takes few minutes to scrape metrics.
@subhashboddu3445 4 years ago
Hi Venkat, I want to get a load balancer from AWS for my local kubeadm services. Can you please make a video on this? Thanks in advance.
@justmeandopensource 4 years ago
Hi Subash, thanks for watching. Can you explain your requirement a bit more please? Thanks
@subhashboddu3445 4 years ago
@@justmeandopensource I mean, if we launch a LoadBalancer service in kubeadm it will not get the load balancer from AWS. How will I get this?
@justmeandopensource 4 years ago
@@subhashboddu3445 Where is your cluster provisioned? Only if you have your cluster in the Cloud, you can make use of cloud load balancer.
@subhashboddu3445 4 years ago
Okay, got it. My cluster is on my local PC (kubeadm).
@justmeandopensource 4 years ago
@@subhashboddu3445 Okay. You could use MetalLB for local load balancing. kzbin.info/www/bejne/rorMinygoaaafrs Cheers.
@shanmugamn7916 4 years ago
Nice video. Is it possible to make a deployment using dynamic values?
@justmeandopensource 4 years ago
Hi Shanmugam, thanks for watching this video. Dynamic values for what, exactly?
@shanmugamn7916 4 years ago
@@justmeandopensource Giving memory min/max, CPU min/max and replica count in a separate YAML; when spinning up the deployment, it would need to take the values from that YAML file.
@shanmugamn7916 4 years ago
We use Jenkins to deploy. It first loads that customization file where we mention the data (CPU, memory, replica count), then deploys the deployment file.
@justmeandopensource 4 years ago
@@shanmugamn7916 The process you mentioned is exactly what Helm does. You create a template file with some default values (placeholder) and while deploying that you can override the values from a different file.
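A tiny sketch of that pattern with Helm (chart path, value names and numbers are placeholders):

# values-prod.yaml
replicaCount: 3
resources:
  requests:
    cpu: 200m
    memory: 256Mi

# deploy, overriding the chart's defaults with the file above
helm install myapp ./mychart -f values-prod.yaml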
@shanmugamn7916 4 years ago
Kindly share the major comparison of HPA vs VPA.
@justmeandopensource 4 years ago
Hi Shanmugam, HPA and VPA are very fundamentally different. HPA is horizontal scaling which means adding more replicas when needed and VPA is vertical scaling which means adding more resources to the existing replica. Cheers.
@shanmugamn7916 4 years ago
@@justmeandopensource thanks
@justmeandopensource 4 years ago
@@shanmugamn7916 you are welcome
@ameyamarathe6613 4 years ago
Why is the resource limit being set for the deployment, exactly? And what is the exact relation between resource limits and the target percentage which we set for HPA (in your example, 20%)?
@justmeandopensource 4 years ago
Hi Ameya, thanks for watching. As per the documentation, you have to set the resource limit in order to use auto scaling. If you don't want to set resource limit at the pod level, you can define one at the namespace level.
@ameyamarathe6613 4 years ago
@@justmeandopensource If I set the limit at say 100m, as well as the requests, won't it be something like this: the pod regularly gets load above that capacity, and each time the load exceeds the limit it hangs, gets destroyed, and a new pod gets deployed in its place. The time between termination of the failed pod and creation of the new one will make the service miss some requests. What do you think?
@justmeandopensource 4 years ago
@@ameyamarathe6613 If you set requests to 100m of CPU, then the pod will get scheduled to a worker node that has at least 100m of CPU available. This is to say that the pod requires a minimum of 100m of CPU to work. If you don't set a limit, it can use the maximum available on that worker node, which will cause problems for your other workloads running on that node. So it's always best practice to set both. If you set requests to 100m and limits to 500m, then your pod can use up to 500m and gets terminated if it exceeds that limit.
@ameyamarathe6613 4 years ago
@@justmeandopensource My actual question was something like this: ibb.co/L1Pg4qz . So in order for it to work, we need to set actual values based on the load we are getting. How do I set the perfect resource limits and the perfect target percentage?
@justmeandopensource 4 years ago
@@ameyamarathe6613 "Target percentage is 50% which equals 400m". I don't understand this. How did you come up with this 400m? How many pods are running in your deployment? First of all, if you know your pod is going to use 120m then why setting the resource limit to 100m?
@karteekkr 5 years ago
How do we scale up based on memory utilization? We never faced issues with CPU utilization; it's always memory causing us problems.
@justmeandopensource 5 years ago
Hi Kartheek, Try this one (update according to your needs), although I haven't tested it personally. pastebin.com/YwBV8bG0 Please let me know if it worked. I will add the code to my github repo. Thanks, Venkat
@karteekkr 5 years ago
@@justmeandopensource Thanks for the reply, but no luck. Getting the below error:
error validating data: ValidationError(HorizontalPodAutoscaler.spec): unknown field "metrics" in io.k8s.api.autoscaling.v1.HorizontalPodAutoscalerSpec
@justmeandopensource 5 years ago
@@karteekkr Okay. I will play with it when I get some time this weekend and make a video of it. Thanks.
@justmeandopensource 5 years ago
@@karteekkr I tested the autoscaling based on memory utilization and its working fine. Also released a video. Please check the below link if you are interested. Thanks. kzbin.info/www/bejne/gYSYfq2Baap3nZo
@aprameyakatti6187 4 years ago
Hi Venkat, can you please tell me what this error is?
sudo siege -q -c 5 -t 2m ip-192-168-77-52.us-west-2.compute.internal:31476
[error] descriptor table full sock.c:119: Too many open files
[error] descriptor table full sock.c:119: Too many open files
[error] descriptor table full sock.c:119: Too many open files
libgcc_s.so.1 must be installed for pthread_cancel to work
Aborted (core dumped)
@justmeandopensource 4 years ago
Hi Aprameya, you will get this error when there is a limit on the number of files you can open. You need to increase the "number of open files" ulimit for the user, I guess. Try running the below command and then see if siege is still failing.
$ ulimit -n 10000
If it still doesn't work, try installing gcc-multilib.
$ sudo apt-get install gcc-multilib
Thanks.
@aprameyakatti6187 4 years ago
@@justmeandopensource Hi Venkat, the above error got cleared, but now I am getting some other types of errors. Please let me know if you have had these:
aprameya@kmaster:~/kube$ sudo siege -q -c 5 2m ip-192-168-111-178.us-west-2.compute.internal:31357
done.
siege aborted due to excessive socket failure; you can change the failure threshold in $HOME/.siegerc
Transactions: 0 hits
Availability: 0.00 %
Elapsed time: 108.31 secs
Data transferred: 0.00 MB
Response time: 0.00 secs
Transaction rate: 0.00 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 0.00
Successful transactions: 0
Failed transactions: 1028
Longest transaction: 0.00
Shortest transaction: 0.00
FILE: /var/log/siege.log
You can disable this annoying message by editing the .siegerc file in your home directory; change the directive 'show-logfile' to false...
@justmeandopensource 4 years ago
Hi Aprameya, this is the command you are using:
$ sudo siege -q -c 5 2m ip-192-168-111-178.us-west-2.compute.internal:31357
I see you are setting the concurrency limit to 5 (-c 5). That's fine. But the next option is 2m; if you want to run the test for 2 minutes, you missed the -t option. Check again with the below command.
$ sudo siege -q -c 5 -t 2m ip-192-168-111-178.us-west-2.compute.internal:31357
But I don't think that is the cause of your problem. Please try this first. Thanks
@justmeandopensource 4 years ago
Hi Aprameya, Can you also try this to increase the maximum socket connections. Just the run the below command on the machine where you are running siege. $ sudo sysctl -w net.core.somaxconn=1024 Now try the siege command. Thanks.
@aprameyakatti6187 4 years ago
@@justmeandopensource Hi Venkat, as you suggested I am getting the same problem, but this time I'm not getting "siege aborted due to excessive socket failure". I also tried "$ sudo sysctl -w net.core.somaxconn=1024" but the problem remains. And thank you very much for your response.
@devadev3214 3 years ago
In the GitHub link there is no deploy folder now; it's not showing.
@justmeandopensource 3 years ago
Hi Dev, thanks for watching. Yeah in the recent master branch the deploy folder is missing. I believe I used v0.3.2 tag of this repo when I recorded this video. Try github.com/kubernetes-sigs/metrics-server/tree/v0.3.2 Cheers.
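For anyone following along today, a sketch of pinning to that tag; the deploy/1.8+ path is the one the video installs from, and v0.3.2 is the tag Venkat believes he used:

git clone https://github.com/kubernetes-sigs/metrics-server.git
cd metrics-server
git checkout v0.3.2
kubectl apply -f deploy/1.8+/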
@devadev3214 3 years ago
@@justmeandopensource tq :)
@justmeandopensource 3 years ago
@@devadev3214 You are welcome.
@selfmeditation6156 5 years ago
Where is the link in the description?
@justmeandopensource 5 years ago
Hi Srikanth, which link are you referring to? Thanks
@ronitjha2672 5 years ago
I am trying to use metrics-server with a minikube instance. When I describe the metrics-server pod, which is in the Running state, I get the errors below. Can anyone help me fix this?
I0709 20:09:39.172618 1 serving.go:312] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
I0709 20:09:39.809396 1 secure_serving.go:116] Serving securely on [::]:443
E0709 20:10:39.818255 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:minikube: unable to fetch metrics from Kubelet minikube (192.168.64.2): request failed - "403 Forbidden", response: "Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=stats)"
(the same "403 Forbidden" manager.go:111 error repeats roughly every minute)
E0709 20:12:30.573245 1 reststorage.go:128] unable to fetch node metrics for node "minikube": no metrics known for node
(this reststorage.go:128 error repeats several times)
E0709 20:13:23.360153 1 reststorage.go:147] unable to fetch pod metrics for pod kube-system/coredns-fb8b8dccf-rgw2r: no metrics known for pod
(the same reststorage.go:147 "no metrics known for pod" error repeats for every kube-system pod: kube-controller-manager-minikube, storage-provisioner, etcd-minikube, kube-proxy-sjvbt, coredns-fb8b8dccf-h7mpl, metrics-server-7bddf85f5c-qjs5x, kubernetes-dashboard-d7c9687c7-bzmbh, kube-addon-manager-minikube, kube-scheduler-minikube, kube-apiserver-minikube)
@justmeandopensource 5 years ago
Hi Ronit, Thanks for watching this video. I haven't tested this video on a minikube k8s environment. And I am not sure how you installed metrics-server in your minikube. You don't have to follow what I did in this video. Minikube has a metrics-server addon which you can enable. $ minikube addons list $ minikube addons enable metrics-server See if that helps. Thanks.
@ronitjha2672 5 years ago
@@justmeandopensource Thanks for getting back to me. I will try this option.
@justmeandopensource 5 years ago
@@ronitjha2672 Cool. Let me know how you get on with that. Thanks.
@ashishkarpe 4 years ago
Getting this error:
# deployments "kube-state-metrics" was not valid:
# * : Invalid value: "The edited file failed validation": [couldn't get version/kind; json parse error: invalid character 'a' looking for beginning of value, [invalid character 'a' looking for beginning of value, invalid character 'a' looking for beginning of value]]
while I tried to add a command:
spec:
  containers:
  - image: quay.io/coreos/kube-state-metrics:v1.9.5
  - command:
    - /metrics-server
    - --metric-resolution=30s
    - --kubelet-insecure-tls
    - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
@somnathpandey3692 4 years ago
@Ashish, hi, please install metrics-server using helm3; it's easy to install. URL for Helm: helm.sh/docs/intro/install/
@somnathpandey3692 4 years ago
it's helm3
@ashishkarpe 4 years ago
@@somnathpandey3692 No, it's a YAML deployment.
@justmeandopensource 4 years ago
@@somnathpandey3692 thanks for your response.
@rahulmalgujar1110 5 years ago
I am doing exactly as you explained, but after setting the limits and requests, when I create the HPA and run kubectl get hpa, I am still getting <unknown>/50% in targets. What do I need to do? Please explain. And thanks for explaining autoscaling in such a simple manner.
@justmeandopensource 5 years ago
Hi Rahul, thanks for watching this video. Hope you installed the metrics server. You have to wait a little while before the metrics server can gather CPU metrics from your pods. Have you checked the output of "kubectl top pods"? It should give you current CPU and memory utilization. Thanks
@rahulmalgujar1110 5 years ago
@@justmeandopensource yes I have installed metrics server as well and it also gives current CPU utilization after running kubectl top pods.
@justmeandopensource 5 years ago
When I was doing a video on autoscaling based on memory utilization, I noticed the same effect (<unknown>/50%). kzbin.info/www/bejne/gYSYfq2Baap3nZo But after I deployed the resource that needs to be autoscaled, it showed the real utilization percentage.
@rahulmalgujar1110 5 years ago
@justmeandopensource I have 2 services deployed in the default namespace; one is just a demo application and the other is the actual one. When I try to autoscale the actual service using an HPA, it shows <unknown> only, but the demo app is autoscaled properly and shows 5%/50% when I execute "kubectl get hpa". I don't understand why this is happening; I have tried creating the autoscaler both from a YAML file and with commands. Can you please share your email address?
@justmeandopensource 5 years ago
Hi Rahul, so autoscaling is not completely broken: it works for the demo app but not for the other one. In that case the issue is with the HPA resource. The HPA you created applies to only one deployment (the one identified in the scaleTargetRef section of the HPA resource). If the other app is not autoscaling, I suspect you didn't create a second HPA. You have to create another HPA resource with scaleTargetRef pointing to the correct deployment. Hope this makes sense. Thanks.
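For reference, a minimal sketch of such an HPA (the names are placeholders for your actual deployment):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: actual-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: actual-app        # the deployment that currently shows <unknown>
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50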
@amankawre5365 3 years ago
Hi, I am doing the same steps as you mentioned; the only difference is that I have used a single pod instead of a deployment. The metrics server is working fine. Please help me resolve this:

Type           Status  Reason          Message
----           ------  ------          -------
AbleToScale    False   FailedGetScale  the HPA controller was unable to get the target's current scale: the server could not find the requested resource
Events:
Type     Reason          Age               From                       Message
----     ------          ----              ----                       -------
Warning  FailedGetScale  6s (x2 over 21s)  horizontal-pod-autoscaler  the server could not find the requested resource
@justmeandopensource 3 years ago
Hi Aman, thanks for watching. The error suggests that the HPA controller could not get the target's current scale. Can you see metrics output when you do "kubectl top nodes" or "kubectl top pods"? Note also that an HPA can only target resources that expose a scale subresource (a Deployment, ReplicaSet or StatefulSet, for example); it cannot scale a standalone pod, which by itself would explain the "could not find the requested resource" error.
@amankawre5365 3 years ago
@justmeandopensource Yes, I am able to see the memory and CPU utilization using "kubectl top nodes" and "kubectl top pods".
@amankawre5365 3 years ago
Sorry for the trouble. I got it running with some changes. Thank you so much Venkat, keep up the good work.
@justmeandopensource 3 years ago
Well done 👍
@iammrchetan 4 years ago
Hi Venkat, my cluster is set up using Kelsey Hightower's "Kubernetes the Hard Way". I had to add the args below to my metrics-server deployment, but I'm still facing problems; the logs from the metrics-server pod are below. "kubectl top nodes/pods" doesn't work and gives the error described below. I would be happy if you can help me. I'm not sure, but I guess I need to enable the aggregation layer in my cluster as in --> kubernetes.io/docs/tasks/access-kubernetes-api/configure-aggregation-layer/

args:
  - /metrics-server
  - --metric-resolution=30s
  - --requestheader-allowed-names=aggregator
  - --cert-dir=/tmp
  - --secure-port=4443
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP

k8sadm@devops:~$ kubectl top pods
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

I1224 07:56:39.864682 1 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W1224 07:56:40.421425 1 authentication.go:296] Cluster doesn't provide requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
I1224 07:56:40.466042 1 secure_serving.go:116] Serving securely on [::]:4443
E1224 07:57:10.549194 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:worker-5: unable to get a valid timestamp for metric point for container "metrics-server" in pod kube-system/metrics-server-6c4fbdc8f9-587v7 on node "192.168.140.139", discarding data: no non-zero timestamp on either CPU or memory
@justmeandopensource 4 years ago
Did you manage to resolve this issue?
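For anyone else on a cluster built the hard way: that "Cluster doesn't provide requestheader-client-ca-file" warning usually means the API server aggregation layer is not configured, which matches the ServiceUnavailable error. A sketch of the kube-apiserver flags involved (the certificate paths are assumptions; substitute your own front-proxy PKI):

--requestheader-client-ca-file=/var/lib/kubernetes/front-proxy-ca.crt
--requestheader-allowed-names=front-proxy-client
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=/var/lib/kubernetes/front-proxy-client.crt
--proxy-client-key-file=/var/lib/kubernetes/front-proxy-client.key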
@HungHoang-yt3xx 4 years ago
Hi Venkat, I followed your video step by step, but I have a problem here:

root@cseiu:~# kubectl top node
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
cseiu       196m         9%     2548Mi          32%
k8s-node1   30m          1%     1856Mi          23%
k8s-node2   <unknown>    <unknown>   <unknown>  <unknown>

The status of node2 is "unknown" and I don't know how to fix this problem. Thank you.
@justmeandopensource 4 years ago
Hi Hung, thanks for watching. So it works on node1 but not on node2. Can you try deleting and re-deploying the metrics server? Thanks.
@HungHoang-yt3xx 4 years ago
@justmeandopensource Thanks for your answer. I have tried deleting and re-deploying the metrics-server many times, but it did not fix the problem. Thanks.
@justmeandopensource 4 years ago
@HungHoang-yt3xx I believe the problem is with the node and not with the metrics-server deployment itself. There is something wrong with that worker node.
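Since the metrics-server scrapes each node's kubelet, an unhealthy or unreachable kubelet on one node shows up exactly like this. If it were my cluster, I would start with something like:

$ kubectl describe node k8s-node2
$ ssh k8s-node2 systemctl status kubelet

and check the kubelet logs on that node for certificate or connectivity errors.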
@somnathpandey3692 4 years ago
Hi Venkat, thanks for all your nice videos. I tried to edit my deployment but got an error; it did not save the edit and wrote the edited file to /tmp/with-some-name.yaml. Below is the saved YAML file. Please take a look and help me understand the reason.

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
# deployments.apps "nginx-basic" was not valid:
# * : Invalid value: "The edited file failed validation": ValidationError(Deployment.spec.template.spec.containers[0]): unknown field "requests" in io.k8s.api.core.v1.Container
#
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-04-24T22:55:15Z"
  generation: 1
  labels:
    run: nginx-basic
  name: nginx-basic
  namespace: default
  resourceVersion: "11520538"
  selfLink: /apis/apps/v1/namespaces/default/deployments/nginx-basic
  uid: 0696e59c-e6ee-4513-844f-14c1a65e6ab4
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nginx-basic
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx-basic
    spec:
      containers:
      - image: somnath.cssp.lab:5000/cssp/nginx:28
        imagePullPolicy: IfNotPresent
        name: nginx-basic
        resources:
          limits:
            cpu: "100m"
        requests:
          cpu: "100m"
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: regcred
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: "2020-04-24T22:55:16Z"
    lastUpdateTime: "2020-04-24T22:55:16Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2020-04-24T22:55:15Z"
    lastUpdateTime: "2020-04-24T22:55:16Z"
    message: ReplicaSet "nginx-basic-859c5c97cd" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2
@somnathpandey3692 4 years ago
Below are the version details:

[spandey@somnath ~]$ kubectl version --short
Client Version: v1.17.3
Server Version: v1.17.3
[spandey@somnath ~]$ helm version --short
v3.2.0+ge11b7ce
@justmeandopensource 4 years ago
@somnathpandey3692 Hi, thanks for watching. The problem is with indentation: "requests" should be indented at the same level as "limits", since both limits and requests sit under the "resources" field. The way you edited it, "requests" ended up at the same level as "resources", which is why the file failed validation. Cheers.
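For reference, a minimal sketch of the correct structure, with both blocks nested one level under resources:

resources:
  limits:
    cpu: "100m"
  requests:
    cpu: "100m"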
@somnathpandey3692 4 years ago
@justmeandopensource Thanks for your reply. Once I had the indentation right, I understood why it was throwing that error: it was due to a duplicate "resources" property. I had defined a custom one while the container already had an empty "resources: {}". Now I am able to edit it. Thanks again, and keep sharing your videos, they help a lot.
@anamikakhanna8069 4 years ago
Please make a video on pod affinity and pod anti-affinity.
@justmeandopensource 4 years ago
Hi Anamika, thanks for watching. That's in my list and I will get it done sometime. Cheers.
@anamikakhanna8069 4 years ago
I have a requirement for a production environment: I want to run one pod normally, have it scale out to a second pod as the load increases, and scale back to one pod when the load is back to normal. But how do I handle the ongoing requests on the newly created pod? I used an HPA to create the new pod and session affinity so that sessions stay on the same pod. Is there anything else that needs to be added?
@anamikakhanna8069 4 years ago
@justmeandopensource And I very much appreciate your quick response. Cheers.
@justmeandopensource 4 years ago
@anamikakhanna8069 I haven't actually done much on affinities, to be honest.
@anamikakhanna8069 4 years ago
@justmeandopensource Can you please help me with managing the sessions? I mean, once the extra pod has been created and the load drops back to normal, the traffic being served by that pod should move back to the initial pod. I think you got my question 🙈🙈?
@pernankilvivek8774 3 years ago
The path has now changed; you can run the below:

kubectl apply -f github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.0/components.yaml

Just deploying this will do, but you still have to make the changes Venkat described under the deployment section.
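One way to do that (a sketch; the flags are the same ones shown in the video) is to download the manifest first, edit the metrics-server container args, and then apply it:

$ wget github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.0/components.yaml
$ vim components.yaml    # add the flags below under the metrics-server container args
    - --kubelet-insecure-tls
    - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
$ kubectl apply -f components.yaml

Alternatively, apply it as-is and then run "kubectl -n kube-system edit deployment metrics-server".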
@swatkats9073 3 years ago
How do we make the change when we pull this way?
@chereekandy 4 years ago
I am getting this error:

```
root@master-node:/play/kubernetes/yamls# kubectl describe hpa nginx-deploy
Name:               nginx-deploy
Namespace:          default
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Wed, 29 Jan 2020 21:25:37 +0530
Reference:          Deployment/nginx-deploy
Metrics:            ( current / target )
  resource cpu on pods (as a percentage of request): <unknown> / 20%
Min replicas:       1
Max replicas:       5
Deployment pods:    0 current / 0 desired
Conditions:
  Type         Status  Reason          Message
  ----         ------  ------          -------
  AbleToScale  False   FailedGetScale  the HPA controller was unable to get the target's current scale: no matches for kind "Deployment" in group "extensions"
Events:
  Type     Reason          Age               From                       Message
  ----     ------          ----              ----                       -------
  Warning  FailedGetScale  2s (x6 over 78s)  horizontal-pod-autoscaler  no matches for kind "Deployment" in group "extensions"
root@master-node:/play/kubernetes/yamls#
```
@justmeandopensource 4 years ago
Hi Vipin, thanks for watching. What version of Kubernetes are you running in your cluster? From k8s v1.16, some apiVersions have been deprecated, one of them being extensions/v1beta1. Check your manifests, and wherever you use extensions/v1beta1 as the apiVersion for a Deployment, change it to apps/v1. Thanks.
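For example, the top of the deployment manifest changes like this (a sketch):

apiVersion: apps/v1        # previously extensions/v1beta1
kind: Deployment

Note that with apps/v1 the spec.selector field is also mandatory, so make sure the deployment has a selector matching the pod template labels.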
@yogithakakarla1716 1 year ago
😇
@justmeandopensource 1 year ago
Thanks for watching
@JoaoPedro-zn8bn 3 years ago
Don't forget to put double quotes around the values of your CPU limits and requests in the YAML file of the service; otherwise the HPA may not work and will complain about "missing request for cpu".
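A minimal snippet with the values quoted (the numbers are just examples):

resources:
  requests:
    cpu: "100m"
  limits:
    cpu: "200m"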
@MS-rt8vl 3 years ago
Brother, if you have the time, please post your videos in Tamil as well. It would be very easy for Tamil brothers and sisters to understand. Thanks.
@justmeandopensource 3 years ago
Hi Murugan, thanks for watching. I am afraid I wouldn't have time to do these again in Tamil. Thanks for your interest though. Cheers.
@personaldata9660 4 years ago
Hi Venkat, thanks for sharing about HPA. My question is about resource limits: if one pod is limited to 20% of my worker's CPU and the pod autoscales to 3 replicas, would the total CPU used on my worker be 60%?
@justmeandopensource 4 years ago
Hi, thanks for watching this video. The HPA looks at the average CPU utilization of all the pods in your deployment and compares it with the threshold you have set in the HPA object. Simply put, it is the total CPU utilization of all the pods divided by the total CPU requests of all the pods. Here is the text from the documentation about how it is calculated:

"""The autoscaler is implemented as a control loop. It periodically queries pods described by Status.PodSelector of Scale subresource, and collects their CPU utilization. Then, it compares the arithmetic mean of the pods' CPU utilization with the target defined in Spec.CPUUtilization, and adjusts the replicas of the Scale if needed to match the target (preserving condition: MinReplicas <= Replicas <= MaxReplicas)."""
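As a worked example (the numbers are illustrative): with 2 replicas, each requesting 100m CPU and each currently using 90m, the average utilization is 90%. Against a 50% target, the HPA computes

desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization)
                = ceil(2 * 90 / 50)
                = ceil(3.6)
                = 4

so the deployment is scaled out to 4 pods (still capped by maxReplicas). In other words, the target percentage is measured against the pods' CPU requests, not against the whole worker node's capacity.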