Hi Venkat, could you please do videos on the OpenShift platform?
@justmeandopensource 3 years ago
Hi, I have plans to start an OpenShift series at some point. Cheers.
@mikebabs5505 5 years ago
Excellent
@justmeandopensource 5 years ago
Thanks Mike for watching this video and taking the time to comment. Cheers.
@aymenrahal4928 5 years ago
After switching my computer off and on again, I get an error every time I run the command `kubectl get nodes`. Could you help me?
@justmeandopensource 5 years ago
Hi Aymen, could you please explain your cluster setup? How did you set up your cluster? What error do you get when you run the kubectl get nodes command? Thanks
@sudheshpn 5 years ago
Well explained! During the master upgrade, the CoreDNS pod was evicted and re-scheduled to run on my worker node. Once the master node was uncordoned, the CoreDNS pod did not get rescheduled back to the master node. Is that expected? CoreDNS would normally run only on the master node, if I am not wrong.
@justmeandopensource 5 years ago
Hi Sudhesh, thanks for watching this video. I am not sure whether CoreDNS pods are allowed to run only on master nodes. During the initial provisioning of the cluster, we bring up the master node first and the CoreDNS pods get scheduled there. I don't think they need to, or are meant to, run only on master nodes. As long as the pods are running on any node as per the deployment and the replica count is maintained, it shouldn't matter. Thanks.
@sudheshpn 5 years ago
@@justmeandopensource Thank you Venkat!
@justmeandopensource 5 years ago
@@sudheshpn You are welcome.
@sivaguruvinayagam7779 5 years ago
Hi Venkat, when you upgrade kworker1, can you move all the containers manually to kworker2? Because in a production situation we need to be sure that all containers keep working. Thank you.
@justmeandopensource 5 years ago
Hi Siva, thanks for watching this video. Yes, when you are about to upgrade a worker node, you can drain it, which will evict all the pods on that node and re-schedule them on other nodes. Thanks.
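A minimal sketch of that drain/upgrade/uncordon flow, assuming a worker node named kworker1 as in the video (flags shown are from recent kubectl versions; older releases used `--delete-local-data` instead of `--delete-emptydir-data`):

```shell
# Cordon the node and evict its pods so they re-schedule elsewhere.
kubectl drain kworker1 --ignore-daemonsets --delete-emptydir-data

# ... perform the upgrade on kworker1 here ...

# Mark the node schedulable again so new pods can land on it.
kubectl uncordon kworker1
```

Draining does not move running containers live; it evicts the pods, and the controllers (Deployments, ReplicaSets, etc.) recreate them on the remaining nodes.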
@sivaguruvinayagam7779 5 years ago
@@justmeandopensource Thanks for your quick answer, I'll test it. I watch your videos every day. I work with Kubernetes daily, so your videos are very useful for me.
@justmeandopensource 5 years ago
@@sivaguruvinayagam7779 That's great to hear. Thanks for following my channel.
@sivaguruvinayagam7779 5 years ago
Hi Venkat, at my work we have a cluster with 3 servers: one with the control plane and etcd, and two worker servers, but it was not set up with kubeadm. I don't know how I can upgrade my cluster. Thanks for your help.
@justmeandopensource 5 years ago
Hi Sivaguru, thanks for watching this video. Even my setup was similar to yours. I had one master node where etcd was running and two worker nodes. On the worker nodes, you drain them and update kubelet. I am about to do a video on kOps to provision a k8s cluster in AWS. Using kOps, you can also upgrade your cluster. But to be honest, I haven't explored much about upgrading clusters, because in most production use cases we will be using a more automated approach or a managed control plane like EKS or GKE. Thanks.
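For a cluster not managed by kubeadm, the worker-side upgrade can be sketched roughly like this. Everything here is an assumption for illustration: the node name worker1, the target version v1.15.0, a systemd-managed kubelet at /usr/bin/kubelet, and the upstream release download URL:

```shell
# Drain the worker so its pods re-schedule onto the other nodes.
kubectl drain worker1 --ignore-daemonsets

# On worker1: stop kubelet, swap in the new binary, restart.
sudo systemctl stop kubelet
sudo curl -L -o /usr/bin/kubelet \
  https://dl.k8s.io/release/v1.15.0/bin/linux/amd64/kubelet
sudo chmod +x /usr/bin/kubelet
sudo systemctl start kubelet

# Make the node schedulable again once kubelet reports Ready.
kubectl uncordon worker1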
@sivaguruvinayagam7779 5 years ago
@@justmeandopensource thanks for your quick answer.
@justmeandopensource 5 years ago
@@sivaguruvinayagam7779 You are welcome.
@sureshsurya5002 5 years ago
Hey, I just wanted to understand: why are you not draining kmaster before upgrading kubeadm and the cluster? Is there any specific reason for that?
@justmeandopensource 5 years ago
Hi Suresh, thanks for watching this video. In my environment, the master node kmaster only runs cluster-related components like etcd, controller-manager, and scheduler. And there is a taint associated with the master node so that no user workloads are scheduled on it. So I didn't have to drain the node. When you drain a node, it only drains custom workloads, not the cluster components. I did drain the worker nodes where my pods/applications were running. Thanks.
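You can verify that taint yourself; a quick check, assuming the node name kmaster from the video (the taint key differs by Kubernetes version: older clusters use node-role.kubernetes.io/master, newer ones node-role.kubernetes.io/control-plane):

```shell
# Show the NoSchedule taint that keeps user workloads off the master node.
kubectl describe node kmaster | grep -i taint
# Expected something like:
# Taints: node-role.kubernetes.io/master:NoSchedule
```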
@sureshsurya5002 5 years ago
@@justmeandopensource Thank you so much for the clarification. Great videos you make...
@justmeandopensource 5 years ago
@@sureshsurya5002 No worries. You are welcome. Cheers.
@Nimitali 4 years ago
Hi Venkat, I am deploying a custom/multiple scheduler on my cluster (using Vagrant/VirtualBox) but failing to do so. The 2nd scheduler gets created but the default-scheduler goes into Error/CrashLoopBackOff. I checked k8s.io but am unable to build the scheduler binary into a container image (git clone does not work). Would appreciate it if you could suggest something on my issue, and hopefully a separate video on scheduling would benefit a lot of people in future.
@justmeandopensource 4 years ago
Hi Keenjal, thanks for watching. I haven't had a chance to work on or explore custom schedulers yet. I will see if I can do that. Cheers.
@manikanthkommoju3176 5 years ago
Bruh, if you have some time, please make a video on ingress.
@justmeandopensource 5 years ago
Yeah sure. Thanks for following my videos.
@sumithsps007 1 year ago
How do you upgrade a Kubernetes cluster with an external etcd?