Man, despite having defined the --apiserver-advertise-address flag in the kubeadm init command (kubeadm init --apiserver-advertise-address=192.168.10.17), the IP of kube-apiserver-master and etcd-master is 10.0.2.15 (the NAT interface), and I think this is causing CrashLoopBackOff when installing Flannel. Do you know how I can fix it?
@LearnDevOpswithSerge · 7 days ago
Hi,
1. You can check your Kubernetes API server IP address using the command "kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep advert", or find it in the file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node.
2. Recheck the internal IP addresses (5:55) of the nodes using the command "kubectl get node -o wide". They shouldn't be 10.0.2.15.
3. Recheck the interface in the flannel config (8:19).
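If the nodes registered with the NAT address, flannel can also be pointed at the host-only interface explicitly. A fragment of the kube-flannel DaemonSet container args; the interface name enp0s8 is an assumption here (check the real name with "ip a"):

```yaml
# kube-flannel container args (fragment); the --iface flag makes flannel
# bind the host-only network instead of the 10.0.2.15 NAT interface
args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=enp0s8   # assumed interface name; verify with "ip a" on the nodes
```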
@joshuaedward6893 · 17 days ago
Hi there, I can't seem to get my network working at all. I'm using the same manifest as you (edited the necessary things), but it's not working; my kube-flannel pod has been stuck in the Terminating phase for more than a day now.
@LearnDevOpswithSerge · 16 days ago
Hi, try to check the logs (kubectl logs -p POD-NAME).
@ManjunathKundargi · 6 months ago
Hi, what will the command at 3:55 be if I am using Docker instead of containerd?
@LearnDevOpswithSerge · 6 months ago
Hi, Kubernetes has not supported Docker since version 1.24. Here is more information about it: kubernetes.io/blog/2022/02/17/dockershim-faq/
@richardsrobin_r · 2 months ago
For the Vagrantfile networking settings, which IP should be used for --apiserver-advertise-address? The Windows host's IPv4 address?
@LearnDevOpswithSerge · 2 months ago
Hi, here is my config.

Vagrantfile:
```ruby
3.times do |i|
  config.vm.define "node#{i+1}" do |node|
    node.vm.hostname = "node#{i+1}"
    node.vm.network "public_network",
      bridge: "Intel(R) Wireless-AC 9560 160MHz",
      ip: "192.168.1.5#{i+1}"
  end
end
```

--apiserver-advertise-address:
```
root@node1:~# cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep advertise
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.1.51:6443
    - --advertise-address=192.168.1.51
```
@RaviKumar-ko2hn · 2 months ago
Hi, I am trying to set up Kubernetes on Ubuntu 22.04 (Jammy) on an AWS EC2 instance, but I have an issue with the pod network. In the logs I get an error like "dockershim.sock: no such file or directory". How do I fix this issue?
@LearnDevOpswithSerge · 2 months ago
Hi, the best choice is not to use Dockershim; it has been removed as of Kubernetes v1.24. kubernetes.io/blog/2022/05/03/dockershim-historical-context/
@hamzaerrahma9858 · 4 months ago
Thank you for this video. I initialized my cluster and everything was working perfectly. However, after I rebooted my machines, the "kubectl get nodes" command stopped working. When I run "sudo kubectl get nodes", I get the following error message: The connection to the server 192.168.56.101:6443 was refused - did you specify the right host or port?
@LearnDevOpswithSerge · 4 months ago
Hi, have you disabled swap? stackoverflow.com/questions/56737867/the-connection-to-the-server-x-x-x-6443-was-refused-did-you-specify-the-right
Try to execute these commands on the master node:
1. sudo -i
2. swapoff -a
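Note that "swapoff -a" only lasts until the next reboot, which matches the symptom here (everything worked until the machines were rebooted). A sketch of making the change permanent by commenting out the swap entry in /etc/fstab; the snippet below runs the same sed against a temporary fstab-style copy so you can see the effect before touching the real file:

```shell
# On a real node (as root):  swapoff -a && sed -i '/\sswap\s/ s/^/#/' /etc/fstab
# Demonstrated here on a temporary copy:
fstab=$(mktemp)
printf '/dev/sda2 / ext4 defaults 0 1\n/swap.img none swap sw 0 0\n' > "$fstab"
sed -i '/\sswap\s/ s/^/#/' "$fstab"   # comment out every swap mount line
grep swap "$fstab"                    # prints: #/swap.img none swap sw 0 0
```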
@hamzaerrahma9858 · 4 months ago
@LearnDevOpswithSerge Thank you very much!
@PratapKumar-sm9if · 5 months ago
Hi Serge, I have followed your tutorial; however, the kube-proxy status shows a CrashLoopBackOff error. Any suggestions, please? Thanks
@LearnDevOpswithSerge · 5 months ago
Hi, I guess you mean Flannel when talking about kube-proxy.
1. Check the logs of the crashed container: "kubectl -n NAMESPACE logs -p POD-NAME"
2. Check the events of the pod in the CrashLoopBackOff state: "kubectl -n NAMESPACE describe pod POD-NAME"
3. Analyze the information from the 1st and 2nd list items.
@magicalvibez2037 · 4 months ago
How did you create the master and worker nodes?
@LearnDevOpswithSerge · 4 months ago
Hi, I run virtual machines using VirtualBox and Vagrant.
@satishbarnana2267 · 3 months ago
How can we configure 3 VMs on a single machine, and can we run master and worker nodes on the same machine?
@LearnDevOpswithSerge · 3 months ago
Hi, I use VirtualBox with Vagrant to create VMs. On Linux, you can use KVM or VirtualBox too. Yes, you can use the master for your workloads; to do it, remove the taint on the master/control-plane node ("kubectl taint nodes --all node-role.kubernetes.io/control-plane-").
@badex3301 · 7 months ago
Thanks for the informative vid. It would be nice to have a Google Doc stating the command at 3:55.
@LearnDevOpswithSerge · 7 months ago
Hi,
```shell
containerd config default \
  | sed 's/SystemdCgroup = false/SystemdCgroup = true/' \
  | sed 's/sandbox_image = "registry.k8s.io\/pause:3.6"/sandbox_image = "registry.k8s.io\/pause:3.9"/' \
  | sudo tee /etc/containerd/config.toml
```
@badex3301 · 7 months ago
@LearnDevOpswithSerge Thank you! Subscribed.
@shang-mokai5428 · 4 months ago
Thank you very much!!! Working well for me.
@timetraveller2045 · 5 months ago
Many thanks! Very useful video!
@IndianSumaira · 3 months ago
Did it work for you? I am facing some trouble when I try to init the cluster; it says:

```
st:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
```
@suryagowda3663 · 3 months ago
Hi, I want to learn Kubernetes step by step with a syllabus. Can you share one here?
@LearnDevOpswithSerge · 3 months ago
Hi, I'm going to make a video about it.
@mazahirahmed · 1 month ago
8:20: the name of the internal network interface is not the same on master and worker for me, and I am getting an error.
@LearnDevOpswithSerge · 1 month ago
Hi,
1. You can set a label on each node group that shares an interface name. For example:
   node1 - label1
   node2 - label1
   node3 - label2
   node4 - label2
2. Make 2 DaemonSets with different args for the containers and set a different nodeSelector on each DaemonSet.
3. I suggest you watch my video: kzbin.info/www/bejne/d2rTmaxunteFjrs
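A sketch of how one of those two DaemonSets could look. The name kube-flannel-ds-group1, the label flannel-group: group1, the image tag, and the interface enp0s8 are all illustrative assumptions, not the exact manifest from the video; the second DaemonSet would use the other label and interface name:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-group1        # hypothetical name for the first node group
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      flannel-group: group1
  template:
    metadata:
      labels:
        app: flannel
        flannel-group: group1
    spec:
      nodeSelector:
        flannel-group: group1         # set with: kubectl label node node1 flannel-group=group1
      containers:
        - name: kube-flannel
          image: docker.io/flannel/flannel:v0.24.2   # use the tag from your own manifest
          args:
            - --ip-masq
            - --kube-subnet-mgr
            - --iface=enp0s8          # interface name shared by this node group (assumption)
```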
@suryagowda3663 · 3 months ago
What is kubeadm? What is it used for? When should we use it?
At 03:47 you prepared a command; where do I get it from?
@LearnDevOpswithSerge · 4 months ago
Hi,
```shell
containerd config default \
  | sed 's/SystemdCgroup = false/SystemdCgroup = true/' \
  | sed 's/sandbox_image = "registry.k8s.io\/pause:3.6"/sandbox_image = "registry.k8s.io\/pause:3.9"/' \
  | sudo tee /etc/containerd/config.toml
```
@christopherobinna2140 · 4 months ago
@LearnDevOpswithSerge How did you generate this command? Is there any documentation for it? Up to this stage you have explained everything in detail, but this was just copied and pasted out of nowhere. The tutorial is very great!
@LearnDevOpswithSerge · 4 months ago
@christopherobinna2140 Hi,
1. SystemdCgroup = true # The documentation about it: kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd
2. The issue with the sandbox image:
github.com/cri-o/cri-o/issues/6985
discuss.kubernetes.io/t/can-not-install-kubernetes-cluster-with-kubeadm/24079
----
Note that the version of the sandbox image in each containerd release can change, so check it. For example, in my video the version is 3.6 and now it is 3.8. The message while installing Kubernetes:
W0725 06:32:35.829024 14158 checks.go:844] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended to use "registry.k8s.io/pause:3.9" as the CRI sandbox image.
@sumanth6114 · 3 months ago
Dude, at 3:49 there is a big command; where can I get that?
@LearnDevOpswithSerge · 3 months ago
Hi,
```shell
containerd config default \
  | sed 's/SystemdCgroup = false/SystemdCgroup = true/' \
  | sed 's/sandbox_image = "registry.k8s.io\/pause:3.6"/sandbox_image = "registry.k8s.io\/pause:3.9"/' \
  | sudo tee /etc/containerd/config.toml
```
@laurentsun4707 · 5 months ago
Hello, for the command at 5:35, how do you find the --apiserver-advertise-address?
@LearnDevOpswithSerge · 5 months ago
Hi, it is the IP of your master node. You can find it with the command "ip address show", or "ip a" for short.
```
vagrant@master:~$ ip address show | grep 192.168.1.50
    inet 192.168.1.50/24 brd 192.168.1.255 scope global enp0s8
```
@laurentsun4707 · 5 months ago
@LearnDevOpswithSerge Thank you! Do you have a video showing how to set up the Kubernetes dashboard?
@LearnDevOpswithSerge · 5 months ago
I haven't done it yet. Thanks for the idea!
@ElliotAlderson-s4l · 3 months ago
@LearnDevOpswithSerge Also, where did you find the value for --pod-network-cidr?
Why don't you get an "invalid argument" error with the command "sudo sysctl --system"?
@LearnDevOpswithSerge · 2 months ago
Hi, this command is correct:
```
$ sysctl --help | grep system
     --system            read values from all system directories
```
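For context, "sysctl --system" loads every *.conf file under directories like /etc/sysctl.d/. The kubeadm prerequisites typically install a drop-in along these lines (the filename k8s.conf is a common convention, not a requirement):

```
# /etc/sysctl.d/k8s.conf  (picked up by "sudo sysctl --system")
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
```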
@ramankhanna9526 · 6 months ago
Thank you!
@LearnDevOpswithSerge · 6 months ago
You are welcome!
@sathvikbandaru3985 · 4 months ago
Thank you!
@sathvikbandaru3985 · 4 months ago
The cluster setup is ready for me thanks to you, but there was one little issue: my pods were not running properly when using Flannel, but now that I am using Calico the pods are running.
@sandeepthapa9478 · 3 months ago
```shell
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
```
Where did you get this command from? Is it in the k8s documentation?
@LearnDevOpswithSerge · 3 months ago
Hi,
1. SystemdCgroup = true # The documentation about it: kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd
2. The issue with the sandbox image:
github.com/cri-o/cri-o/issues/6985
discuss.kubernetes.io/t/can-not-install-kubernetes-cluster-with-kubeadm/24079
----
Note that the version of the sandbox image in each containerd release can change, so check it. For example, in my video the version is 3.6 and now it is 3.8. The message while installing Kubernetes:
W0725 06:32:35.829024 14158 checks.go:844] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended to use "registry.k8s.io/pause:3.9" as the CRI sandbox image.