[ Kube 30 ] Deploying Kubernetes Cluster using LXC Containers

18,010 views

Just me and Opensource

1 day ago

Comments: 225
@bashardlaleh2110 2 years ago
This channel is so underrated; it's one of the best channels for DevOps.
@justmeandopensource 2 years ago
Hi Bashar, thanks for understanding the value. Cheers.
@JonBrookes 3 years ago
Absolutely fantastic, thanks for this. It's saved me so much time setting up k8s on LXC.
@justmeandopensource 3 years ago
Hi Jon, many thanks for watching. As you may or may not know, kubernetes is planning to remove support for Docker as a container runtime, so going forward we will have to use one of the other container runtimes. I have updated my vagrant environment to use containerd as the runtime instead of Docker. I am yet to update this lxd environment to use containerd. Stay tuned.
@taherboujrida8110 3 years ago
Thank you, pro, for the fantastic, clear, to-the-point, easy and fast K8s deployment — you are a super-janitor. All the best.
@justmeandopensource 3 years ago
Hi, thanks for watching. Cheers.
@JeronimoAvelarFilho 5 years ago
Thanks for the effort and time you put into these Kubernetes tutorials on YouTube. Congratulations on the material.
@justmeandopensource 5 years ago
Thanks Jeronimo.
@meilyandevriyantimor5492 4 years ago
Once again, a very awesome video! After Docker, now I know how to install a Kubernetes cluster using LXC containers. I wish you could make a video about installing a Kubernetes cluster using the rkt container runtime. I'll be waiting for this...
@justmeandopensource 4 years ago
Hi Meilyand, thanks for watching. I was using LXC containers for my kubernetes cluster, but with k8s v1.15 and above the cluster bootstrapping process doesn't work, so I locked it to k8s v1.14.3. I never had a chance to dig into that further. When I get some time I will look into rkt, but I have already recorded videos for the next two months. Cheers.
@techpetla3901 2 years ago
This is a great video. I followed your steps and created a single-node k8s cluster. Some of the k8s options require the shared option set while mounting the rootfs of the lxd container. I tried different methods, like lxc.rootfs.options=shared under raw.lxc, or propagation=shared under devices -> root, but no luck. I'd appreciate it if someone could point to a solution. Note: you can check whether the rootfs is shared by running "findmnt -o TARGET,PROPAGATION" inside the lxd container.
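A possible workaround, not verified here: further down this thread a commenter reports using "mount --make-shared /" inside the container as one of two fixes for an rke install. A minimal sketch of that approach, assuming a container named kmaster:

  lxc exec kmaster bash
  findmnt -o TARGET,PROPAGATION /   # typically shows "private"
  mount --make-rshared /            # mark / and its submounts as shared
  findmnt -o TARGET,PROPAGATION /   # should now show "shared"

Note this change does not survive a container restart; persisting it would need a boot script or systemd unit.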
@Himanshu-Patel-Clarisights 1 year ago
Very few developers teach with this depth!
@justmeandopensource 1 year ago
Hi Himanshu, thanks for your comment. BTW I am not a developer 🫨
@Himanshu-Patel-Clarisights 1 year ago
Thanks for the heads up, Venkat Nagappan 😂😂
@aakashrajguru 3 years ago
Hi Venkat, thanks a lot for making such informative videos. Would it be possible for you to make some videos on Kubernetes Cluster Federation (kubefed) using LXC containers or KIND?
@justmeandopensource 3 years ago
Hi Akash, thanks for watching. I will look into it when I get some time. I have a huge list of user-requested content and I am going through it in order. Cheers.
@aakashrajguru 3 years ago
@@justmeandopensource Thanks a lot, Venkat
@justmeandopensource 3 years ago
You are welcome.
@JulianBG 4 years ago
@Just me and Opensource Very helpful for me overall, although I had a lot of issues. In the end I'm using a Raspberry Pi 4 host (arm64) with Ubuntu 20.04. Inside LXD I've installed CentOS 8, but I had to modify the profile (lxc.mount.auto = proc:rw sys:ro), otherwise the container cannot get any IP (when using sys:rw). The bootstrap shell file also had to be modified to work for CentOS 8.
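For reference, a rough sketch of making that profile change via the CLI rather than editing the YAML by hand — assuming a profile named k8s, and noting that lxc profile set overwrites any existing raw.lxc value, so the other raw.lxc lines (apparmor, cap.drop, etc.) must be included in the same value:

  lxc profile set k8s raw.lxc "lxc.mount.auto=proc:rw sys:ro"
  lxc profile show k8s   # verify the change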
@justmeandopensource 4 years ago
Hi Julian, thanks for watching. Cool that you got it working on arm64 with CentOS 8, which I haven't tried. Yeah, my bootstrap scripts are designed for CentOS 7. Cheers.
@JulianBG 4 years ago
@@justmeandopensource I have a couple of questions that bother me:
- I'm an Arch user like you, so I don't have experience with CentOS, but why would you use the (quite) older CentOS 7 and not 8? Are there cons to using 8, or pros for 7?
- I saw a lot of warnings in the log (if I run the bootstrap command without the error redirections) about things that are not available or potentially failed. In the end the cluster is deployed, up and running, but those errors/warnings bother me. Do you have the same issues? Do you think they should be addressed?
- Last thing: do you have any good source on those proc:rw, sys:rw, cgroup:rw settings in the profile? I'm not talking about the official page with generic information, but a deeper explanation. For example, your config had sys:rw, which prevented network/IP allocation for the CentOS 8 container, so I had to change it to :ro (found through experiment, not by reading it somewhere), while sys:rw was working for CentOS 7?!
@justmeandopensource 4 years ago
@@JulianBG Hi, when I started this series CentOS 8 wasn't out. To be honest, you shouldn't really care much about the underlying OS — you just need a Linux distro that can run containers. You should be looking at using some kind of container OS that is designed specifically to run containers without any additional bloatware.
@kongawilly 2 years ago
Fantastic!!! Just add curl, openssh-server, gnupg2 and software-properties-common before step 2 — that made it work correctly for me!
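Spelled out, that amounts to running this inside each container before step 2 (assuming Ubuntu/Debian containers):

  apt-get update
  apt-get install -y curl openssh-server gnupg2 software-properties-common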
@justmeandopensource 2 years ago
Cool. I haven't used LXC for a very long time. I might get back to using it.
@kongawilly 2 years ago
@@justmeandopensource You do great work. Keep it up 😎😎😎
@justmeandopensource 2 years ago
Sure I will. Thanks.
@jlan421 5 years ago
Venkat, great video as always! One question, will the bootstrap script also work when setting up multi-master?
@justmeandopensource 5 years ago
Hey, thanks for watching this video. It won't work for a multi-master setup, unfortunately — I was using Kubespray for that. I also recently explored "Kubernetes the hard way" and made a couple of videos, which will be released in the coming weeks on Mondays. Thanks
@stevecorbin9102 4 years ago
Thanks for your video, it is great. It seems like the kubernetes cluster I created is working, but I do see a warning in systemctl status docker: msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted"
@justmeandopensource 4 years ago
Hi Steve, thanks for watching. I do see that as well, despite the cluster running fine.
@msahsan1 4 years ago
thanks
@justmeandopensource 4 years ago
You are welcome. Thanks for watching.
@pawansolanki20095 3 years ago
Hi Venkat, thanks for making this video. My host machine is:
Linux localkubenode-1 4.15.0-117-generic #118-Ubuntu SMP Fri Sep 4 20:02:41 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
and the guest machines are CentOS 7, as you have shown in the video. After running the command "cat bootstrap-kube.sh | lxc exec kmaster bash" I get:
Error: bash: line 10: containerd: command not found
Failed to restart containerd.service: Unit not found.
[TASK 2] Add apt repo for kubernetes
(23) Failed writing body
[TASK 3] Install Kubernetes components (kubeadm, kubelet and kubectl)
Failed to restart kubelet.service: Unit not found.
[TASK 4] Enable ssh password authentication
Failed to reload sshd.service: Unit not found.
[TASK 5] Set root password
[TASK 6] Install additional packages
mknod: '/dev/kmsg': Operation not permitted
[TASK 7] Pull required containers
[TASK 8] Initialize Kubernetes Cluster
[TASK 9] Copy kube admin config to root user .kube directory
mkdir: cannot create directory '/root/.kube': File exists
cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
[TASK 10] Deploy Flannel network
[TASK 11] Generate and save cluster join command to /joincluster.sh
@justmeandopensource 3 years ago
Hi Pawan, thanks for watching. This is an old video. I have recently updated my GitHub repo to work only with Ubuntu 20.04 lxc containers and containerd as the runtime, and I guess you are running CentOS 7 lxc containers. Please follow the documentation at github.com/justmeandopensource/kubernetes/tree/master/lxd-provisioning, which is also explained in this later updated video: kzbin.info/www/bejne/pJezl2Omf5aMgqs Cheers
@muralidhart4708 4 years ago
Great video, thanks a bunch for making it. I have a question: instead of a bridge network, can you create a macvlan network (on the same segment as the host network)? This would help in accessing the apps in the cluster from other machines, not just from the host machine. Please let me know if that is doable.
@justmeandopensource 4 years ago
Hi Murali, thanks for watching. Yes, you can use macvlan instead of a bridge to get IP addresses from the host network, so the services can be accessed from other devices on that network as well. But in order to use a macvlan type of network, you have to have Ethernet networking on your host machine — it won't work with a WiFi interface. Cheers.
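A rough sketch of the macvlan variant, assuming a profile named k8s and a host NIC called enp3s0 (adjust both to your setup):

  lxc profile device remove k8s eth0
  lxc profile device add k8s eth0 nic nictype=macvlan parent=enp3s0 name=eth0

Containers launched with this profile then take DHCP leases from the LAN. One known caveat: with macvlan the host itself usually cannot reach its own containers directly, even though other machines on the LAN can.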
@muralidhart4708 4 years ago
@@justmeandopensource Thanks for the reply. I am using an Ethernet network only — still no luck from the host network...
@justmeandopensource 4 years ago
@@muralidhart4708 Hmmm. Is this on your own machine or on some virtual machine in the cloud? If you are trying this in the cloud, it won't work as expected.
@muralidhart4708 4 years ago
@@justmeandopensource It's my own machine... Ubuntu 20.
@justmeandopensource 4 years ago
@@muralidhart4708 I have been wanting to do this for a long time, but I don't have an Ethernet port on my laptop. I need to buy a USB Ethernet adaptor and then I can demo it. Will do a video on it soon. Cheers.
@m.fauziislami2580 4 years ago
Thank you for creating this helpful video! I watched it and followed along, but my lxc containers still can't connect to the internet. How can I make them reach the internet? I have configured NAT on my bridge but it still doesn't work. Would you mind helping me with this issue?
@justmeandopensource 4 years ago
Hi Fauzi, thanks for watching. When you set up lxd using the lxd init command, it would have created a bridge (lxdbr0), attaching it to your primary network interface. I have done a video on getting started with lxd/lxc which might help you: kzbin.info/www/bejne/eYjQnIaglKdgrdE Check your current bridge using "lxc network show <network-name>" — usually lxdbr0 if you have gone with the defaults. The ipv4.nat key should be set to true. Or try creating a new network using "lxc network". The following post might also help you: stgraber.org/2016/10/27/network-management-with-lxd-2-3/ Cheers.
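For example, assuming the default lxdbr0 bridge:

  lxc network show lxdbr0             # ipv4.nat should read "true"
  lxc network set lxdbr0 ipv4.nat true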
@ajit555db 5 years ago
Just spotted "Ilaiyaraaja". I don't know Tamil, but some of my favourite Hindi songs are by him :)
@justmeandopensource 5 years ago
Hmm, good spot. kzbin.info/www/bejne/joLZgoh7bNl0qc0 Yeah, a famous musician. I tend to use Chrome in incognito mode so that I don't expose things like my search and watch history, bookmarks and so on :) Missed it this time. You would have also noticed the lullaby video in the history and concluded that I must have a kid — and you are right. :)
@sandeeprohilla83 5 years ago
Couldn't find videos 28 and 29.
@justmeandopensource 5 years ago
@@sandeeprohilla83 28 and 29 were recorded earlier, but I decided to release them later as they weren't very important. Those two videos are about deploying an app in Google Kubernetes Engine (GKE). I think they are scheduled for May 1st and 8th. Thanks, Venkat
@deventiwari2000 5 years ago
Hi Venkat, excellent video!!! I was able to get the kubernetes master and cluster created. Functions like pod creation and deploys are working OK, but I am not able to invoke a service from another pod. I tried a NetworkPolicy as well, but it didn't help. If I invoke the pod endpoint IP directly, it works. For example, busybox calling an nginx deployment's service doesn't work. Please advise!
@justmeandopensource 5 years ago
Hi, thanks for watching this video. I will try it in my cluster and post the commands and outputs in a pastebin link. Stay tuned. Cheers.
@justmeandopensource 5 years ago
Hi, I just tried it in my LXC cluster and all seems to be working as expected. What I tried:
1. Deploy an nginx pod
2. Expose it as a ClusterIP service
3. Create another pod (eg: a debian container)
4. Try to ping the nginx service (nginx.default.svc.cluster.local) - ping won't respond
5. Try to ping the ClusterIP of the nginx service - ping won't respond
6. Access the service endpoint by DNS name - it will work
You won't be able to ping the ClusterIP of any service; you will only be able to access the service endpoint. Please check the link below for the commands I tried: pastebin.com/aAVZW57Z Thanks.
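If anyone wants to reproduce those six steps, a minimal sketch (image and pod names are just examples):

  kubectl create deployment nginx --image=nginx
  kubectl expose deployment nginx --port 80
  kubectl run -it debian --image=debian --restart=Never -- bash
  # inside the debian pod:
  #   ping nginx.default.svc.cluster.local   -> no replies (ClusterIPs are virtual, not pingable)
  #   curl nginx.default.svc.cluster.local   -> returns the nginx welcome page
  #   (install curl first if needed: apt update && apt install -y curl)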
@jaksvelpin8077 4 years ago
LXC containers — well, how do you save the k8s settings that were made while the LXC containers were running? If you power down, will all data in the containers be erased, so that you need to do the whole setup again? I think that would be a big disadvantage of using LXC containers, or any containers. Maybe there is a way to save the changes made in the containers. In another video you showed how to save the state of machines (Setup Kubernetes Cluster with Vagrant). For example, after starting LXC we also install Istio on our k8s cluster; after shutting down LXC, do we then need to install Istio again with all its components, Kiali etc.?
@justmeandopensource 4 years ago
Hi Jaks, thanks for watching. You can set up your cluster to your liking and stop and restart it anytime you want. Please refer to the video below for a hack that makes restarting the cluster possible: kzbin.info/www/bejne/h5OQpINqlrJjha8 Cheers.
@StEvUgnIn 4 years ago
Just read the Docker documentation to learn how to save state.
@AshutoshKumar-ue3dr 4 years ago
What is the point of running the Docker containers inside an LXC container? You could have set up K8s directly with Docker on the host machine.
@justmeandopensource 4 years ago
Hi, thanks for watching. As the video title says, this is to show that a kubernetes cluster can be provisioned on LXC containers. LXC containers are system containers, unlike Docker containers, which are application containers — hope you understand the difference. This will help those who haven't got enough resources on their laptop/workstation to run a Kubernetes cluster, and it's an opportunity to learn LXC containers, which is cool.
@AshutoshKumar-ue3dr 4 years ago
Got it, thanks. Although I already knew the difference between LXC and Docker, you explained it in a cool way. Thanks for making such wonderful videos. 😊
@ryanbell85 3 years ago
What is Docker's role in this configuration? Is it only being used as the application repository?
@justmeandopensource 3 years ago
Hi Ryan, thanks for watching. In this video I just demonstrated that you can use LXC containers for your Kubernetes nodes. Once I started the LXC containers, I set up a kubernetes cluster as usual with Docker as the container runtime. So Kubernetes running on the LXC containers uses Docker for the containers.
@ryanbell85 3 years ago
@@justmeandopensource Thanks for this comment and your most recent video on using a different runtime. I'm trying to get started with Kubernetes, but it's tough to keep up with all the recent changes around Docker.
@justmeandopensource 3 years ago
@@ryanbell85 I understand.
@automationlearner2253 3 years ago
Hi, I followed your video and set everything up exactly, but after setting up both nodes I ran lxc list and didn't find flannel running on the worker node. I debugged the issue and found that the join command was not working and the kubelet service was not starting. I have spent a lot of hours searching Google for a solution, but nothing has worked for me — please can you help me with this? I also followed your "Kubernetes the hard way" video and set up almost everything, but the same problem happened there: kubelet kept auto-restarting again and again, so I was not able to get the nodes from the host. I need to get this set up, and then I would follow your ELK video and set up the ELK stack. I have been doing all the setup on VirtualBox instances using LXD containers, because I have a Windows host. I really love LXD and want to do at least one setup of k8s with it. Please reply if you see this — I have been trying to reach you everywhere, including Twitter. I am new to k8s and cloud-native things, please help me.
@justmeandopensource 3 years ago
Hi, thanks for watching. What's the storage backend you are using in lxd? If you are using Btrfs, the kubelet service won't work without a hack. I am using the "dir" storage backend and k8s runs fine without any issues. Cheers.
@automationlearner2253 3 years ago
@@justmeandopensource I have been using the dir storage pool.
@Peter1215 4 years ago
Great vid, thanks! I was trying this setup on VirtualBox, but I noticed that after I power off the VM (vagrant halt) and bring it back up with vagrant up, the lxc containers are there but kubernetes is not running. I have to run the provisioning script again, and I lose all my data in the process. Could you help me triage this issue?
@justmeandopensource 4 years ago
Hi Piotr, thanks for watching. I usually use this setup as a one-off provisioning to test something, and then destroy it at the end. I also used lxc snapshots to store the state of the containers and restore from a snapshot when I want it back.
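The snapshot workflow is short — a sketch, assuming a container named kmaster:

  lxc snapshot kmaster clean-state   # take a snapshot
  lxc info kmaster                   # lists existing snapshots
  lxc restore kmaster clean-state    # roll back to it later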
@Pallikkoodam94 5 years ago
Thank you Venkat. I am getting the following message while doing vagrant up:
==> ubuntuvm01: Configuring and enabling network interfaces...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
/sbin/ifdown eth1 2> /dev/null
Stdout from the command:
Stderr from the command:
mesg: ttyname failed: Inappropriate ioctl for device
And once I enter the VM and list lxd, I get this error:
root@ubuntuvm01:/home/vagrant# lxd list
EROR[10-31|16:29:32] Failed to start the daemon: LXD is already running
@justmeandopensource 5 years ago
Hi Ajeesh, thanks for watching this video. It's "lxc list", not "lxd list". Have you installed and configured lxd in the Ubuntu VM? Only then will lxc list work. Thanks.
@Pallikkoodam94 5 years ago
@@justmeandopensource Oops, my bad.
@justmeandopensource 5 years ago
@@Pallikkoodam94 No worries.
@mihaimyh 5 years ago
Hi again — what widget are you using for displaying your system load, processors, memory and network info? The one above your camera?
@justmeandopensource 5 years ago
Hi Mihai, that's Conky, configured to show various system utilisation metrics. You can install conky; there are lots of conky config files (.conkyrc) you can find online. Thanks, Venkat
@mihaimyh 5 years ago
@@justmeandopensource Thanks for your quick answer. Your configuration looks awesome — is there any chance you could share it?
@justmeandopensource 5 years ago
@@mihaimyh I will upload it to my github repo and share the link. Thanks.
@justmeandopensource 5 years ago
Here you go: pastebin.com/rcXUpxFq (link valid for 24 hours). You might need to customize it for your needs — it is tuned for 1080p resolution, although my monitor is a high-resolution 4K one. Please play with it as a starting point, and consider using a dark background like mine. Thanks, Venkat
@helioay 4 years ago
Hi Venkat, nice video. I have followed it but I am having a problem starting Docker in the lxd/lxc container. I have already tried CentOS 7 and Ubuntu hosts with CentOS 7 and Ubuntu lxc containers, but in none of them does Docker come up, and it always shows the same error:
Dec 04 12:30:24 kmaster dockerd[2086]: time="2019-12-04T12:30:24.667818401Z" level=error msg="There are no more loopback devices available."
Dec 04 12:30:24 kmaster dockerd[2086]: time="2019-12-04T12:30:24.667876055Z" level=error msg="[graphdriver] prior storage driver devicemapper failed: loopback attach failed"
Dec 04 12:30:24 kmaster dockerd[2086]: failed to start daemon: error initializing graphdriver: loopback attach failed
Dec 04 12:30:24 kmaster systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Dec 04 12:30:24 kmaster systemd[1]: docker.service: Failed with result 'exit-code'.
Dec 04 12:30:24 kmaster systemd[1]: Failed to start Docker Application Container Engine.
Do you have an idea/tip of what is going on? Thanks.
@justmeandopensource 4 years ago
Hi, thanks for watching this video. If you scroll through the comments section of this video, there is a conversation with Sam Nickerson, who was having the exact same problem — please read through his comments. I think it is to do with the way you install LXD. Did you use snap or the default package manager? Maybe try a different version of LXD. Bear in mind that the latest Kubernetes version doesn't work very well on LXC containers; I used it until kubernetes version 1.14.3, after which it never worked and I had to give up. Cheers.
@helioay 4 years ago
@@justmeandopensource Thanks for your response. I read the conversation with Sam, ran some tests, and found a few conditions where it works and others where it does not. My setup was not working: an Ubuntu LXD/LXC host where LXC is configured to use ZFS as storage — with that combination Docker does not come up in the LXC container. I created a new LXC storage pool on a directory (dir), and when I launch an LXC container using the new dir storage, Docker comes up in the container — so it's definitely an issue related to ZFS storage. I also tried a CentOS host for LXD/LXC, and no combination of ZFS or dir storage works there (Docker does not come up in the container), so there might also be an issue with CentOS as the LXD/LXC host. Thanks.
@justmeandopensource 4 years ago
@@helioay I am not using LXC containers anymore for my Kubernetes cluster — I couldn't get it working for Kubernetes v1.15 and above. However, older versions of Kubernetes should work fine. Since LXC shares the host kernel, you have to pay attention to the host kernel: make sure it is newer than the OS you are trying to run in the container. For example, if your host machine runs Ubuntu 14 and you try to run an Ubuntu 18 container, it won't work very well; but with an Ubuntu 18 host you can run an Ubuntu 14 container. You can also cross-run OSes — on an Ubuntu host you can run a CentOS container and vice versa — but the kernel version on the host machine should be higher than what the container expects. Thanks.
@helioay 4 years ago
@@justmeandopensource I could get Kubernetes 1.16 working:
$ kubectl version --short
Client Version: v1.16.3
Server Version: v1.16.3
$ kubectl get nodes
NAME            STATUS   ROLES                      AGE     VERSION
192.168.1.191   Ready    controlplane,etcd,worker   5m24s   v1.16.3
192.168.1.192   Ready    controlplane,etcd,worker   5m24s   v1.16.3
192.168.1.193   Ready    controlplane,etcd,worker   5m24s   v1.16.3
I am using rke to install, and it required only two workarounds:
> mount --make-shared /
> mknod /dev/kmsg c 1 11
With those, the rke installation worked perfectly.
@justmeandopensource 4 years ago
@@helioay Wow, that's great.
@gauravvij137 3 years ago
If I run k8s inside LXD as a worker node, I am unable to reach it from the master node because LXD is behind NAT. Is there a way to solve this?
@krishnavamsij 4 years ago
My understanding is that kubelet runs on every worker node and takes instructions from the API server. So why does it need to run on the master as well? Is it a node agent that runs on all nodes, including the master?
@justmeandopensource 4 years ago
Hi Vamsi, thanks for watching. The master node is also a node where you can run pods, so the kubelet component needs to run there too. However, in general practice you will have a taint on master nodes to prevent them from running any of your workloads other than the kubernetes system pods. Cheers.
@bonzcloud 4 years ago
Another thing: have you tried this setup with microk8s? I was trying with microk8s and it did not work out for me.
@justmeandopensource 4 years ago
Hi Joy, thanks for watching. This video is about provisioning a k8s cluster in LXC containers. Microk8s is self-contained — you install and run it on your local machine. I am not sure how to try this with microk8s.
@vincentnambatac160 3 years ago
Only the kmaster is running; the kworker won't connect to the kmaster, and kubectl on the kworker is not working either. Any help with this?
@tsingh7491 4 years ago
Thanks for the tutorial @Just me. Would these steps be different for snap lxd? I am trying to lab this up, but I ran into issues editing the profiles with the lxd install from your previous video, so I switched to the snap builds. No matter what changes I make, I keep getting the same error:
Config parsing error: yaml: line 21: did not find expected key
Press enter to open the editor again
@justmeandopensource 4 years ago
Hi, thanks for watching. I indeed used Snap LXD for all my tutorials.
@tsingh7491 4 years ago
@@justmeandopensource Ah OK — the build from your video was showing 3.0.3, and after installing the snap package it is 3.21. All good. Any idea about this error? No matter what changes I make, I keep getting the same one:
Config parsing error: yaml: line 21: did not find expected key
Press enter to open the editor again
@tsingh7491 4 years ago
Looks like the first line of the k8s profile should be "config:" instead of "config: {}". Maybe a change in newer versions.
@justmeandopensource 4 years ago
@@tsingh7491 Can you paste the config on pastebin.com and share it? I can take a look.
@justmeandopensource 4 years ago
@@tsingh7491 Ah, I see. Yeah, "config: {}" is an empty config, and you have to remove the braces before adding keys under it.
@sriharshavallabaneni6542 5 years ago
Hello @Just me and Opensource. Thank you for your videos, they are really helpful. I have a question regarding lxc containers. I am using my Mac to launch a vagrant Ubuntu VM. The issue is that I am unable to ping the containers from my Mac, due to which I am unable to test any HA or ingress networking from my Mac. The following is the k8s profile I am using to launch kworker/kmaster:
{code}
config:
  limits.cpu: "2"
  limits.memory: 2GB
  limits.memory.swap: "false"
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
  raw.lxc: "lxc.apparmor.profile=unconfined
    lxc.cap.drop=
    lxc.cgroup.devices.allow=a
    lxc.mount.auto=proc:rw sys:rw"
  security.nesting: "true"
  security.privileged: "true"
description: K8s LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: k8s_bkp
used_by:
- /1.0/containers/kmaster
- /1.0/containers/kworker1
- /1.0/containers/kworker2
{code}
The interfaces on the Ubuntu VM:
{code}
2: enp0s3: mtu 1500 qdisc fq_codel state UP group default qlen 1000
   link/ether 02:a9:3e:be:bb:aa brd ff:ff:ff:ff:ff:ff
   inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
3: enp0s8: mtu 1500 qdisc fq_codel state UP group default qlen 1000
   link/ether 08:00:27:8c:32:d9 brd ff:ff:ff:ff:ff:ff
   inet 172.42.42.101/24 brd 172.42.42.255 scope global enp0s8
4: lxdbr0: mtu 1500 qdisc noqueue state UP group default qlen 1000
   link/ether ea:8b:22:b5:03:a9 brd ff:ff:ff:ff:ff:ff
   inet 10.251.251.1/24 scope global lxdbr0
{code}
And the containers:
{code}
| kmaster  | RUNNING | 10.251.251.244 (eth0) | PERSISTENT |
| kworker1 | RUNNING | 10.251.251.204 (eth0) | PERSISTENT |
| kworker2 | RUNNING | 10.251.251.159 (eth0) | PERSISTENT |
{code}
I am able to ping enp0s8 (172.42.42.101) from my Mac, but pinging 10.251.251.244 errors out.
@justmeandopensource 5 years ago
Hi Sriharsha, thanks for watching this video. I haven't tried your setup, and I don't think you will be able to access the containers directly from your Mac. The LXC containers are connected to the lxdbr0 bridge, which is inside your Ubuntu VM. Although your containers can talk via the bridge to the outside world, it isn't possible the other way round. Why don't you try installing and using lxc on your Mac instead of through an Ubuntu VM? brew install lxc Thanks.
@sriharshavallabaneni6542 5 years ago
@@justmeandopensource I read somewhere that lxc won't work on a Mac, hence I installed Ubuntu using vagrant. As you say lxc will work on a Mac, I will try installing it on the Mac directly. Will keep you posted.
@justmeandopensource 5 years ago
@@sriharshavallabaneni6542 Sure. Meanwhile, when I get some time, I will see if I can find a solution.
@jagtaruskha 5 years ago
Venkat, this is merely a confirmation about the bash file configuration: ONLY 1.14.1 worked — not even 1.14.3 works, due to a cgroups issue.
@justmeandopensource 5 years ago
Hmm. The last successful version I used and have been using is 1.14.3.
@sajjadhosseinzadeh 11 months ago
Dear Venkat, is it possible to use Longhorn on a kubernetes cluster whose nodes were created with LXC? I'm trying to do that, but the volumes remain in Attaching status and cannot be attached. However, when I create a volume from the Longhorn UI and choose iscsi for the frontend field, the volume can be attached. The default value for the frontend field is "Block dev"; it seems we should change its default value to iscsi. Is that possible? If yes, how? Thanks
@dhananjaypatangay4029 5 years ago
Hi Venkat, how do you expose the lxc containers to the host of the vagrant machine? Also, how do you set up the Kube UI?
@justmeandopensource 5 years ago
Hi DP, thanks for watching this video. It involves a little bit of networking. You have a vagrant virtual machine, and inside that virtual machine you are running LXC containers; now you want to access the LXC containers from the host machine that is running the vagrant virtual machine. I haven't tried that — my host machine is itself a Linux machine where I installed LXC, so I can access the containers directly. In your case, the vagrant virtual machine will be on a private bridge network, and the containers inside it will use the lxd bridge to access the outside world. While installing LXD, during the lxd init process, you have to answer yes to the question below:
Would you like LXD to be available over the network (yes/no) [default=no]?
Then you can install the lxc command on your host machine and access the lxd daemon inside your virtual machine. I know it sounds complicated — there are lots of articles on the internet explaining this networking. Thanks.
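On LXD 3.x/4.x, answering yes to that prompt is roughly equivalent to the following (addresses, port and password are examples only; newer LXD versions use trust tokens instead of a trust password):

  # inside the vagrant VM
  lxc config set core.https_address '[::]:8443'
  lxc config set core.trust_password mysecret
  # on the outer host
  lxc remote add myvm <vm-ip> --password mysecret
  lxc list myvm: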
@dhananjaypatangay4029 5 years ago
@@justmeandopensource Thanks for the super-fast reply, Venkat! Also, how do I set up the Kube UI and swap flannel for Calico? And can I execute kubectl commands from my host, which is a Mac?
@justmeandopensource 5 years ago
Hi, the latest version of Kubernetes is 1.16.0, but since version 1.15.4 I have had problems using LXC containers for nodes. The last version of kubernetes that I could provision using LXC containers is 1.14.3. If you don't mind using vagrant virtual machines as your nodes, you can clone my GitHub repository github.com/justmeandopensource/kubernetes, cd into kubernetes/vagrant-provisioning and then do vagrant up. Since k8s version 1.16.0, I changed the default overlay network from flannel to calico, so once you do vagrant up, within about 10 minutes you will have a k8s v1.16.0 cluster with the calico network. Thanks.
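Spelled out, that is:

  git clone https://github.com/justmeandopensource/kubernetes.git
  cd kubernetes/vagrant-provisioning
  vagrant up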
@justmeandopensource 5 years ago
Forgot to mention that you can use the kubectl command on your Mac: kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-macos Thanks.
@marloncesar4573 3 years ago
Does LXC + containerd (without Docker) work? I'm having trouble.
@justmeandopensource 3 years ago
I have it successfully running. Not tested very recently though. github.com/justmeandopensource/kubernetes/tree/master/lxd-provisioning
@marloncesar4573 3 years ago
@@justmeandopensource Your videos are awesome. Thanks!
@justmeandopensource 3 years ago
@@marloncesar4573 Thank you.
@VinuezaDario 4 years ago
Hi, which is most recommended for a production environment: a Kubernetes cluster using LXC containers, or a Kubernetes cluster using kubeadm? Thanks for your contributions.
@justmeandopensource 4 years ago
Hi Darvin, you asked about Kubernetes using LXC versus Kubernetes using kubeadm — these are different things. LXC is the node and kubeadm is the provisioning method. You can run kubernetes nodes on containers, virtual machines or physical servers. In this video I chose to use lxc containers, and I used the kubeadm method to install the cluster.
@VinuezaDario 4 years ago
@@justmeandopensource My problem is that I have a kubernetes installation with 3 virtual machines [master, worker1, worker2], but when customers use the application only one worker does any work and the application gets very slow. That's why I asked which installation is most recommended — LXC or virtual machines.
@justmeandopensource 4 years ago
@@VinuezaDario The installation method doesn't matter — it's the design of your application. If you think there is load on your application, then you have to increase the replicas. What do you mean when you say only one worker works? How is your application deployed, and how are you exposing it to your customers?
@VinuezaDario 4 years ago
@@justmeandopensource Hi, thanks for helping me.
"What do you mean when you say only one worker works?"
I have 3 virtual machines installed [master, worker1, worker2] on CentOS 7:
[root@masterpruebas webui-nms-test]# kubectl get all -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP            NODE
pod/webui-7c76ccc95-hlbdq   1/1     Running   0          7h53m   10.244.2.25   worker2pruebas
pod/webui-7c76ccc95-rhbb6   1/1     Running   0          7h53m   10.244.1.25   worker1pruebas
But this is the result on each worker — only worker2 is doing any work:
worker1:
# free -h
       total  used  free  shared  buff/cache  available
Mem:   11G    905M  9.6G  12M     1.1G        10G
Swap:  0B     0B    0B
worker2:
# free -h
       total  used  free  shared  buff/cache  available
Mem:   11G    9.4G  324M  12M     1.8G        1.9G
Swap:  0B     0B    0B
"How is your application deployed and how are you exposing it to your customers?"
I expose it with a LoadBalancer service:
apiVersion: v1
kind: Service
metadata:
  name: webui
  labels:
    app: webui
    track: stable
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 8877
  - name: https
    port: 443
    targetPort: 4443
  selector:
    app: webui
    track: stable
  sessionAffinity: ClientIP
  externalIPs:
  - 172.16.11.140
@justmeandopensource 4 years ago
@@VinuezaDario Is this cluster in the cloud or on your local machine? And what makes you say only worker2 works?
@sarfarazshaikh 5 years ago
Are you using Intel or AMD? Can you provide your system hardware details?
@justmeandopensource 5 years ago
Hi Sarfaraz, thanks for watching this video. I use a Dell XPS 13 9370 laptop with 16GB RAM and an 8th-gen Intel Core i7 CPU.
@bonzcloud 4 years ago
I tried the same setup with "lxc launch images:centos/7 kworker1 --profile microk8s", and it kicked me out of the host. The container was running, but I did not understand why it kicked me out of the host machine and logged me off. Any idea? It happened both times, for the master and the worker.
@justmeandopensource 4 years ago
Hi Joy, thanks for watching. What I noticed was that, especially when launching an LXC container, the CPU load on my host machine shoots up so high for a few seconds that my system freezes, and then everything returns to normal after 3 or 4 seconds. It depends on your system resources.
@alexiaowang 5 years ago
Hello, nice vid! I have tried your profile settings and use most of the bootstrap script, but I join a remote cluster manually instead. When I join, I get the following errors — any ideas? I tried to include bridge and netfilter in the profile, but I still cannot get /proc/sys/net/bridge/bridge-nf-call-iptables, and I have no idea about the second error.
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[ERROR SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "", err: exit status 1
@justmeandopensource 5 years ago
Hi Shawn, apologies for the delay — I missed this comment. A Kubernetes cluster in an LXC environment is not production-ready, so expect to see a few issues. I haven't seen the errors you mentioned, though I have seen different ones. I believe you are getting this error during the kubeadm init command; you can pass "--ignore-preflight-errors=all" to ignore all errors. If you look at my bootstrap script (line 60), I have ignored both the errors you pointed out, because I know we have a problem with them in LXC containers. github.com/justmeandopensource/kubernetes/blob/master/lxd-provisioning/bootstrap-kube.sh Thanks
@mihaimyh 5 years ago
If I would like my lxc containers to have IPs from the same network as the host, what should I do?
@justmeandopensource 5 years ago
Please check the links below:
blog.simos.info/how-to-make-your-lxd-container-get-ip-addresses-from-your-lan/
blog.simos.info/how-to-make-your-lxd-containers-get-ip-addresses-from-your-lan-using-a-bridge/
Hope it helps. Thanks
@shravansingh8609 5 years ago
Sir, why do you use the vagrant up command in this video? You are using an Ubuntu machine, so what is the use of the vagrant up command?
@justmeandopensource 5 years ago
Hi Shravan, thanks for watching this video. Starting at 3:11, I showed my environment and explained what I would be doing. My host machine runs the Arch Linux-based Manjaro distribution. I used the vagrant up command to bring up an Ubuntu virtual machine, so the entire demo of lxc/lxd containers is done inside that Ubuntu virtual machine. If you are practising on an Ubuntu machine, you can skip the vagrant up command. Hope this makes sense; if not, let me know. Thanks, Venkat
@shravansingh8609 5 years ago
@@justmeandopensource Thanks for the answer. I actually use an Ubuntu machine, which is why I was confused about whether I should use the vagrant up command or not.
@justmeandopensource 5 years ago
You can just follow from 6:00 then. Thanks.
@vamsijakkula5683 4 years ago
Venkat, post-restart I'm not able to get the cluster running again. Any reason why?
@justmeandopensource 4 years ago
Hi Vamsi, thanks for watching. Please see this video for a possible solution: kzbin.info/www/bejne/h5OQpINqlrJjha8 Cheers.
@JohnSmithExtra 5 years ago
Have you tried using Ubuntu 18.04.2 with LXC 3.15 and kubelet 1.15? I can't seem to turn off swap to allow kubelet to run, so that I can do "kubeadm init".
@justmeandopensource 5 years ago
Hi John, thanks for watching this video. Yeah, I know about that problem. At the moment 1.15.0 doesn't work in the LXC environment, and you won't be able to turn off swap inside the lxc container — it can only be done on the parent host running the lxc container. But you don't have to disable swap: you can ignore the errors during the kubeadm init command by passing certain parameters. Please check my GitHub repo for the bootstrap script. Thanks.
@JohnSmithExtra 5 years ago
@@justmeandopensource Yeah, it's not working. Kubelet won't start even with this: adding Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false" to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf doesn't work.
@JohnSmithExtra 5 years ago
Even putting --fail-swap-on=false in /etc/default/kubelet doesn't generate /var/lib/kubelet/config.yaml with failSwapOn: false.
@justmeandopensource 5 years ago
@@JohnSmithExtra Yes, I am still trying to figure out how to resolve this. The fail-swap-on parameter can no longer be passed via KUBELET_EXTRA_ARGS; it has to be in a configuration file passed to kubelet via the --config option. You can check the link below. I somehow managed to get past that by creating the config file, but had a few more errors further down the line. kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ Until I figure out a solution, I have locked my bootstrap script to install kubernetes v1.14.3. Thanks.
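For reference, the config-file form of that setting looks roughly like this (a sketch — kubeadm normally generates /var/lib/kubelet/config.yaml itself during init, so hand-editing it may be overwritten):

  # /var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  failSwapOn: false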
@rajeshrajendran5218 5 years ago
Hi, thank you for the video. But does this method have any problem with Arch + Btrfs? For whatever reason, kubelet is not starting at all.
Error if I run lxc on Arch (with btrfs):
Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 26 in cached partitions map
And if I run an Ubuntu vagrant VM and then lxc:
Failed to start ContainerManager [open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied, open /proc/sys/vm/overcommit_memory: permission denied]
Any thoughts?
@justmeandopensource 5 years ago
Hi Rajesh, thanks for watching this video. I used "dir" as the storage backend in this video. I haven't tried btrfs, as I remember seeing some issues with it — not just on Arch Linux but in general. But you have to understand what the error is and try to resolve it. It is definitely a supported backend, and there is no reason why you shouldn't use btrfs. Thanks.
@rajeshrajendran5218 5 years ago
@@justmeandopensource I did the same: host machine vanilla Arch, filesystem btrfs. I think this is an issue with btrfs — even my friends running Ubuntu 18.04 with btrfs had this issue.
@justmeandopensource 5 years ago
@@rajeshrajendran5218 Hmm. When I get some time I will try using btrfs as the storage backend and see if I get the same error. Cheers.
@samnickerson9255 4 years ago
Great video as always, but it is failing for me. Does it REQUIRE running in a VM? I installed LXD on a CentOS 7 server and ran your steps as if I were in a VM. Tasks 2 and 12 failed on the master, and Docker fails to come up. I deleted the master, recreated a new LXC container, and tried to install Docker from the command line within the container — it fails as well. The host machine is CentOS 7 running LXC/LXD, with no hypervisor in play other than LXC/LXD. It may have something to do with cgroups:
Dec 02 22:25:01 kmaster dockerd[1882]: time="2019-12-02T22:25:01.018629866Z" level=warning msg="Your kernel does not support cgroup memory limit"
Dec 02 22:24:57 kmaster dockerd[1661]: Error starting daemon: Devices cgroup isn't mounted
[root@kmaster ~]# systemctl start docker
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
[root@kmaster ~]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Mon 2019-12-02 22:25:01 UTC; 21s ago
     Docs: docs.docker.com
  Process: 1882 ExecStart=/usr/bin/dockerd (code=exited, status=1/FAILURE)
 Main PID: 1882 (code=exited, status=1/FAILURE)
Dec 02 22:25:01 kmaster systemd[1]: Failed to start Docker Application Container Engine.
Dec 02 22:25:01 kmaster systemd[1]: Unit docker.service entered failed state.
Dec 02 22:25:01 kmaster systemd[1]: docker.service failed.
Dec 02 22:25:01 kmaster systemd[1]: docker.service holdoff time over, scheduling restart.
Dec 02 22:25:01 kmaster systemd[1]: Stopped Docker Application Container Engine.
Dec 02 22:25:01 kmaster systemd[1]: start request repeated too quickly for docker.service
Dec 02 22:25:01 kmaster systemd[1]: Failed to start Docker Application Container Engine.
Dec 02 22:25:01 kmaster systemd[1]: Unit docker.service entered failed state.
Dec 02 22:25:01 kmaster systemd[1]: docker.service failed.
@justmeandopensource 4 years ago
Hi Sam, how did you install Docker? Just remove Docker completely and try the commands below:
# curl -fsSL get.docker.com | sh
# systemctl start docker
Same issue as yours: github.com/moby/moby/issues/29260
Or try it on a different machine. Thanks
@samnickerson9255 4 years ago
@@justmeandopensource Thanks for the reply; it still fails. So odd — I am using the same container as you and it doesn't work. Is it, as an LXC container, trying to use something the CentOS host kernel doesn't have? Also of note: the host machine (CentOS 7) runs Docker fine.
[root@kmaster ~]# systemctl start docker
Job for docker.service failed because start of the service was attempted too often. See "systemctl status docker.service" and "journalctl -xe" for details. To force a start use "systemctl reset-failed docker.service" followed by "systemctl start docker.service" again.
This was noted:
Dec 03 16:54:00 kmaster yum[1344]: Installed: 2:container-selinux-2.107-3.el7.noarch
Dec 03 16:54:04 kmaster systemd[1]: Reloading.
Dec 03 16:54:04 kmaster yum[1344]: Installed: containerd.io-1.2.10-3.2.el7.x86_64
Dec 03 16:54:05 kmaster systemd[1]: Failed to duplicate autofs fd: Bad file descriptor
Also noted:
Dec 03 16:54:45 kmaster dockerd[1645]: failed to start daemon: Devices cgroup isn't mounted
Dec 03 16:54:45 kmaster systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 03 16:54:45 kmaster systemd[1]: Failed to start Docker Application Container Engine.
@samnickerson9255 4 years ago
UPDATE: I spun up an Ubuntu VM, jumped into it, brought up an LXC container, and Docker works fine there. It MUST be something with the host's CentOS kernel. Rebuilding the host with Ubuntu 18.04.
@justmeandopensource 4 years ago
@@samnickerson9255 Cool
@samnickerson9255 4 years ago
@@justmeandopensource Unfortunately it results in the same error. Your instructions work if performed in a VM, but on the host without a VM it fails. Frustrating. I performed a clean install of Ubuntu 18.04 on a Dell PowerEdge 720; lxc/lxd run fine, but Docker still will not run inside the lxc container.
@donaldfrench3696 4 years ago
I like your courses; however, the ones I used your scripts for have failed. I am working on Kube 30, using your two scripts with an Ubuntu host and centos/7 containers. The Kubernetes configuration fails because there is no DNS, so the container cannot access anything outside of itself. The displayed log is:
[TASK 1] Install docker container engine
[TASK 2] Enable and start docker service
Failed to start docker.service: Unit not found.
[TASK 3] Add yum repo file for kubernetes
[TASK 4] Install Kubernetes (kubeadm, kubelet and kubectl)
[TASK 5] Enable and start kubelet service
[TASK 6] Install and configure ssh
sed: can't read /etc/ssh/sshd_config: No such file or directory
[TASK 7] Set root password
[TASK 8] Install additional packages
[TASK 9] Initialize Kubernetes Cluster
[TASK 10] Copy kube admin config to root user .kube directory
cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
[TASK 11] Deploy flannel network
[TASK 12] Generate and save cluster join command to /joincluster.sh
I went in to look at ssh first and attempted to install it. There is no DNS response: I pinged google.com and got nothing, but I pinged 8.8.8.8 and got an immediate response. I did notice that your video used newer versions of the code than the GitHub repo, and I attempted it with the updated versions to match your video. All I want is a 2-node k8s cluster for home/office testing.
@justmeandopensource 4 years ago
Hi Donald, thanks for watching. I use this script every day, and my Kubernetes cluster is based exactly on this video — nothing has really changed. In your case, something is wrong with the lxc containers. I can see TASK 2 failed to start the docker service, but the actual problem is in TASK 1: I guess docker didn't get installed at all. Please modify the bootstrap script, remove all the ">/dev/null 2>&1" redirections, and run it again. You will then see the full output and where it is failing. Make sure your lxd environment is set up properly and that the containers get an IPv4 address and can communicate with the internet. Cheers.
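A quick, hypothetical one-liner to strip all of those redirections at once (review the script afterwards, as the exact spacing of the redirection string may vary):

  sed -i 's|>/dev/null 2>&1||g' bootstrap-kube.sh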
@kolluruchakresh8296 5 years ago
I'm getting this error when I run kubeadm init --ignore-preflight-errors=all — can you please help with this?
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL localhost:10248/healthz' failed with error: Get localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
@justmeandopensource 5 years ago
Hi Kolluru, thanks for watching this video. I believe you are not using the latest code from my GitHub repo. Originally, the bootstrap code for the LXC environment installed the latest kubeadm and kubelet, which worked fine until Kubernetes v1.15.0 was released. I couldn't get it working with v1.15.0 — I got the same error you mentioned — and I am still trying to find the issue. I have since updated the bootstrap script to install Kubernetes v1.14.3. So if you do a fresh clone of github.com/justmeandopensource/kubernetes, it will work. Please give it a try and let me know if it worked. Thanks
@shravansingh8609 5 years ago
Hey Venkat, I created the master and worker containers, and when I enter root mode on the master and run the nproc command it shows only 1 processor. How can I increase it to 2 processors?
@justmeandopensource 5 years ago
Hi Shravan, how many processors does your host machine have? If you haven't restricted the number of CPUs in the profile, you will see the same number of CPUs as on your host machine. Thanks
@shravansingh8609 5 years ago
@@justmeandopensource Actually, I did not restrict it at the start.
@justmeandopensource 5 years ago
@@shravansingh8609 So I believe your host machine has just 1 CPU, and that is what is reflected inside the LXC container.
@shravansingh8609 5 years ago
@@justmeandopensource Suppose I want to change the number of CPUs — how can I do that?
@justmeandopensource 5 years ago
Hi Shravan, please answer my questions below:
1. How many CPUs have you got in the host machine?
2. How many CPUs are showing up in the LXC container?
3. How many CPUs do you want in the LXC container?
Thanks
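Once the counts are known, the change itself is a one-liner — a sketch, assuming a container named kmaster and a profile named k8s:

  lxc config set kmaster limits.cpu 2   # existing container
  lxc profile set k8s limits.cpu 2      # containers launched with the profile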
@techiejaybhatt3471 5 years ago
Hello, after the lxd init instruction I am getting this error: "Error: Failed to create network 'lxdbr0': Failed to automatically find an unused IPv6 subnet, manual configuration required". How can I set up the IPv6 subnet manually? Can you help me with this?
@justmeandopensource 5 years ago
Hi Jay, thanks for watching this video. This error is specific to IPv6. I usually disable it when running lxd init by setting the option to none/no instead of auto. Unless you want to use IPv6, I would just disable it. Thanks.
@techiejaybhatt3471 5 years ago
@@justmeandopensource Thank you for replying.
@justmeandopensource 5 years ago
No worries. You are welcome. Cheers.
@thunderbirds8633 3 years ago
Is there any way to access these lxc containers from another machine on the network? Say I have installed these lxc containers on an ESXi machine with an Ubuntu OS — how do I access the containers from another machine on the same network, so I can reach the kubernetes services?
@justmeandopensource 3 years ago
Hi, you can configure macvlan in lxd so that all lxd containers get an IP address from the host network, and then they can be accessed from all other machines on the same network. If you don't want to do that, you can set up iptables port forwarding.
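A rough sketch of the port-forwarding option, with example addresses and ports (run on the LXD host; assumes a container at 10.251.251.244 exposing NodePort 30080):

  iptables -t nat -A PREROUTING -p tcp --dport 30080 -j DNAT --to-destination 10.251.251.244:30080
  iptables -A FORWARD -p tcp -d 10.251.251.244 --dport 30080 -j ACCEPT

LXD's proxy device is an alternative that avoids raw iptables:

  lxc config device add kmaster np30080 proxy listen=tcp:0.0.0.0:30080 connect=tcp:127.0.0.1:30080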
@thunderbirds8633 3 years ago
Thank you, sir. My home network is very slow; for testing I am adding the lxc environment in Ubuntu. Is there any way to download the lxc images separately, like zip files, and add them offline to the lxc environment whenever I want?
@justmeandopensource 3 years ago
@@thunderbirds8633 You can download images locally. Check this documentation: linuxcontainers.org/lxd/docs/master/image-handling
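The export/import flow described there is roughly as follows (alias names are examples; lxc image export may produce a split metadata + rootfs pair depending on the image):

  # on a machine with good connectivity
  lxc image copy images:centos/7 local: --alias centos7
  lxc image export centos7 ./centos7-image
  # on the offline machine
  lxc image import ./centos7-image.tar.gz --alias centos7
  lxc launch centos7 kmaster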
@manikandans8808 5 years ago
The script file is not working now. I tried it several times; it gives an error that states "kubelet was not healthy or isn't running". Can you kindly check that out?
@justmeandopensource 5 years ago
Hi Mani, I knew someone would ask this, and you did. I am aware of this issue. This bootstrap script always installed the latest Kubernetes version, and — not sure if you know — Kubernetes v1.15 has been released and doesn't work well in the lxc environment. I have already updated the bootstrap script to install Kubernetes version 1.14.3, so if you check out the bootstrap script from my repo now and try it, it will work. Just git clone it. Thanks.
@manikandans8808 5 years ago
@@justmeandopensource Sure, I'll check it.
@manikandans8808 5 years ago
Works perfectly — thanks Venkat. So with 1.15 it was not able to bootstrap? Have they changed anything in the bootstrap configuration?
@justmeandopensource 5 years ago
Not sure what exactly changed. I tried debugging it for a while but couldn't get to the bottom of it. I will continue digging into this to make it work with 1.15.0. Let me know if you find anything. Thanks
@manikandans8808 5 years ago
@@justmeandopensource Surely. Thanks for it.
@JeronimoAvelarFilho 5 years ago
Hi, I am not able to get kmaster to work. After some debugging I found the error:
kubectl apply -f raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
unable to recognize "raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": Get 10.230.203.84:6443/api?timeout=32s: dial tcp 10.230.203.84:6443: connect: connection refused
Do you have any hint of what is going on? I am running on a Dell G7 with Pop!_OS 19.04.
@justmeandopensource 5 years ago
Hi Jeronimo, thanks for watching this video. In your case the kubeadm init step failed and the cluster wasn't initialized for some reason. Just to confirm, are you using the same version of bootstrap-kube.sh as in my GitHub repo? It installs Kubernetes v1.14.3, as I had problems installing 1.15.0. What I would suggest is to edit bootstrap-kube.sh and remove ">> /root/kubeinit.log 2>&1" from the end of line #60. If you then run the bootstrap script, you can see what the kubeadm init command is doing and where it is failing. Thanks.
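For illustration, the change looks roughly like this (the exact flags in the script are elided):

  # Before: kubeadm init output is hidden in a log file
  kubeadm init ... >> /root/kubeinit.log 2>&1
  # After: output goes straight to the terminal
  kubeadm init ...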
@JeronimoAvelarFilho 5 years ago
@@justmeandopensource Thanks for your attention. I am using the latest version of your code. I will make the edit you suggested and see what happens.
@JeronimoAvelarFilho 5 years ago
It seems the corresponding kernel-devel sources are missing. When I yum install them, the 3.10 version is installed, but I think it's trying to find the 5.0 source code.
@justmeandopensource 5 years ago
Hmmm... When using LXC containers, we have to think about host kernel support as well.
@manojkaila5919 4 years ago
Hi, I followed the same process manually before watching this video and had issues starting Docker. I see you are doing the same in this video, and I have used the same versions of Docker and Kubernetes, but I am unable to start Docker.
base server: CentOS 7
kmaster: CentOS 7
kworker: CentOS 7
issue: unable to start docker - docker.service: Failed with result 'start-limit-hit'
kernel: 3.10.0
Can you suggest a solution for this? I would also like to know the kernel version of your server.
@justmeandopensource 4 years ago
Hi Manoj, thanks for watching. I am using Manjaro (an Arch-based Linux distro) with the 5.3 kernel. Your host kernel has to be at least as new as the one the container's distro expects. A container doesn't have its own kernel; it uses the host kernel. For example, CentOS 7 usually ships with a 3.x kernel, and if your host kernel is older than that, you will have problems.
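A quick way to confirm this (container name taken from this setup):

  uname -r                      # kernel version on the host
  lxc exec kmaster -- uname -r  # prints the same value, since containers share the host kernel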
@manojkaila5919 4 years ago
@@justmeandopensource Thanks for the quick reply. My host kernel is Linux 3.10.0-1062.9.1.el7.x86_64, and the container kernel is the same, as it picks up the host machine's kernel. So if I upgrade the kernel version on my host, won't that be reflected inside the container too, since it is picked up from the host machine?
@justmeandopensource 4 years ago
@@manojkaila5919 You can check this on a separate machine without breaking your current one. Yes, it should be fine if you upgrade to a newer kernel version.
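One common approach on CentOS 7 is the ELRepo mainline kernel; this is only a sketch, so verify the repo URL and package names before relying on it:

  rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
  yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
  yum --enablerepo=elrepo-kernel install -y kernel-ml
  # then update the GRUB default entry and reboot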
@manojkaila5919 4 years ago
@@justmeandopensource Thank you so much, I will check and get back to you. And thanks for the video, it made the work much easier!!!
@justmeandopensource 4 years ago
@@manojkaila5919 Thanks and you are welcome.
@Airbag888 4 years ago
I wish there was a Windows LXC container so I could get rid of kvm :p
@justmeandopensource 4 years ago
Unfortunately you can't have a Windows LXC container, as LXC is Linux-kernel-based virtualization.
@Airbag888 4 years ago
@@justmeandopensource Oh yes, definitely. That's too bad but perfectly understandable. I guess I can save on CPU and memory footprint for the Linux VMs, so that's a net positive.
@justmeandopensource 4 years ago
@@Airbag888 Yeah.
@vijaychebium3216 5 years ago
Hi, thanks for the video. I see a couple of issues after running the bash script on kmaster.

1) [TASK 12] Generate and save cluster join command to /joincluster.sh
   timed out waiting for the condition

2) As you can see below, the flannel and CNI interfaces were not created:

lxc list
+---------+---------+----------------------+------+------------+-----------+
| NAME    | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+---------+---------+----------------------+------+------------+-----------+
| kmaster | RUNNING | 192.168.1.226 (eth0) |      | PERSISTENT | 0         |
|         |         | 172.17.0.1 (docker0) |      |            |           |
+---------+---------+----------------------+------+------------+-----------+

3) [root@kmaster ~]# kubectl version --short
Client Version: v1.14.1
The connection to the server 192.168.1.226:6443 was refused - did you specify the right host or port?

4) journalctl -u kubelet
May 15 15:45:41 kmaster kubelet[8613]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config fla
May 15 15:45:41 kmaster kubelet[8613]: F0515 15:45:41.802103 8613 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to rea
May 15 15:45:41 kmaster systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
May 15 15:45:41 kmaster systemd[1]: Unit kubelet.service entered failed state.
May 15 15:45:41 kmaster systemd[1]: kubelet.service failed.
May 15 15:45:51 kmaster systemd[1]: kubelet.service holdoff time over, scheduling restart.
May 15 15:45:51 kmaster systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 15 15:45:51 kmaster systemd[1]: Started kubelet: The Kubernetes Node Agent.

What could have gone wrong here?
@justmeandopensource 5 years ago
Hi Vijay, thanks for watching this video. Probably one of the earlier steps failed. Please remove all the output redirections in the bootstrap shell script (>/dev/null 2>&1) and then run it. You will then be able to see where it is failing. Make sure to start everything from scratch. Thanks, Venkat
@vijaychebium3216 5 years ago
@@justmeandopensource [TASK 11] Deploy flannel network
unable to recognize "raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": Get 192.168.1.226:6443/api?timeout=32s: dial tcp 192.168.1.226:6443: connect: connection refused
@vijaychebium3216 5 years ago
@@justmeandopensource I was able to bring up the master. Now I have issues with the kworker node.

Log:
May 16 18:50:13 kworker1 kubelet[14319]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See kubernetes.io/docs/tasks/administer-clu
May 16 18:50:13 kworker1 kubelet[14319]: F0516 18:50:13.283214 14319 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml",
May 16 18:50:13 kworker1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a

[root@kworker1 ~]# cat /tmp/joincluster.log
ssh: connect to host kmaster.lxd port 22: Connection timed out
/joincluster.sh: line 1: --ignore-preflight-errors=Swap,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables,SystemVerification: command not found

What could be the issue? Any idea?
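The "command not found" suggests /joincluster.sh lost its first line: the SSH copy from kmaster.lxd timed out, leaving only the flag fragment. A healthy join script looks roughly like this (token and hash are hypothetical placeholders):

  kubeadm join 192.168.1.226:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --ignore-preflight-errors=Swap,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables,SystemVerification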
@vijaychebium3216 5 years ago
Why doesn't the kworker node have a CNI IP?
@JeronimoAvelarFilho 5 years ago
@@vijaychebium3216 Hi, I am facing the same problem and error messages with flannel activation. How did you solve the problem with the kmaster? Thanks in advance.
@seshreddy8616 4 years ago
Hi Venkat, thanks for the great video. I can't see the flannel network up and running; the flannel and CoreDNS pods are not working. Any idea? I'm using VMware bridged networking and am not sure if it's related.

seshi@ubuntu:~/kubernetes/lxd-provisioning$ lxc list
+----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| NAME     | STATE   | IPV4                  | IPV6                                          | TYPE       | SNAPSHOTS |
+----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| kmaster  | RUNNING | 172.17.0.1 (docker0)  | fd42:978e:826d:d4c0:216:3eff:fe5e:8a3d (eth0) | PERSISTENT | 0         |
|          |         | 10.126.212.140 (eth0) |                                               |            |           |
+----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| kworker1 | RUNNING | 172.17.0.1 (docker0)  | fd42:978e:826d:d4c0:216:3eff:fea4:ba46 (eth0) | PERSISTENT | 0         |
|          |         | 10.126.212.37 (eth0)  |                                               |            |           |
+----------+---------+-----------------------+-----------------------------------------------+------------+-----------+

seshi@ubuntu:~/kubernetes/lxd-provisioning$ lxc exec kmaster -- /bin/bash
[root@kmaster ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
kmaster    Ready    master   4m35s   v1.17.1
kworker1   Ready    <none>   66s     v1.17.1

[root@kmaster ~]# kubectl get pods -n kube-system
NAME                              READY   STATUS              RESTARTS   AGE
coredns-6955765f44-6qbdb          0/1     ContainerCreating   0          4m27s
coredns-6955765f44-cm67s          0/1     ContainerCreating   0          4m27s
etcd-kmaster                      1/1     Running             0          4m38s
kube-apiserver-kmaster            1/1     Running             0          4m38s
kube-controller-manager-kmaster   1/1     Running             0          4m38s
kube-flannel-ds-amd64-l7h4p       0/1     CrashLoopBackOff    4          4m26s
kube-flannel-ds-amd64-qhk84       0/1     Error               1          76s
kube-proxy-6db8d                  0/1     CrashLoopBackOff    5          4m27s
kube-proxy-g5cfp                  0/1     Error               3          76s
kube-scheduler-kmaster            1/1     Running             0          4m38s

[root@kmaster ~]# kubectl logs kube-flannel-ds-amd64-l7h4p -n kube-system
I0711 06:38:11.966153 1 main.go:518] Determining IP address of default interface
I0711 06:38:11.978958 1 main.go:531] Using interface with name eth0 and address 10.126.212.140
I0711 06:38:11.978984 1 main.go:548] Defaulting external address to interface address (10.126.212.140)
W0711 06:38:11.978994 1 client_config.go:517] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
E0711 06:38:41.980353 1 main.go:243] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-amd64-l7h4p': Get 10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-amd64-l7h4p: dial tcp 10.96.0.1:443: i/o timeout
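A first diagnostic step here, as a sketch: flannel reaches the API server through the 10.96.0.1 service VIP, which only works once kube-proxy has programmed the iptables rules, and the kube-proxy pods above are crashing too. So start with their logs (pod name taken from the output above):

  kubectl -n kube-system logs kube-proxy-6db8d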
@seshreddy8616 4 years ago
Thanks a lot, Venkat. I have an Ubuntu 18.04 VM installed on a MacBook Pro using Parallels with a bridged network, which gives it an IP in the same range as my Wi-Fi (192.168.1.214). The LXC containers work fine and have their own dedicated network (IP 10.75.203.201). However, I'm trying to get an IP for the containers in the same range as my Wi-Fi. I followed blog.simos.info/how-to-make-your-lxd-container-get-ip-addresses-from-your-lan/ but no IP gets allocated; it is blank. I tried the same process on an Ubuntu desktop (bare metal, NOT a VM) and it worked fine there. The same process is not working on the Mac; it doesn't allocate any IP. I want the LXC containers to have the same IP range as my home Wi-Fi. Any idea?

This is the Ubuntu desktop result; I am looking for the same on my MacBook:
lxc list
+------+---------+----------------------+--------------------------------------------+-----------+-----------+
| net1 | RUNNING | 192.168.1.214 (eth0) | fdaa:bbcc:ddee:0:216:3eff:fe08:bf90 (eth0) | CONTAINER | 0         |
+------+---------+----------------------+--------------------------------------------+-----------+-----------+

Here are the commands to reproduce the problem (they need to be executed in the VM):
lxc profile create macvlan
lxc profile show macvlan
ip route show default 0.0.0.0/0
default via 192.168.1.1 dev enp1s0 proto dhcp metric 100
lxc profile device add macvlan eth0 nic nictype=macvlan parent=enp1s0
lxc launch ubuntu:18.04 net1 --profile default --profile macvlan

MacBook Pro output:
parallels@ubun:~$ lxc list
+------+---------+------+------+------------+-----------+
| NAME | STATE   | IPV4 | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+------+------+------------+-----------+
| net1 | RUNNING |      |      | PERSISTENT | 0         |
+------+---------+------+------+------------+-----------+

Could you please see if you can offer any advice?
@williamblair1123 3 years ago
Why would someone want to do this? Also, LXD and LXC are not the same thing.
@justmeandopensource 3 years ago
Hi William, thanks for watching. This is one of several ways to provision infrastructure for running Kubernetes, and it is perfectly valid. It uses lightweight system containers instead of full-blown virtual machines, which makes it well suited for local clusters where you can't afford more CPU and memory for individual Kubernetes nodes. What makes you ask this question?