[ Kube 1.5 ] Set up highly available Kubernetes cluster step by step | Keepalived & Haproxy

37,878 views

Just me and Opensource

Comments: 169
@kumarcc1 8 months ago
Amazing... I had struggled to build a multi-node K8s cluster, but after watching this content it's much easier to set up. I'm trying it this weekend on my VMware Workstation. Thank you, keep it up, we're learning from your content.
@justmeandopensource 8 months ago
Hi Kumar, thanks for watching.
@neodzen 3 years ago
Why would anyone need all those paid, confusing courses when there is such a wonderful Venkat? The best explanations I have found. Thank you!
@redhatfan2304 a year ago
I agree. Thank you, Venkat.
@aureli4nus 2 years ago
Super tutorial, and the way you update your tutorials is just awesome. You really care about what you teach!!
@justmeandopensource 2 years ago
Hi, thanks for watching.
@lousab6341 2 years ago
In 30 minutes you have explained very clearly every step I needed. Thanks a lot!!
@justmeandopensource 2 years ago
Glad you liked it. Thanks for watching. Cheers.
@lousab6341 2 years ago
@justmeandopensource I'm going to automate all this with an Ansible project. It could be interesting to run the load balancing in Docker instead of installing it directly on the host, and to put everything on EC2 or other VMs.
@soundgt a year ago
Fantastic video! I've been tasked with setting up a k8s cluster on RHEL using a VIP from a network appliance. I didn't find anything on the interwebz as good as this video to explain and simplify the process! Checking out some of your other videos as well. Much appreciated, thanks!
@justmeandopensource a year ago
Glad to hear that. Thanks for watching.
@ffoxxa2047 7 months ago
Clear and straightforward. Thanks a lot.
@justmeandopensource 7 months ago
You're welcome, and thanks for watching.
@samueln5506 a year ago
This is just beyond simplicity! Nice one.
@justmeandopensource a year ago
Hi Samuel, thanks for watching.
@taylormonacelli a year ago
Straight to the point and thorough. Very nice.
@justmeandopensource a year ago
Hi Taylor, many thanks for watching. Cheers.
@abdelazizsharaf7305 6 months ago
Best video for a k8s setup I've ever seen. Thanks!
@justmeandopensource 6 months ago
Thanks for watching. Glad you liked it.
@metalmasterlp 3 years ago
Great help as always! Keep it up, thank you.
@justmeandopensource 3 years ago
Thanks for watching.
@kevinyu9934 3 years ago
Thank you so much for the amazing content! Looking forward to the next one!!!
@justmeandopensource 3 years ago
Hi Chinglong, thanks for watching.
@julien3573 3 years ago
Hi, nice explanation! You said you would do another video explaining how to set this up using static pods on the master nodes. It would also be great to explain how to use kube-vip, which is essentially the same thing but without having to configure keepalived and haproxy separately, plus it has some features worth checking out.
@SinghBalraj102 2 months ago
Hey mate, I really enjoyed your video. Would you be willing to share the tool you used for connecting to servers and managing multiple sessions on a single page?
@alifiroozizamani7782 a year ago
Thanks for this awesome tutorial 🍻
@justmeandopensource a year ago
Thanks for watching.
@Mastermnd1 2 months ago
Great video! After setting up my cluster, when I turn off one of the control plane machines to simulate a problem, HAProxy and keepalived work as expected, but if you check the healthz endpoint you will see one error: [-]etcd failed: reason withheld. I guess I need to watch more videos to find out how to make the etcd part highly available. Another suggestion: instead of blindly turning off the firewall, advice about which ports to actually open would be appreciated.
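On the ports question: rather than disabling the firewall entirely, you could open just the ports the upstream Kubernetes docs list for each node role. A sketch for firewalld on RHEL-family systems (port numbers are from the official docs; your CNI plugin may need additional ones):

```shell
# Control plane nodes
sudo firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server/client API
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
sudo firewall-cmd --permanent --add-port=10257/tcp       # kube-controller-manager
sudo firewall-cmd --permanent --add-port=10259/tcp       # kube-scheduler
sudo firewall-cmd --reload

# Worker nodes
sudo firewall-cmd --permanent --add-port=10250/tcp         # kubelet API
sudo firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort services
sudo firewall-cmd --reload

# Load balancer nodes: the API frontend plus VRRP for keepalived failover
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-protocol=vrrp
sudo firewall-cmd --reload
```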
@VelanKrishnan 9 months ago
Hi Venkat, can you please post a video on installing keepalived and HAProxy on the master nodes themselves for a multi-master cluster?
@gouterelo 3 years ago
Great video Venkat! One question... that --apiserver-advertise-address you configure on all master nodes, is that only needed if you have multiple network interfaces? Or if you have multiple masters, do you have to declare all of them to the API through the proxy? (I have an HA cluster, but it only has one proxy for the masters.)
@justmeandopensource 3 years ago
Hi Gonzalo, thanks for watching. That option is only required if you have multiple network interfaces and you want to use a specific one for your cluster. If you don't specify that option, it will use the first available NIC by default. In my case, the first available NIC is eth0, which I don't want to use. If you just have one NIC on all nodes, you can ignore this option. Cheers.
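As a sketch of how these two flags fit together on the first master (the addresses below are hypothetical, chosen from the 172.16.16.0/24 private range the video's Vagrant setup uses; substitute your own VIP and eth1 address):

```shell
# Point the cluster at the keepalived virtual IP fronting the haproxy pair,
# and advertise the API server on eth1's address rather than the default NIC.
sudo kubeadm init \
  --control-plane-endpoint="172.16.16.100:6443" \
  --apiserver-advertise-address=172.16.16.101 \
  --upload-certs
```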
@codereaper 2 years ago
Hi Venkat, thank you for your tutorials, always a great help! I have a question. I'm using the exact same setup as in your aforementioned video "Set up multi master Kubernetes cluster using Kubeadm" in a production environment on bare metal. Is it possible to expose the services on the pods so they are accessible from outside the local network using that setup? We plan to access services like the Dashboard, MySQL, and Nginx from outside, and since I'm kind of new to Kubernetes I'm having some trouble figuring that out. Thanks in advance, looking forward to the next one!
@thaocrouch 2 years ago
Hi, this video is easy to understand. What's the terminal you are using in the video (the one that shows hints from previous commands in the background)? Thanks!
@justmeandopensource 2 years ago
Hi, thanks for watching. You can find more about my terminal setup in the below video. kzbin.info/www/bejne/hoa6n3aYp56WhJo
@thaocrouch 2 years ago
@justmeandopensource Thanks so much...
@justmeandopensource 2 years ago
@thaocrouch No worries.
@Ajmalkhalil-cx4gf 6 months ago
Thank you!
@justmeandopensource 6 months ago
Hi Ajmal, thanks for watching.
@mohammadtalep9913 a year ago
Great work. Thanks!
@justmeandopensource a year ago
Thanks for watching.
@kiki-vu9if 7 months ago
This was fantastic! What about RKE2, is it the same? Also, do you have a Patreon where I can send you a thank-you $$?
@justmeandopensource 7 months ago
Hi, thanks for watching. RKE is a separate topic; this video is about setting everything up manually on bare-metal Kubernetes. I don't have a Patreon, but if you'd like to contribute, there is a PayPal link on the channel page and in the video description. Thanks again for your interest in this channel.
@kiki-vu9if 7 months ago
@justmeandopensource Ok great! I was asking the same thing about RKE2, setting it up on bare metal... anyway, is there a way to contact you directly?
@chanveasna_noun 3 years ago
Thank you so much for the great tutorial. Can you share which terminal tool you use?
@justmeandopensource 3 years ago
Hi, thanks for watching. kzbin.info/www/bejne/hoa6n3aYp56WhJo
@epithesi_sobrero a year ago
Perfect tutorial! I have a question though: how could you migrate an existing cluster to an HA one?
@craftsmanshopswoodworking5197 3 years ago
Thank you very much, great tutorial. How would you address SSL certificates and access from outside of the local network? Would there be port forwarding to the virtual IP? Thanks again.
@justmeandopensource 3 years ago
Hi, thanks for watching. These clusters that I am running are just for demo purposes and I only access them from my host machine, so I'm really not worried about accessing them from outside. If I wanted access from outside the host machine, I would have set up a bridged network and got a LAN IP for each VM that could be reached from all other machines in my LAN, not just the host machine running the VMs.
@Zeid_Al-Seryani 3 years ago
I have a very good question here. First of all, thank you for your efforts. I have used keepalived before and was going to build the same thing you did here. Regarding keepalived: let's say a load balancer goes down 51 times; will the keepalived priority drop below 0, is that correct? Will keepalived reset the number to 100 on every switch? I'm interested to know what the priority of the load balancers in the keepalived configuration will end up being. Blessings to you and your efforts.
@justmeandopensource 3 years ago
Hi, thanks for watching. Priority is just used to determine which of the competing backup nodes can become the master. The node with higher priority becomes the master in case of an election. I haven't noticed the priority going to 0 despite the health check script failing many times. And priority can be any number. I believe the priority will be reset to what it was in the configuration when the node restarts, but I could be wrong. Priority is only significant when there is more than one backup node waiting to become master. If you have just two nodes, no matter what the priority of the other backup node is, it will become master when the original master node crashes.
@Zeid_Al-Seryani 3 years ago
@justmeandopensource Another question please: what is the difference between --control-plane-endpoint and --apiserver-advertise-address? What is the usage of each one? Thank you.
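The priority and health-check behaviour discussed above comes from a keepalived configuration along these lines (a minimal sketch; the interface name, VIP, password, and check-script path are assumptions matching the video's Vagrant setup):

```conf
# /etc/keepalived/keepalived.conf (on the first load balancer)
vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"  # exits non-zero if the API endpoint is down
  interval 3
  weight -2    # effective priority is reduced by 2 while the check is failing
  fall 10      # consider failed after 10 consecutive misses
  rise 2
}

vrrp_instance VIP_1 {
  state MASTER           # BACKUP on the second load balancer
  interface eth1
  virtual_router_id 1
  priority 101           # lower (e.g. 100) on the backup node
  advert_int 5
  authentication {
    auth_type PASS
    auth_pass mysecret
  }
  virtual_ipaddress {
    172.16.16.100
  }
  track_script {
    check_apiserver
  }
}
```

Note that with a `weight`, the priority is not decremented cumulatively per failure; it is lowered by a fixed amount while the script is in the failed state and restored when it recovers, which is why it never walks down to 0.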
@vknvnn 3 years ago
Thanks for your video, it's very helpful for me. Could you please make a video about KubeMQ inside Kubernetes?
@justmeandopensource 3 years ago
Hi, thanks for watching. I can try.
@asadujjamannur6938 a year ago
Splendid
@justmeandopensource a year ago
Hi Asad, thanks for watching.
@siddhantjain1035 3 years ago
Thank you so much for such great tutorials on k8s. I have one query: can we set up an HA cluster using only keepalived, without HAProxy or any other load balancer? Like, if I set up keepalived on all my master nodes and init the cluster using the virtual IP, will that work?
@cezarnicolescu1326 3 years ago
Hi Venkat, first of all great video! It helped me a ton! I was just wondering about something: I tried a similar setup but with two master nodes instead of 3, and once I brought one of the master nodes down (shutdown on the machine) I was not able to access the K8s API from the second master node. Do you know why that is? With 3 master nodes everything works perfectly. Another question I have is about using the load balancer machines as NFS servers. Would you recommend such a solution or not, and how would you implement NFS storage from a high-availability perspective?
@justmeandopensource 3 years ago
Hi Cezar, thanks for watching. With the setup explained in this video, I haven't tried it with 2 master nodes, as that is not a proper cluster anyway. But I can surely test it. And NFS doesn't fit Kubernetes persistence very well; it's not distributed and fault tolerant by default. You can look at cloud-native storage solutions like OpenEBS, Ceph/Rook, Longhorn, GlusterFS. I have done videos on a few of them. Longhorn cloud native distributed storage kzbin.info/www/bejne/iXWsaoeirpqMetE Glusterfs fundamentals kzbin.info/www/bejne/f3iopYmPnZV2aNE Glusterfs in Kubernetes kzbin.info/www/bejne/bGa7gJ-XerepoNk
@MdRokon-of6pm 2 years ago
Hi Venkat, this video is really helpful. I have a query: will there be any split-brain situation if two nodes or more than 50% of the nodes are down? If so, how can I overcome that situation between hypervisors? Actually, I have only two Proxmox hypervisors.
@justmeandopensource 2 years ago
Hi, thanks for watching. A split-brain case is possible. The approach would be to bring the third node up as soon as possible. Don't leave the cluster with 2 nodes for long.
@petrivanov1598 3 years ago
Venkat, thank you for the great video. But for this video, and all the other HA ones, I think a better solution would be HAProxy for admin access to the master nodes, with a local NGINX installed on each worker. HAProxy is a single point of failure for all traffic anyway (even if we have 2 or 3 of them); if HAProxy is lost, the whole cluster stops working. On the other hand, with a local nginx it doesn't matter how many worker or master nodes we lose (all the haproxies can be gone too); the remaining workers still know how to reach the masters (balanced). HAProxy would then be only for admin access to the masters. Maybe you can later update the Ansible lesson with this configuration (nginx on the workers, haproxy for the admin API).
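The idea suggested here, with each worker running a local nginx that load-balances to every master, might look something like this (a sketch, not from the video; master IPs are assumptions from the 172.16.16.0/24 range, and kubelet on the worker would then point at 127.0.0.1:6443):

```conf
# /etc/nginx/nginx.conf on each worker node, using the stream module
# for plain TCP proxying of the API server port
stream {
  upstream kube_apiserver {
    server 172.16.16.101:6443;
    server 172.16.16.102:6443;
    server 172.16.16.103:6443;
  }
  server {
    listen 127.0.0.1:6443;
    proxy_pass kube_apiserver;
  }
}
```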
@fork04 3 years ago
I wonder if you would like to make a video about installing Percona XtraDB Cluster 8.0?
@yassinehakim4051 2 years ago
Can you please add an example of accessing a pod with NodePort? Regards.
@justmeandopensource 2 years ago
Hi, thanks for watching. NodePort is for a service, not for a pod. You create a service of type NodePort and access whatever application/service is running on the pods behind that service. If you want to access a pod directly, you can do something like kubectl port-forward.
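A minimal sketch of such a service (names and the port number are hypothetical; the service becomes reachable on every node's IP at the allocated node port):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web           # must match the pods' labels
  ports:
    - port: 80         # service port inside the cluster
      targetPort: 80   # container port on the pods
      nodePort: 30080  # optional; must fall in 30000-32767
```

Then `curl http://<any-node-ip>:30080`, or to reach a single pod without any service, `kubectl port-forward pod/<pod-name> 8080:80`.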
@nikhilwankhade3953 a year ago
Hi, can we have a video on app deployment to K8s worker nodes? Thanks.
@incredibleearth5444 2 years ago
Great, but where in the keepalived conf is its peer mentioned? How does keepalived know about its peer without setting its IP address?
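For context (not covered in the video): by default keepalived nodes don't need each other's addresses at all. VRRP advertisements are multicast on the configured interface, and instances sharing the same virtual_router_id discover each other that way. If multicast is blocked on your network, keepalived also supports listing peers explicitly (addresses below are hypothetical):

```conf
vrrp_instance VIP_1 {
  # ...rest of the instance as before...
  unicast_src_ip 172.16.16.51   # this load balancer
  unicast_peer {
    172.16.16.52                # the other load balancer
  }
}
```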
@kranteshshrestha 3 years ago
Hi, thanks for all the tutorials. My question is: how can I configure multi-master on an already set up Kubernetes cluster? Thank you.
@justmeandopensource 3 years ago
Short answer is you can't. You may have noticed that I passed the --control-plane-endpoint option to the kubeadm init command during cluster initialization. You can't add additional master nodes to an existing single-node cluster.
@krishnajyothi9796 2 years ago
Hi Venkat, great presentation. I'm more into OpenShift and trying to understand how vanilla K8s works. In OCP, the 443 frontend is forwarded to the backend nodes where the routers are configured; that way we can manage SSL termination at the LB for applications if needed. However, I don't see any 443/80 port config on the LB/HAProxy in this k8s discussion. How is that managed here? Any insights on this would be helpful. Thanks.
@i7HX a year ago
tnx
@kamoliddinxojiyev3500 2 years ago
Great tutorial. If I add a service to this cluster, how can I connect to that service from outside the cluster? Can I use the virtual IP, or do I need another load balancer to handle frontend requests?
@Muiterz 2 years ago
found the solution :D
@justmeandopensource 2 years ago
kubernetes.io/docs/setup/production-environment/container-runtimes/#container-runtimes
@MrDungvh 3 years ago
Thanks for your tutorials. Would you please cover the scenario with 2 cluster groups, and how to control each one from your host machine? I don't know how to customize the cluster name.
@justmeandopensource 3 years ago
Hi Vuong, do you mean running two k8s clusters on my host machine and accessing them?
@MrDungvh 3 years ago
@justmeandopensource Yes, please do that scenario.
@justmeandopensource 3 years ago
That should be simple. You can point to a different kubeconfig via the KUBECONFIG env variable or by passing --kubeconfig to your kubectl commands. Or you can merge those kubeconfigs into one and switch contexts when working with different clusters.
@MrDungvh 3 years ago
@justmeandopensource Ok. But my problem is: the 2 clusters have the same name, and I don't know how to change the name of a cluster. Could you make a video on how to create 2 clusters with different names?
@testingutopia 3 years ago
@MrDungvh You need to read more about the config file used by kubectl. The command kubectl cluster-info tells you about the current cluster being used in the config context. Keywords you should use to research: kubectl config, kubectl context.
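The KUBECONFIG/context workflow described in this thread can be sketched like this (file paths and context names are hypothetical):

```shell
# Point kubectl at one cluster explicitly
export KUBECONFIG=~/.kube/cluster1.config
kubectl cluster-info

# Or merge several kubeconfigs into a single file
KUBECONFIG=~/.kube/cluster1.config:~/.kube/cluster2.config \
  kubectl config view --flatten > ~/.kube/config

# Then switch between clusters by context
kubectl config get-contexts
kubectl config use-context cluster2-admin@cluster2

# Renaming a context avoids the "both clusters have the same name" clash
kubectl config rename-context kubernetes-admin@kubernetes cluster1
```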
@SuperMati9999 2 years ago
Hi there. Very good video. I have one professional question. I would like to run HA Kubernetes in production. What would you recommend renting: a dedicated server with these VMs installed inside, or a number of virtual private servers in different parts of the world? I can only use bare metal; I can't use managed cloud Kubernetes.
@KingsDev-xe2wo 5 months ago
Hey, thanks for the amazing tutorial. After setting up this HA cluster, I tried starting only one load balancer and one master, and kubectl failed. It works fine if I keep two masters running. Can you help me check? Thanks.
@hprompt166 a year ago
Hi there, great video. Do you have to change the Ethernet interface to add the VIP? If so, can you point me to a link for doing that on Ubuntu 22.04.1?
@justmeandopensource a year ago
Hi, thanks for watching. The infrastructure for this video was provisioned using Vagrant and VirtualBox. If you look in my Vagrantfile, I have added a private network in the range 172.16.16.0/24, and these addresses will be on the eth1 interface on all my VMs. And at 8:56, you can see me configuring keepalived in /etc/keepalived/keepalived.conf, where I specify which interface to use; I have specified eth1 there. Hope it helps. Cheers.
@hprompt166 a year ago
@justmeandopensource I first added the VIP/24 to each interface, then when I shut everything down and brought up the load balancers one at a time for testing, I saw that it was working with the VIP/32, and I removed the VIP/24 from the interface. Thanks for your help.
@justmeandopensource a year ago
@hprompt166 You are welcome.
@batbeo7423 2 months ago
Thank you for your amazing video. I have an issue that I spent my whole day on and still can not debug; I hope you can help me out. The load balancers work just fine, but the masters don't. The issue is: when I shut down any one of my master nodes, the cluster seems to go down too. I can not use kubectl; if I try, it throws an error: "nodes is forbidden: User "kubernetes-admin" cannot list resource "nodes" in API group "" at the cluster scope". Sometimes it throws another error that says "etcdserver: request timed out". Sorry for my terrible English. Hope you have a great day.
@benkaplan8596 2 years ago
Can you make a tutorial on making ISP proxies on Google Cloud, and creating a mass amount, not just 8?
@surendarmurugesan9797 3 years ago
Hi Venkat, can you please start videos on OpenShift?
@justmeandopensource 3 years ago
Hi, that's in my list, but I am not sure when I'll get to it. Will try my best. Cheers.
@amitchettri_ac 2 years ago
Hi Venkat, can you post the link for the video where you set up keepalived + haproxy as a pod? Thanks in advance.
@mikeschem6441 2 years ago
Did he ever post this?
@rachneetsachdeva4 3 years ago
Hi Venkat. Thanks a lot for your efforts. Is it possible to use Traefik instead of HAProxy in this configuration?
@justmeandopensource 3 years ago
Hi Rachneet, thanks for watching. HAProxy is used externally to load balance the traffic to the control planes. Traefik is used within the cluster to load balance and route traffic between internal services.
@rachneetsachdeva4 3 years ago
@justmeandopensource Oh I see. But can't we use a Traefik ingress controller?
@justmeandopensource 3 years ago
@rachneetsachdeva4 There is nothing stopping you from using Traefik as the ingress controller in your k8s cluster.
@rachneetsachdeva4 3 years ago
@justmeandopensource Thanks for clarifying.
@justmeandopensource 3 years ago
@rachneetsachdeva4 No worries.
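The external HAProxy role discussed here is plain TCP load balancing of port 6443 to the masters, nothing HTTP-aware. A minimal haproxy.cfg sketch (master names and IPs are assumptions from the video's 172.16.16.0/24 range):

```conf
frontend kubernetes-frontend
  bind *:6443
  mode tcp
  option tcplog
  default_backend kubernetes-backend

backend kubernetes-backend
  mode tcp
  option tcp-check
  balance roundrobin
  server kmaster1 172.16.16.101:6443 check fall 3 rise 2
  server kmaster2 172.16.16.102:6443 check fall 3 rise 2
  server kmaster3 172.16.16.103:6443 check fall 3 rise 2
```

Because mode is tcp, TLS passes through untouched and the API servers keep doing their own certificate handling; HAProxy only health-checks and forwards connections.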
@karemsalhi6954 2 years ago
Hi, thanks for this tutorial. How would you do it with keepalived and haproxy installed on the masters themselves, and how do you test the HA?
@justmeandopensource 2 years ago
Hi, thanks for watching. It's best to have keepalived/haproxy on a separate set of machines for a true HA scenario, but you can also have them on the master nodes. Same process. Testing HA is done just by shutting down master nodes one by one and seeing if you can still access the cluster.
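The failover test described above can be scripted roughly like this (VM names and the kubeconfig path are assumptions based on the video's Vagrant setup; adjust to your environment):

```shell
# From the host, watch the cluster through the virtual IP
watch -n 2 kubectl get nodes

# In another terminal, take down one master at a time and confirm
# kubectl keeps responding via the remaining control planes
vagrant halt kmaster1
kubectl get nodes

# Likewise, halt the active load balancer and watch keepalived
# move the VIP to the standby before kubectl recovers
vagrant halt loadbalancer1
kubectl get nodes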
@spatil6884 3 years ago
Hey, can you please make a video on how to install ArgoCD on EKS from scratch? I couldn't find any video on YouTube.
@justmeandopensource 3 years ago
I can try.
@spatil6884 3 years ago
@justmeandopensource We are eagerly waiting.
@suraj2533 2 years ago
Hi Venkat, when setting up the cluster with a static-pod load balancer, will we have to use a different IP in the configuration file and add it to the manifest directory?
@yusefsherief 8 months ago
Can we use a hardware load balancer instead of HAProxy?
@justmeandopensource 7 months ago
Hi, thanks for watching. A hardware load balancer is overkill in this setup. Why would you want to go the hard way?
@colossuselka-zc7hb 11 months ago
Hi, I love your channel. I followed your tutorial and ended up with the same internal IPs, and I keep getting "error: unable to upgrade connection: pod does not exist" while trying to access or port-forward nginx. Could you help me with that?
@Muiterz 2 years ago
Excellent!
@justmeandopensource 2 years ago
Hi Tim, thanks for watching.
@Muiterz 2 years ago
@justmeandopensource I'll implement it tomorrow, and on Monday when successful at the customer :D. Many thanks.
@Muiterz 2 years ago
@justmeandopensource Which SSH client do you use? (Multiple windows? Looking for a Windows application.) :D
@justmeandopensource 2 years ago
@Muiterz It's just the standard ssh client. What you are asking about is my terminal emulator. Again, you can use any terminal emulator. On top of it I use tmux, which allows the window to be split into multiple panes.
@Muiterz 2 years ago
@justmeandopensource Thank you!
@ahmedkaroui5412 2 years ago
Hello, I have a problem joining the worker node. All the other configuration worked, but when I use the kubeadm join command on kworker1, it seems unable to connect properly.
@sergimeana1062 2 years ago
How do you secure the cluster for production if you have a public IP on the keepalived VIP, apart from firewall rules?
@kebastalinbritto1845 2 years ago
Could you please share a video on Kubernetes volume snapshots?
@faridakbarov4532 2 years ago
Hi Venkat, where are you bro? We miss your videos)) Is everything ok?
@justmeandopensource 2 years ago
Hi Farid, thanks for checking. I am guilty of not posting videos regularly these days. I will try to get back to the routine of weekly videos, hopefully from the coming week. I've been so busy the last few weeks. Cheers.
@UdaySingh-im4hd 2 years ago
Hi, in my case (RHEL 8), both of my LBs have the VIP. Is that a problem?
@prabhujeeva2228 2 years ago
Hi, what about the high-availability plan for etcd?
@arnabdas5166 2 years ago
Hi Venkat, thanks for the tutorial, it really helps a lot. But I am facing one issue: when I do kubeadm init with the VIP address, it fails, even though I can see the VIP on my HAProxy's base eth port and it is working perfectly fine. If I stop one of the HAProxy services, the VIP switches to the other server. Can you suggest anything?
@justmeandopensource 2 years ago
Hi Arnab, thanks for watching. How is it failing? The kubeadm init command should give some meaningful errors.
@arnabdas5166
I am using AWS EC2 instances. The VIP is unreachable from my Kubernetes server although they belong to the same subnet.
@justmeandopensource 2 years ago
@arnabdas5166 I haven't tried this in the cloud, where you can't control the IP address assignment.
@mukeshvooka8725 a year ago
@Arnab Das, have you finished this setup using EC2 instances? Did it work? Can I follow this approach for EC2 instances? Please reply.
@elabeddhahbi3301 a year ago
Can you do the same with Cilium? I tried, but the cluster always crashes for some reason.
@phalla6646 3 years ago
Very well explained, I love your videos. Anyway, I have a couple of questions: do you have a basic Vagrant tutorial? How do you set up your local environment to run this? I want to practice. Is your PC's OS Windows or Linux? Thanks.
@justmeandopensource 3 years ago
Hi Phalla, thanks for your interest in this video. I don't have any basic Vagrant tutorial; you can just learn from examples. I explained in this video how I set up my environment. All I do is run "vagrant up" in the directory where I have the Vagrantfile. All you need is Vagrant and VirtualBox. It can be Windows, Mac or Linux; the same Vagrant setup works everywhere. I use Linux on my laptop. Cheers.
@phalla6646 3 years ago
Thanks mate!
@justmeandopensource 3 years ago
@phalla6646 No worries.
@allisondealmeida 2 years ago
Hello, I'm creating a Kubernetes cluster, but haproxy keeps losing connections to the control planes; the haproxy pods keep going into CrashLoopBackOff. Could you help me fix this? I'm using only one server for load balancing.
@chandrasekharreddy8177 3 years ago
Hi, how do I resolve a readiness and liveness probe issue in Kubernetes? Could you please help me out with this?
@justmeandopensource 3 years ago
Hi, please give me more context about your issue/question. From this one line, I don't know what issue you are having with readiness and liveness probes. I have done a video on this topic; see if it helps. kzbin.info/www/bejne/aYWtg56BjNqJpa8 Cheers.
@chandrasekharreddy8177 3 years ago
@justmeandopensource Ok, thanks.
@chandrasekharreddy8177 3 years ago
@justmeandopensource We have faced one issue in ArgoCD; the error is "comparison error. rpc error: code = DeadlineExceeded desc = context deadline exceeded". Could you please help me out with this?
@justmeandopensource 3 years ago
No worries.
@justmeandopensource 3 years ago
@chandrasekharreddy8177 If you could ask the questions in the related video's comment section, it would be helpful for others as well.
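For anyone landing on this thread, a minimal sketch of readiness and liveness probes (the image, paths, and timings are hypothetical; tune them for your app):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
      readinessProbe:          # gates traffic from Services until it passes
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      livenessProbe:           # restarts the container when it keeps failing
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 15
        periodSeconds: 20
```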
@venkatkishore1277 2 years ago
Your explanation is awesome, it helped me a lot... but for me the virtual IP is not being accepted during kubeadm init --control-plane-endpoint=<virtual-ip>:6443. Can you please help me understand the virtual IP concept and how to pick one? I am actually using EC2 instances here.
@justmeandopensource 2 years ago
Hi, thanks for watching. I haven't attempted this in the cloud, where the handing out of IP addresses is not under your control. This was done on my local machine.
@mukeshvooka8725 a year ago
@Venkat Kishore, have you completed this setup using EC2 instances? Did it work fine on EC2 instances? Can I follow this approach on EC2 instances? Please reply, it will save my time.
@Rectapanca 2 years ago
How do I set up a domain + SSL for the keepalived virtual IP?
@DenisZaletov 3 years ago
Are you using the xenial kube repo for Ubuntu 20.04?
@justmeandopensource 3 years ago
Yeah. I know that is odd, but that's what's suggested in the official docs. kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
@mukeshvooka8725 a year ago
Hi Venkat, can I use the same setup in AWS using EC2 instances?
@mukeshvooka8725 a year ago
Please reply
@justmeandopensource a year ago
Hi Mukesh, thanks for watching. I haven't tried this setup in a cloud environment, but I think you will have problems with the virtual IP, as you don't have control over IP address assignment in the cloud.
@mukeshvooka8725 a year ago
@justmeandopensource Hi Venkat, I need to implement this in a public cloud like AWS or Azure. Please let me know how I can manage the IP address assignment. Please help.
@shivbratacharaya4199 3 years ago
Hi Venkat, I faced an issue while adding the 2nd master node: etcd and kube-apiserver are in CrashLoopBackOff. Can you please guide me?
@justmeandopensource 3 years ago
Is your first master node, where you ran kubeadm init, all fine? You can always do kubeadm reset followed by kubeadm join on the other nodes.
@shivbratacharaya4199 3 years ago
@justmeandopensource Yes Venkat, the 1st master is fine; all pods on it are running fine.
@shivbratacharaya4199 3 years ago
@justmeandopensource Even now the cluster shows both nodes in Ready state, but the etcd and api-server pods for the 2nd node are in CrashLoopBackOff.
@justmeandopensource 3 years ago
@shivbratacharaya4199 In that case, do a kubeadm reset on the failed master, then restart or recreate that VM and try again. If you used the Vagrant provisioning, all master VMs should be identical.
@shivbratacharaya4199 3 years ago
@justmeandopensource They are identical; I deployed them using Vagrant only. I've done the deployment multiple times but I'm not getting any clue now.
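The reset-and-rejoin procedure suggested above looks roughly like this (the endpoint is assumed to be the VIP from the video's setup; the token, hash, and certificate key are placeholders printed by your own cluster):

```shell
# On the failed master: wipe the partial control-plane state
sudo kubeadm reset -f

# On the healthy first master: mint a fresh join command and cert key
sudo kubeadm token create --print-join-command
sudo kubeadm init phase upload-certs --upload-certs   # prints <cert-key>

# Back on the failed master: rejoin as a control-plane node
sudo kubeadm join 172.16.16.100:6443 \
  --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <cert-key>
```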
@muthuaravindthirumalaikuma5716 a year ago
How does the multi-master setup behave with etcd?
@dineshbabushankar848 a year ago
Did you get the answer for that? Please let us know.
@weilunyi a year ago
Note: keepalived explained at 09:00
@deshipino 3 years ago
It's still a single point of failure. You should have at least one more worker node there.
@justmeandopensource 3 years ago
Good point. Yes, we definitely need more worker nodes. I was focusing on control-plane HA in this demo, as I am limited in the number of VMs I can run on my laptop.
@deshipino 3 years ago
@justmeandopensource Makes sense! I'm wondering if you could make a video covering persistent volume availability across the nodes. That would be great!
@sahaniarunitm 3 years ago
Where have you been, brother?
@limkeke1639 4 months ago
Hi.
@RameshKumar-rt8xb a year ago
Even this is not highly available... just imagine AWS goes down. How could we set up a cluster with multiple nodes from different networks, like some nodes on Azure, Oracle, and GCP?