Amazing... I struggled to build a multi-node K8s cluster before, but after watching this content it's even easier to set up. I'm trying this over the weekend with my VMware Workstation. Thank you, keep it up; we're learning from your content.
@justmeandopensource · 11 months ago
Hi Kumar, thanks for watching.
@neodzen · 3 years ago
Why would anyone need all those confusing paid courses when there is such a wonderful Venkat? The best explanations I have found. Thank you!
@redhatfan2304 · 1 year ago
I agree. Thank you, Venkat.
@aureli4nus · 3 years ago
Super tutorial, and the way you keep your tutorials updated is just awesome; you really care about what you teach!!
@justmeandopensource · 3 years ago
Hi, thanks for watching.
@lousab6341 · 2 years ago
In 30 minutes you have explained very clearly every step I needed. Thanks a lot!!
@justmeandopensource · 2 years ago
Glad you liked it. Thanks for watching. Cheers.
@lousab6341 · 2 years ago
@@justmeandopensource I'm going to automate all this with an Ansible project. It could also be interesting to run the load balancing in Docker instead of installing it directly on the host, and to put everything on EC2 or other VMs.
@soundgt · 1 year ago
Fantastic video! I've been tasked with setting up a K8s cluster on RHEL using a vIP from a network appliance. I didn't find anything on the interwebz as good as this video to explain/simplify the process! Checking out some of your other videos as well! Much appreciated! Thanks!
@justmeandopensource · 1 year ago
Glad to hear that. Thanks for watching.
@samueln5506 · 1 year ago
This is just beyond simplicity! Nice one.
@justmeandopensource · 1 year ago
Hi Samuel, thanks for watching.
@taylormonacelli · 2 years ago
Straight to the point and thorough. Very nice.
@justmeandopensource · 2 years ago
Hi Taylor, many thanks for watching. Cheers.
@abdelazizsharaf7305 · 9 months ago
Best video for a k8s setup I've ever seen. Thanks!
@justmeandopensource · 9 months ago
Thanks for watching. Glad you liked it.
@ffoxxa2047 · 11 months ago
Clear and straightforward. Thanks a lot.
@justmeandopensource · 11 months ago
You're welcome, and thanks for watching.
@metalmasterlp · 3 years ago
Great help as always! Keep it up! Thank you.
@justmeandopensource · 3 years ago
Thanks for watching.
@kevinyu9934 · 3 years ago
Thank you so much for the amazing content! Look forward to the next one!!!
@justmeandopensource · 3 years ago
Hi Chinglong, thanks for watching.
@alifiroozizamani7782 · 1 year ago
Thanks for this awesome tutorial🍻
@justmeandopensource · 1 year ago
Thanks for watching.
@mohammadtalep9913 · 1 year ago
Great work. Thanks!
@justmeandopensource · 1 year ago
Thanks for watching.
@julien3573 · 3 years ago
Hi, nice explanation! You said you would do another video explaining how to set this up using static pods on the master nodes. Maybe it would be great to also explain how to use kube-vip, which is essentially the same thing but without having to configure keepalived and haproxy separately, plus it has some features worth checking out.
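(For anyone wanting to try that, here is a minimal kube-vip sketch; the interface and VIP below are assumptions matching this video's Vagrant network, and the version tag is illustrative, so check the kube-vip docs for current flags.)

export VIP=172.16.16.100      # assumed VIP from this video's setup
export INTERFACE=eth1         # assumed cluster-facing NIC
export KVVERSION=v0.6.3       # assumption: pick whatever release is current

# Generate a static pod manifest on each master; kubelet picks it up
# automatically, so no separate load balancer machines are needed.
docker run --rm --net=host ghcr.io/kube-vip/kube-vip:$KVVERSION \
  manifest pod --interface $INTERFACE --address $VIP \
  --controlplane --arp --leaderElection | sudo tee /etc/kubernetes/manifests/kube-vip.yaml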
@Ajmalkhalil-cx4gf · 10 months ago
Thank you!
@justmeandopensource · 10 months ago
Hi Ajmal, thanks for watching.
@karemsalhi6954 · 3 years ago
Hi, thanks for this tutorial. How would I do this with keepalived/haproxy installed on the masters themselves, and how do I test the HA?
@justmeandopensource · 3 years ago
Hi, thanks for watching. It's best to have keepalived/haproxy on a separate set of machines for a true HA scenario, but you can also run them on the master nodes; the process is the same. Testing HA is done by shutting down the master nodes one by one and checking whether you can still access the cluster.
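For reference, a rough test sequence for this video's Vagrant setup (the VM name is the one used in the video; adjust to yours):

# Confirm the cluster answers through the VIP first
kubectl get nodes

# Take one master down and make sure the API is still reachable
vagrant halt kmaster1
kubectl get nodes
kubectl get pods -n kube-system -o wide

# Bring it back and repeat with the next master
vagrant up kmaster1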
@yassinehakim4051 · 3 years ago
Can you please add an example of accessing a pod with NodePort? Regards.
@justmeandopensource · 3 years ago
Hi, thanks for watching. NodePort is for a service, not for a pod. You create a service of type NodePort and access whatever application/service is running on the pods behind that service. If you want to access a pod directly, you can do something like kubectl port-forward.
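A minimal sketch of both approaches (the deployment name and NodePort value are illustrative):

# Service of type NodePort in front of the pods
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=NodePort --port=80
kubectl get svc nginx                # note the allocated port, e.g. 80:31080/TCP
curl http://<any-node-ip>:31080      # reachable on every node's IP

# Direct access to a single pod, no service involved
kubectl port-forward deploy/nginx 8080:80
curl http://localhost:8080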
@asadujjamannur6938 · 1 year ago
Splendid
@justmeandopensource · 1 year ago
Hi Asad, thanks for watching.
@VelanKrishnan · 1 year ago
Hi Venkat, can you please post a video on installing keepalived and HAProxy on the master nodes themselves for a multi-master cluster?
@thaocrouch · 2 years ago
Hi, this video is clear and easy to understand. What terminal are you using in the video (the one that shows hints from previous commands in the background)? Thanks...!
@justmeandopensource · 2 years ago
Hi, thanks for watching. You can find more about my terminal setup in the video below: kzbin.info/www/bejne/hoa6n3aYp56WhJo
@thaocrouch · 2 years ago
@@justmeandopensource Thanks so much...
@justmeandopensource · 2 years ago
@@thaocrouch No worries.
@Zeid_Al-Seryani · 3 years ago
I have a good question here. First of all, thank you for your efforts. I have used keepalived before and was going to build the same thing you did here, but regarding keepalived: say one of the load balancers goes down 51 times; will the keepalived priority drop below 0? Does keepalived reset the number to 100 on every switchover? I'm interested to know what the priority of the load balancers in the keepalived configuration will end up being. Blessings to you and your efforts.
@justmeandopensource · 3 years ago
Hi, thanks for watching. Priority is just used to determine which of the competing backup nodes becomes the master: the node with the higher priority wins the election. I haven't noticed the priority going to 0 even when the health check script failed many times, and the priority can be any number. I believe the priority is reset to what it was in the configuration when the node restarts, but I could be wrong. Priority is only significant when more than one backup node is waiting to become master. If you have just two nodes, then no matter what the priority of the backup node is, it will become master when the original master crashes.
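For context, a minimal keepalived.conf sketch along the lines of the one in this video (IPs, the password, and the numbers are illustrative). With a track_script, a negative weight is subtracted from the priority only while the check is in a failed state; it is not deducted again on every failure, which is why the priority never drifts below zero:

vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    fall 10
    rise 2
    weight -2                 # priority becomes 99 while the check fails, 101 when it recovers
}

vrrp_instance VI_1 {
    state MASTER              # BACKUP on the other load balancer
    interface eth1
    virtual_router_id 151
    priority 101              # e.g. 100 on the backup node
    authentication {
        auth_type PASS
        auth_pass P@ssw0rd    # assumption: any shared secret
    }
    virtual_ipaddress {
        172.16.16.100
    }
    track_script {
        check_apiserver
    }
}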
@Zeid_Al-Seryani · 3 years ago
@@justmeandopensource Another question, please: what is the difference between --control-plane-endpoint and --apiserver-advertise-address? What is the usage of each one? Thank you.
@craftsmanshopswoodworking5197 · 3 years ago
Thank you very much, great tutorial. How would you address SSL certificates and access from outside the local network? Would there be port forwarding to the virtual IP? Thanks again.
@justmeandopensource · 3 years ago
Hi, thanks for watching. The clusters I run are just for demo purposes, and I only access them from my host machine, so I'm really not worried about outside access. If I wanted access from outside the host machine, I would set up a bridged network so each VM gets a LAN IP that is reachable from all other machines on my LAN, not just the host machine running the VMs.
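A hypothetical Vagrantfile fragment for that, assuming VirtualBox and a host NIC named eno1:

config.vm.define "kmaster1" do |node|
  # public_network bridges the VM onto the LAN so it gets an address
  # reachable from other machines, not just the VirtualBox host
  node.vm.network "public_network", bridge: "eno1"   # assumption: your host NIC name
end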
@incredibleearth5444 · 2 years ago
Great, but where in the keepalived conf is its peer mentioned? How does keepalived know about its peer without setting the peer's IP address?
@amitchettri_ac · 2 years ago
Hi Venkat, can you post the link to the video where you set up keepalived + haproxy as a pod? Thanks in advance.
@mikeschem6441 · 2 years ago
Did he ever post this?
@kiki-vu9if · 11 months ago
This was fantastic! What about RKE2? Is it the same? Also, do you have a Patreon where I can send you a thank-you $$?
@justmeandopensource · 11 months ago
Hi, thanks for watching. RKE is a separate topic; this video is about setting everything up manually on bare-metal Kubernetes. I don't have a Patreon, but if you would like to contribute, there is a PayPal link on the channel page and in the video description. Thanks again for your interest in this channel.
@kiki-vu9if · 11 months ago
@@justmeandopensource Ok, great! I was asking about the same thing with RKE2, setting it up on bare metal... anyway, is there a way to contact you directly?
@kamoliddinxojiyev3500 · 2 years ago
Great tutorial. If I add a service to this cluster, how can I reach that service from outside the cluster? Can I use the virtual IP, or do I need another load balancer to handle frontend requests?
@kranteshshrestha · 3 years ago
Hi, thanks for all the tutorials. My question is: how can I configure multi-master on an already set up Kubernetes cluster? Thank you.
@justmeandopensource · 3 years ago
The short answer is you can't. You may have noticed that I passed the --control-plane-endpoint option to the kubeadm init command during cluster initialization. You can't add additional master nodes to an existing single-node cluster.
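For reference, a sketch of that flow (the VIP matches this video's setup; the token, hash, and key below are placeholders that kubeadm itself prints):

# First master only: make the load-balanced VIP the stable API endpoint
sudo kubeadm init \
  --control-plane-endpoint="172.16.16.100:6443" \
  --upload-certs

# kubeadm prints two join commands; additional masters use the one with --control-plane
sudo kubeadm join 172.16.16.100:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <key>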
@suraj2533 · 3 years ago
Hi Venkat, when setting up the cluster with a static-pod load balancer, will we have to use a different IP in the configuration file and add it to the manifests directory?
@chandrasekharreddy8177 · 3 years ago
Hi, how do I resolve a readiness and liveness probe issue in Kubernetes? Could you please help me out with this?
@justmeandopensource · 3 years ago
Hi, please give me more context about your issue/question. From this one line, I don't know what issue you are having with readiness and liveness probes. I have done a video on this topic; see if it helps: kzbin.info/www/bejne/aYWtg56BjNqJpa8 Cheers.
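For anyone landing here, a minimal sketch of the two probe types on a pod (paths, ports, and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx
    readinessProbe:          # gates traffic until the app can serve
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:           # restarts the container when it starts failing
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10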
@chandrasekharreddy8177 · 3 years ago
@@justmeandopensource Ok, thanks.
@chandrasekharreddy8177 · 3 years ago
@@justmeandopensource We have faced an issue in Argo CD; the error is "comparison error: rpc error: code = DeadlineExceeded desc = context deadline exceeded". Could you please help me out with this?
@justmeandopensource · 3 years ago
No worries.
@justmeandopensource · 3 years ago
@@chandrasekharreddy8177 If you could ask questions in the related video's comment section, it would be helpful for others as well.
@gouterelo · 3 years ago
Great video, Venkat! One question: the --apiserver-advertise-address you configure on all master nodes, is that only needed if you have multiple network interfaces? Or, if you have multiple masters, do you have to declare all of them to the API through the proxy? (I have an HA cluster, but it only has one proxy for the masters.)
@justmeandopensource · 3 years ago
Hi Gonzalo, thanks for watching. That option is only required if you have multiple network interfaces and want to use a specific one for your cluster. If you don't specify it, the first available NIC is used by default. In my case, the first available NIC is eth0, which I don't want to use. If you have just one NIC on all nodes, you can ignore this option. Cheers.
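A sketch of what that looks like on a two-NIC Vagrant box (the addresses are the assumed ones from this setup):

ip -brief addr show    # e.g. eth0 10.0.2.15 (NAT), eth1 172.16.16.101 (cluster network)

# Pin the API server to the cluster-facing NIC instead of the default first one
sudo kubeadm init --apiserver-advertise-address=172.16.16.101 ...
# Masters joining later can pass the same flag alongside --control-plane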
@hprompt166 · 2 years ago
Hi there, great video. Do you have to change the Ethernet interface to add the VIP? If so, can you point me to a link for doing that on Ubuntu 22.04.1?
@justmeandopensource · 2 years ago
Hi, thanks for watching. The infrastructure for this video was provisioned using Vagrant and VirtualBox. If you look at my Vagrantfile, I added a private network in the range 172.16.16.0/24, and those addresses are on the eth1 interface on all my VMs. At 8:56 you can see me configuring keepalived in /etc/keepalived/keepalived.conf, where I specify which interface to use; I specified eth1 there. Hope it helps. Cheers.
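The relevant Vagrantfile fragment looks roughly like this (the box name and exact IP are assumptions):

config.vm.define "loadbalancer1" do |node|
  node.vm.box = "generic/ubuntu2004"            # assumption: the box used may differ
  # VirtualBox puts private_network addresses on eth1, which is why
  # keepalived.conf says "interface eth1"
  node.vm.network "private_network", ip: "172.16.16.51"
end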
@hprompt166 · 2 years ago
@@justmeandopensource I first added the VIP/24 to each interface; then, when I shut everything down and brought up the load balancers one at a time for testing, I saw that it was working with the VIP/32, so I removed the VIP/24 from the interface. Thanks for your help.
@justmeandopensource · 2 years ago
@@hprompt166 You are welcome.
@krishnajyothi9796 · 3 years ago
Hi Venkat, great presentation. I'm more into OpenShift and trying to understand how vanilla K8s works. In OCP, frontend 443 is forwarded to the backend nodes where routers are configured; that way we manage SSL termination at the LB for applications if needed. However, I don't see any 443/80 port config on the LB/HAProxy in this k8s discussion. How is that managed here? Any insight on this would be helpful. Thanks.
@sergimeana1062 · 3 years ago
How do you secure the cluster for production if the keepalived VIP is a public IP, apart from firewall rules?
@SinghBalraj102 · 5 months ago
Hey mate, I really enjoyed your video. Would you be willing to share the tool you used for connecting to servers and managing multiple sessions on a single page?
@ahmedkaroui5412 · 2 years ago
Hello, I have a problem joining the worker node. All the other configuration worked, but when I use the kubeadm join command on kworker1, it doesn't seem able to connect properly.
@nikhilwankhade3953 · 1 year ago
Hi, can we have a video on deploying an app to a K8s worker node? Thanks.
@Mastermnd1 · 6 months ago
Great video! After setting up my cluster, when I turn off one of the control-plane machines to simulate a problem, HAProxy and keepalived work as expected, but if you check the healthz endpoint you will see one error: [-]etcd failed: reason withheld. I guess I need to watch more videos to find out how to make the etcd part HA. Another suggestion: instead of blindly turning off the firewall, advice would be appreciated about which ports to actually open.
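On the ports point, here is a sketch based on the upstream "Ports and Protocols" kubeadm docs (firewalld shown; use the ufw equivalents on Ubuntu, and add whatever your CNI plugin needs):

# Control-plane nodes
sudo firewall-cmd --permanent --add-port=6443/tcp          # kube-apiserver
sudo firewall-cmd --permanent --add-port=2379-2380/tcp     # etcd client/peer
sudo firewall-cmd --permanent --add-port=10250/tcp         # kubelet API
sudo firewall-cmd --permanent --add-port=10257/tcp         # kube-controller-manager
sudo firewall-cmd --permanent --add-port=10259/tcp         # kube-scheduler
sudo firewall-cmd --reload

# Worker nodes
sudo firewall-cmd --permanent --add-port=10250/tcp         # kubelet API
sudo firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort services
sudo firewall-cmd --reload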
@angelosnm · 1 year ago
Perfect tutorial! I have a question though: how could you migrate an existing cluster to an HA one?
@MdRokon-of6pm · 3 years ago
Hi Venkat, this video is really helpful. I have a query: can a split-brain situation arise here if two nodes, or more than 50% of the nodes, are down? If so, how can I avoid that situation across hypervisors? I actually have only two Proxmox hypervisors.
@justmeandopensource · 3 years ago
Hi, thanks for watching. A split-brain case is possible. The approach would be to bring the third node up as soon as possible; don't leave the cluster with two nodes for long.
@Muiterz · 3 years ago
Excellent!
@justmeandopensource · 3 years ago
Hi Tim, thanks for watching.
@Muiterz · 3 years ago
@@justmeandopensource I'll implement it tomorrow, and on Monday at the customer when successful :D Many thanks.
@Muiterz · 3 years ago
@@justmeandopensource Which SSH client do you use? (Multiple windows? Looking for a Windows application.) :D
@justmeandopensource · 3 years ago
@@Muiterz It's just the standard ssh client. What you're asking about is my terminal emulator; you can use any terminal emulator. On top of it I use tmux, which allows the window to be split into multiple panes.
@Muiterz · 3 years ago
@@justmeandopensource Thank you!
@codereaper · 3 years ago
Hi Venkat, thank you for your tutorials, always a great help! I have a question: I'm using the exact same setup from your aforementioned video "Set up multi master Kubernetes cluster using Kubeadm" in a production environment on bare metal. Is it possible to expose the services on the pods so they are accessible from outside the local network with that setup? We plan to access services like the Dashboard, MySQL, and Nginx from outside, and since I'm kind of new to Kubernetes, I'm having some trouble figuring that out. Thanks in advance, looking forward to the next one!
@vknvnn · 3 years ago
Thanks for your video, it's very helpful for me. Could you please make a video about KubeMQ inside Kubernetes?
@justmeandopensource · 3 years ago
Hi, thanks for watching. I can try.
@elabeddhahbi3301 · 1 year ago
Can you do the same with Cilium? I tried, but the cluster always crashes for some reason.
@KingsDev-xe2wo · 9 months ago
Hey, thanks for the amazing tutorial. After setting up this HA cluster, I tried to start only one load balancer and one master, and kubectl failed. It works fine if I keep two masters running. Can you help me check? Thanks.
@kebastalinbritto1845 · 3 years ago
Could you please share a video on Kubernetes volume snapshots?
@trolierxdgaming168 · 2 months ago
Can I configure HAProxy and keepalived directly on the master-node VMs to achieve a working high-availability setup, without needing separate load balancer VMs (like loadbalancer1 and loadbalancer2)?
@rachneetsachdeva4 · 3 years ago
Hi Venkat, thanks a lot for your efforts. Is it possible to use Traefik instead of HAProxy in this configuration?
@justmeandopensource · 3 years ago
Hi Rachneet, thanks for watching. HAProxy is used externally to load balance the traffic to the control planes, while Traefik is used within the cluster to load balance and route traffic between internal services.
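For reference, the external HAProxy piece is roughly this (a sketch; the master IPs are assumed from this video's Vagrant range):

frontend kubernetes-frontend
    bind *:6443
    mode tcp
    option tcplog
    default_backend kubernetes-backend

backend kubernetes-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server kmaster1 172.16.16.101:6443 check fall 3 rise 2
    server kmaster2 172.16.16.102:6443 check fall 3 rise 2
    server kmaster3 172.16.16.103:6443 check fall 3 rise 2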
@rachneetsachdeva4 · 3 years ago
@@justmeandopensource Oh, I see. But can't we use a Traefik ingress controller?
@justmeandopensource · 3 years ago
@@rachneetsachdeva4 There is nothing stopping you from using Traefik as the ingress controller in your k8s cluster.
@rachneetsachdeva4 · 3 years ago
@@justmeandopensource Thanks for clarifying.
@justmeandopensource · 3 years ago
@@rachneetsachdeva4 No worries.
@MrDungvh · 3 years ago
Thanks for your tutorials. Could you please cover a scenario with two cluster groups, and how to control each one from your host machine? I don't know how to customize the cluster name.
@justmeandopensource · 3 years ago
Hi Vuong, do you mean running two k8s clusters on my host machine and accessing them?
@MrDungvh · 3 years ago
@@justmeandopensource Yes, please do that scenario.
@justmeandopensource · 3 years ago
That should be simple. You can point to a different kubeconfig via the KUBECONFIG env variable or by passing --kubeconfig to your kubectl commands. Or you can merge the kubeconfigs into one and switch contexts when working with different clusters.
@MrDungvh · 3 years ago
@@justmeandopensource Ok. But my problem is that the two clusters have the same name, and I don't know how to change a cluster's name. Could you make a video on how to create two clusters with different names?
@testingutopia · 3 years ago
@@MrDungvh You need to read more about the config file used by kubectl. The command kubectl cluster-info tells you about the current cluster being used in the config context. Keywords you should use to research: kubectl config, kubectl context.
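A short sketch of those keywords in action (file paths and context names are illustrative; kubeadm clusters all default to the context name kubernetes-admin@kubernetes, which is why two of them clash):

export KUBECONFIG=~/.kube/cluster1.yaml:~/.kube/cluster2.yaml
kubectl config get-contexts                       # everything kubectl can see
kubectl config use-context cluster2-admin

# Rename the clashing context instead of "renaming the cluster" itself
kubectl config rename-context kubernetes-admin@kubernetes cluster1
kubectl config view --flatten > ~/.kube/config    # optional: merge into one file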
@SuperMati9999 · 2 years ago
Hi there. Very good video. I have one professional question: I would like to run HA Kubernetes in production; what would you recommend renting? A dedicated server with these VMs inside, or a number of virtual private servers in different parts of the world? I can only use bare metal; I can't use cloud Kubernetes.
@chanveasna_noun · 3 years ago
Thank you so much for the great tutorial. Can you share which terminal tool you use?
@justmeandopensource · 3 years ago
Hi, thanks for watching. kzbin.info/www/bejne/hoa6n3aYp56WhJo
@yusefsherief · 11 months ago
Can we use a hardware load balancer instead of HAProxy?
@justmeandopensource · 11 months ago
Hi, thanks for watching. A hardware load balancer is overkill in this setup. Why would you want to go the hard way?
@Rectapanca · 2 years ago
How do I set up a domain + SSL for the keepalived virtual IP?
@prabhujeeva2228 · 2 years ago
Hi, what about the high-availability plan for etcd?
@colossuselka-zc7hb · 1 year ago
Hi, I love your channel. I followed your tutorial and ended up with identical internal IPs, and I keep getting "error: error upgrading connection: unable to upgrade connection: pod does not exist" while trying to access or port-forward nginx. Could you help me with that?
@fork04 · 3 years ago
I wonder if you would like to make a video about installing Percona XtraDB Cluster 8.0?
@arnabdas5166 · 2 years ago
Hi Venkat, thanks for the tutorial, it really helps a lot. But I am facing one issue: when I run kubeadm init with the VIP address, it fails, although I can see the VIP on my HAProxy's base eth port and it works perfectly fine. If I stop one of the HAProxy services, the VIP also switches to the other server. Can you suggest anything?
@justmeandopensource · 2 years ago
Hi Arnab, thanks for watching. How is it failing? The kubeadm init command should print some meaningful errors.
@arnabdas5166 · 2 years ago
I am using AWS EC2 instances. The VIP is unreachable from my Kubernetes server even though they belong to the same subnet.
@justmeandopensource · 2 years ago
@@arnabdas5166 I haven't tried this in the cloud, where you can't control the IP address assignment.
@mukeshvooka8725 · 2 years ago
@Arnab Das, have you finished this assignment using EC2 instances? Did it work? Can I follow this approach for EC2 instances? Please reply.
@allisondealmeida · 2 years ago
Hello, I'm creating a Kubernetes cluster, but HAProxy keeps losing connections with the control planes; the haproxy pods keep going into CrashLoopBackOff. Could you help me fix this? I'm using only one server for load balancing.
Hi Venkat, first of all, great video! It helped me a ton! I was just wondering about something: I tried a similar setup but with two master nodes instead of three, and once I brought one of the master nodes down (shut down the machine), I was not able to access the K8s API from the second master node. Do you know why that is? With three master nodes everything works perfectly. Another question I have is about using the load balancer machines as NFS servers: would you recommend such a solution or not, and how would you implement NFS storage from a high-availability perspective?
@justmeandopensource · 3 years ago
Hi Cezar, thanks for watching. With the setup explained in this video, I haven't tried it with two master nodes, as that isn't a proper cluster anyway, but I can surely test it. And NFS doesn't pair well with Kubernetes persistence; it isn't distributed and fault tolerant by default. You can look at cloud-native storage solutions like OpenEBS, Ceph/Rook, Longhorn, or GlusterFS. I have done videos on a few of them. Longhorn cloud native distributed storage: kzbin.info/www/bejne/iXWsaoeirpqMetE Glusterfs fundamentals: kzbin.info/www/bejne/f3iopYmPnZV2aNE Glusterfs in Kubernetes: kzbin.info/www/bejne/bGa7gJ-XerepoNk
@mudasir2168 · 2 months ago
I followed the same setup; however, when one haproxy server is up with the virtual IP assigned, the haproxy service on the other server fails to start, with an error that it cannot bind the IP address because it is already in use. Is that normal behaviour, or have I missed something?
@spatil6884 · 3 years ago
Hey, can you please make a video on how to install Argo CD on EKS from scratch? I couldn't find any video on YouTube.
@justmeandopensource · 3 years ago
I can try.
@spatil6884 · 3 years ago
@@justmeandopensource We are eagerly waiting.
@shivbratacharaya4199 · 3 years ago
Hi Venkat, I faced an issue while adding the 2nd master node: etcd and kube-apiserver are in CrashLoopBackOff. Can you please guide me?
@justmeandopensource · 3 years ago
Is your first master node, where you ran kubeadm init, all fine? You can always do kubeadm reset followed by kubeadm join on the other nodes.
@shivbratacharaya4199 · 3 years ago
@@justmeandopensource Yes Venkat, the 1st master is fine; all pods on it are running fine.
@shivbratacharaya4199 · 3 years ago
@@justmeandopensource Even now the cluster shows both nodes in Ready state, but the etcd and api-server pods for the 2nd node are in CrashLoopBackOff.
@justmeandopensource · 3 years ago
@@shivbratacharaya4199 In that case, do a kubeadm reset on the failed master, then restart or recreate that VM and try again. If you used the Vagrant provisioning, all master VMs should be identical.
@shivbratacharaya4199 · 3 years ago
@@justmeandopensource It's identical; I deployed them using Vagrant only. I've done the deployment multiple times but I'm not getting any clue now.
@benkaplan8596 · 3 years ago
Can you make a tutorial on making ISP proxies on Google Cloud, and creating a mass amount, not just 8?
@petrivanov1598 · 3 years ago
Venkat, thank you for the great video. But for this video, and all the other HA ones, I think a better solution would be HAProxy for admin access to the masters only, with a local NGINX installed on each worker. HAProxy is a single point of failure for everything anyway (even with 2 or 3 of them): if HAProxy is lost, the whole cluster stops working. With local nginx, on the other hand, no matter how many worker or master nodes we lose (and even if all the haproxies are gone), the remaining workers still know how to communicate with the (balanced) masters; haproxy then handles only admin access to the masters. Maybe you can later update the Ansible lesson with this configuration (nginx on the workers, haproxy for the admin API).
@UdaySingh-im4hd · 3 years ago
Hi, in my case (RHEL 8), both of my LBs have the VIP. Is that a problem?
@muthuaravindthirumalaikuma5716 · 1 year ago
How does the multi-master setup behave with etcd?
@dineshbabushankar848 · 1 year ago
Did you get the answer for that? Please let us know.
@mukeshvooka8725 · 2 years ago
Hi Venkat, can I use the same setup in AWS using EC2 instances?
@mukeshvooka8725 · 2 years ago
Please reply
@justmeandopensource · 2 years ago
Hi Mukesh, thanks for watching. I haven't tried this setup in a cloud environment, but I think you will have problems with the virtual IP, as you don't have control over IP address assignment in the cloud.
@mukeshvooka8725 · 2 years ago
@@justmeandopensource Hi Venkat, I need to implement this in a public cloud like AWS or Azure. Please let me know how I can manage the IP address assignment. Please help.
@DenisZaletov · 3 years ago
Are you using the xenial kube repo for Ubuntu 20.04?
@justmeandopensource · 3 years ago
Yeah, I know that's odd, but that's what is suggested in the official docs: kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
@surendarmurugesan9797 · 3 years ago
Hi Venkat, can you please start videos on OpenShift?
@justmeandopensource · 3 years ago
Hi, that's on my list, but I am not sure when I'll get to it. Will try my best. Cheers.
@i7HX · 2 years ago
Thanks.
@siddhantjain1035 · 3 years ago
Thank you so much for such great tutorials on k8s. I have one query: can we set up an HA cluster using only keepalived, without HAProxy or any other load balancer? For example, if I set up keepalived on all my master nodes and init the cluster using the virtual IP, will that work?
@weilunyi · 1 year ago
Note: keepalived explained at 09:00.
@faridakbarov4532 · 3 years ago
Hi Venkat, where are you, bro? We miss your videos)) Is everything ok?
@justmeandopensource · 3 years ago
Hi Farid, thanks for checking in. I am guilty of not posting videos regularly these days. I will try to get back to the routine of weekly videos, hopefully from the coming week. I've been so busy the last few weeks. Cheers.
@batbeo7423 · 6 months ago
Thank you for your amazing video. I have an issue that I spent my whole day on and still cannot debug; I hope you can help me out. The load balancers work just fine, but the masters don't. The issue is: when I shut down any one of my master nodes, the cluster seems to go down too. I cannot use kubectl; if I try, it throws the error "nodes is forbidden: User "kubernetes-admin" cannot list resource "nodes" in API group "" at the cluster scope". Sometimes it throws another error that says "etcdserver: request timed out". Sorry for my terrible English. Hope you have a great day.
@venkatkishore1277 · 2 years ago
Your explanation is awesome; it helped me a lot... But for me, the virtual IP is not accepted during kubeadm init --control-plane-endpoint=<VIP>:6443. Can you please help me understand the virtual IP concept and how to pick one? I am actually using EC2 instances here.
@justmeandopensource · 2 years ago
Hi, thanks for watching. I haven't attempted this in the cloud, where the handing out of IP addresses is not under your control. This was done on my local machine.
@mukeshvooka8725 · 2 years ago
@Venkat Kishore, have you completed this setup using EC2 instances? Did it work fine on EC2? Can I follow this approach on EC2 instances? Please reply; it will save my time.
@deshipino · 3 years ago
It's still a single point of failure. You should have at least one more worker node there.
@justmeandopensource · 3 years ago
Good point, yes, we definitely need more worker nodes. I was focusing on control-plane HA in this demo, as I am limited in the number of VMs I can run on my laptop.
@deshipino · 3 years ago
@@justmeandopensource Makes sense! I'm wondering if you could make a video covering Persistent Volume availability across the nodes; that would be great!
@sahaniarunitm · 3 years ago
Where were you, brother?
@limkeke1639 · 8 months ago
Hi.
@RameshKumar-rt8xb · 1 year ago
Even this is not highly available... Just imagine AWS going down. How could we set up a cluster with multiple nodes from different networks, like some nodes on Azure, Oracle, and GCP?
@phalla6646 · 3 years ago
Very well explained; I love your videos. Anyway, I have a couple of questions: do you have a basic Vagrant tutorial? How do you set up your local environment to run this? I want to practice. Is your PC's OS Windows or Linux? Thanks.
@justmeandopensource · 3 years ago
Hi Phalla, thanks for your interest in this video. I don't have a basic Vagrant tutorial; you can just learn from examples. I explained in this video how I set up my environment: all I do is run vagrant up in the directory where I have the Vagrantfile. All you need is Vagrant and VirtualBox. It can be Windows, Mac, or Linux; the same Vagrant setup works everywhere. I use Linux on my laptop. Cheers.