How did you make the terminal windows connected to each other like that?
@ravindranaths513 · 5 days ago
Is that book application internally adding the end-user defined header while making requests to the service?
@shivangpithadiya6695 · 8 days ago
Hello Venkat! I have one scenario: I have a k8s cluster running a service mesh, and I configured Kiali using the addons. I deployed my application in the default namespace with Istio injection enabled. I can see the traffic flow up to the workload, but I need to see my pod in the traffic-flow graph as well. Is that possible, and if yes, how?
@kaellith · 10 days ago
I didn't know Miguel knew his way around Linux
@funkiam9214 · 11 days ago
Very nice video. I just have one question: what will happen if you restart the etcd nodes when you have the parameter set to "new"?
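Presumably the parameter in question is etcd's --initial-cluster-state. It is only consulted when a member bootstraps for the first time; once the data directory exists, a plain restart rejoins the existing cluster and the bootstrap flags are effectively ignored. An illustrative invocation (the name, IP and path are made up):

```shell
# First bootstrap: "new" tells this member to form a fresh cluster.
# On later restarts with the same --data-dir, membership comes from the
# data directory, not from these --initial-cluster* flags.
etcd --name node1 \
  --initial-cluster node1=http://10.0.0.1:2380 \
  --initial-cluster-state new \
  --data-dir /var/lib/etcd
```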
@primersigno · 17 days ago
Thank you very much from Colombia, excellent, I had the problem for days!!
@ahmed-samer · 18 days ago
great playlist, thanks man
@justmeandopensource · 18 days ago
Thanks for watching.
@janiel471 · 18 days ago
Thanks for the great tutorials. Btw, your browser looks strange. What browser is it? 😅
@imagineabout4153 · 18 days ago
What a mad lad you are my man <3. Thank you
@GAURAVAREGE · 19 days ago
Thank you very much bro for this video.
@justmeandopensource · 19 days ago
Always welcome. Thanks for watching.
@imagineabout4153 · 19 days ago
Incredible content.
@justmeandopensource · 19 days ago
Thanks for watching.
@MKH-92 · 19 days ago
Can't access the dashboard
@justmeandopensource · 19 days ago
Can you provide more details?
@MKH-92 · 19 days ago
@justmeandopensource Well, it just gives me "This site can't be reached. localhost refused to connect. Try: checking the connection, checking the proxy and the firewall. ERR_CONNECTION_REFUSED". Also, how do you save the edits in the svc, e.g. the part where you edit the nodePort? It's going a little too fast.
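As for saving the edit: `kubectl edit svc` applies the change when you save and quit the editor, but the same change can be made non-interactively, which is easier to follow along with. A sketch (the namespace, service name and port below are placeholder values, not from the video):

```shell
# Switch the service to NodePort and pin an explicit port.
# "kubernetes-dashboard" and 32000 are example values only; in JSON
# Patch, "add" on an existing field simply replaces its value.
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard --type=json \
  -p='[{"op":"add","path":"/spec/type","value":"NodePort"},
       {"op":"add","path":"/spec/ports/0/nodePort","value":32000}]'
```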
@shiningshaan · 22 days ago
Hello Sir, how can we automate the ArgoCD app deployment using Jenkins?
@sosereyboth1469 · 23 days ago
Dear brother, it's great teaching. I followed your instructions and faced a private-key authentication problem as shown below. Can you help?

==> kmaster: Waiting for machine to boot. This may take a few minutes...
    kmaster: SSH address: 127.0.0.1:2222
    kmaster: SSH username: vagrant
    kmaster: SSH auth method: private key
    kmaster: Warning: Connection reset. Retrying...
The guest machine entered an invalid state while waiting for it to boot. Valid states are 'starting, running'. The machine is in the 'aborted' state. Please verify everything is configured properly and try again. If the provider you're using has a GUI that comes with it, it is often helpful to open that and watch the machine, since the GUI often has more helpful error messages than Vagrant can retrieve. For example, if you're using VirtualBox, run `vagrant up` while the VirtualBox GUI is open. The primary issue for this error is that the provider you're using is not properly configured. This is very rarely a Vagrant issue.
@prateeksarangi9187 · 23 days ago
Great Video
@justmeandopensource · 23 days ago
Thanks for the visit
@rudypieplenbosch6752 · 23 days ago
Amazing explanation 😎
@justmeandopensource · 23 days ago
Thanks for watching.
@piotrwawrzen2113 · 26 days ago
Could you advise how to use Terraform with Rancher to provision an RKE2 cluster? All kinds of guidelines are welcome :)
@BilalAmjad-pj8kf · 28 days ago
Sir, I'm not able to browse the ArgoCD UI.
@huongminh196 · 29 days ago
17:14 I get this error when running `kubectl cluster-info`. Can you help me? error: cluster "minikube" does not exist
@huongminh196 · 29 days ago
I realized it's because I had installed minikube and aliased kubectl to `minikube kubectl`. After `unalias kubectl`, it worked.
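For anyone hitting the same thing, a quick way to check whether `kubectl` is being shadowed by a shell alias (a sketch; your alias text may differ):

```shell
# If kubectl was aliased (e.g. to "minikube kubectl --"), "type" reveals it
type kubectl

# Drop the alias for the current shell session
unalias kubectl 2>/dev/null

# Confirm the name now resolves to the real binary on PATH
command -v kubectl
```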
@user-iz9zp7ko1n · 29 days ago
I'm currently working on a kubeadm k8s cluster that currently uses static provisioning. I got a requirement to configure replica sets for my MongoDB. Can't we do it without dynamic provisioning? What if I create PVs manually on all worker nodes?
@mostafavii · a month ago
Thanks, you're a life saver! I like the way you used the documentation. Keep up the good work.
@justmeandopensource · a month ago
Glad it helped! Thanks for watching.
@monwabisisithaba7803 · a month ago
Top video. Thank you for all your work, greatly appreciated.
@g-nice_pimp · a month ago
Thanks for the video! It would be awesome if you could give a heads-up about the Traefik 3.0 update. I banged my head for a few minutes before I found out I was using Traefik 3.0, which has a few slight differences from the previous version, like the apiVersion for example.
@justmeandopensource · a month ago
Sure. Thanks for watching.
@palanikumar4150 · a month ago
I have configured MetalLB and it assigned an external IP, but I am not able to access my nginx application using the external IP from the browser. I am able to access it with the node IP. Please suggest how I can access it using the external IP from the browser.
@karamjeetsinghpadam507 · a month ago
This is the best explanation I've heard. Thanks for making this video free for people struggling with this topic.
@mohamedelsherif7388 · a month ago
Thanks for your video. I've tried this, but it doesn't revoke the old config file; both are working. I need to invalidate the config because it was compromised.
@harshananayakkara4854 · a month ago
Hi, I have 2 K8s clusters, each with 1 master and 2 workers. I installed Cilium, enabled Cluster Mesh and joined them together. I switch between the clusters using contexts from a separate Ubuntu machine. I also enabled Hubble UI in both clusters on the master nodes. However, when listing the Hubble nodes, only the cluster 1 nodes show as connected; all cluster 2 nodes show as unavailable. I ran `cilium hubble port-forward &` while in the cluster 1 context. Is that why the cluster 2 nodes are shown as unavailable? Could you please let me know where I went wrong? Thanks!
@krishnayaswanth2608 · a month ago
Hey, I have a doubt: when the storage class spec has the parameter archiveOnDelete: true and the reclaim policy is Retain, will the storage class create an archive directory when the PV/PVC is deleted?
@LunaSicilian · a month ago
archiveOnDelete keeps the data on the NFS host under the name "archive-<pv-name>" after the PV is deleted.
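A minimal sketch of a StorageClass with that parameter, assuming the nfs-subdir-external-provisioner (the class name and reclaim policy here are illustrative, not from the video):

```shell
# Hypothetical StorageClass: with archiveOnDelete set to "true", the
# provisioner keeps the backing directory on the NFS server (renamed as
# an archive) instead of wiping it when the PV is deleted.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-archive            # placeholder name
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Delete
parameters:
  archiveOnDelete: "true"
EOF
```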
@sukeshragi · a month ago
I get the following error after 20 minutes or so while booting the kworker1 node. I'm running this script on Windows.

kworker1: SSH username: vagrant
kworker1: SSH auth method: private key
Timed out while waiting for the machine to boot. This means that Vagrant was unable to communicate with the guest machine within the configured ("config.vm.boot_timeout" value) time period.
@limkeke1639 · a month ago
Hi.
@GravityDevops · a month ago
I think ArgoCD has better visualization.
@fuliaamiya · a month ago
Very straightforward and specific to the solution, yet detailed. It is really helpful.
@justmeandopensource · a month ago
Thanks for watching.
@mehdibakhtyari5861 · a month ago
Thanks in advance for the great tutorials. By any chance, is there any load balancer that can support Diameter and be used inside a K8s cluster?
@Rpgramesh · a month ago
Nice explanation of KOPS; thanks bro 😀
@justmeandopensource · a month ago
Glad you liked it
@tonychia2227 · a month ago
How do I enable this on EKS?
@krunal4baps · a month ago
Hi, I have a pfSense router at home and a bare-metal K8s cluster on Ubuntu with a 2-master, 2-worker setup. I configured BGP on pfSense and I believe it's good: the logs show the assigned LoadBalancer IPs, but somehow I can't reach them from the browser. I tried multiple times, but something is missing somewhere; any clue where I should be looking? The external IPs are assigned correctly in the `get service` output and on pfSense. The BGP logs show the external IP with the worker nodes as the next hop, but when I put it in the browser, I get a bad gateway!
@devops-vidyaarthee · a month ago
Hi, have you created any video on how to add an extra external etcd node to an HA cluster after the k8s cluster is created? For example, a month into running my self-managed cluster, I might want to add one more etcd node for some reason, or delete one etcd node and replace it with a new one. If not, could you please create a video on that?
@ravindranaths513 · a month ago
Please make the next video on Authentication & Authorization, as you said in the first video - [ Kube 50 ] Installing Istio in Kubernetes Cluster kzbin.info/www/bejne/jXfYaYKCjbp4irs
@ravindranaths513 · a month ago
Hi, from where can I download these YAML files?
@praveenkore842 · 2 months ago
Hi Venkat, can you please have a look at 'teleport' for securing the environments?
@anvicom · 2 months ago
Imagine I have 7 virtual machines (VMware) and I want to host 3 master nodes, 3 worker nodes and 1 load balancer. How do I configure this setup?
@paulfx5019 · 2 months ago
Great tutorial! Does this mean that with Cilium one no longer needs MetalLB for an on-premises K8s cluster?
@kks2105 · 2 months ago
Thank you for the clear explanation. Devs who are jumping to sharding in mongodb should definitely have a look. Thank you.
@justmeandopensource · 2 months ago
Glad it was helpful! Thanks for watching Kishore.
@HikmatUllah-jp4zo · 2 months ago
@justmeandopensource Is HAProxy mandatory for creating multi-master Kubernetes clusters?
@vitusyu9583 · 2 months ago
I would like to know which SSH terminal tool you use in the video.
@yousufturkey9273 · 2 months ago
Hi Vivekjain, thanks for the nice video. I have a question: if the master node and worker nodes are replaced after the kops upgrade command, how will the payload be migrated? Do we have to re-run the deployment, or is it done automatically?
@tylorbillings4065 · 2 months ago
Shouldn't there also be a NodePort service created to allow access from the HAProxy to the ingress controller?
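One way that could look, assuming an ingress-nginx controller (the names, selector and ports below are placeholders, not taken from the video):

```shell
# Hypothetical NodePort service exposing the ingress controller so an
# external HAProxy can forward traffic to <nodeIP>:30080 / <nodeIP>:30443.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-nodeport   # placeholder name
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      nodePort: 30080
    - name: https
      port: 443
      nodePort: 30443
EOF
```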
@TM-Cloud · 2 months ago
Please, why are you provisioning Vagrant VMs first on the host before running LXD/LXC, given that LXD/LXC does not need a hypervisor, like you mentioned?
@TM-Cloud · 2 months ago
What you did is Bare Metal > Linux OS > Vagrant > VMs > LXC. Doesn't this contradict the fact that LXC does not need a hypervisor?
@TM-Cloud · 2 months ago
In the first video of the playlist, the diagram was Bare Metal > LinuxOS > LXC/LXD API > GuestOSs
@KingsDev-xe2wo · 2 months ago
Hey, thanks for the amazing tutorial. After setting up this HA cluster, I tried to start only one load balancer and one master, and kubectl failed. It works fine if I keep two masters running. Can you help me check? Thanks.