Learn Authentik in 10 Minutes
11:07
Learn Vault in 10 Minutes
15:49
6 months ago
Learn Helm in 10 Minutes
12:23
7 months ago
Comments
@sandeeppandey5364 2 days ago
Great video!!
@Drewbernetes 1 day ago
Thanks!
@felipe88alves 13 days ago
What do you do when your ISP changes your IP? Do you just manually update it on Cloudflare every time that happens?
@Drewbernetes 12 days ago
I just use ddclient on one of my linux machines. This will detect if a change has occurred and update the record in Cloudflare if required. Cloudflare have a list of tools you can use with them, here - developers.cloudflare.com/dns/manage-dns-records/how-to/managing-dynamic-ip-addresses/. If I recall correctly, ddclient supports a few different providers.
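For illustration, a minimal /etc/ddclient.conf for Cloudflare might look something like the below - the zone, record and token are placeholders, and option names vary a little between ddclient versions, so check the docs for yours:

  # /etc/ddclient.conf (hypothetical example)
  daemon=300                           # re-check every 5 minutes
  use=web, web=checkip.dyndns.org      # discover the current public IP
  protocol=cloudflare
  zone=example.com
  login=token                          # account email, or 'token' when using an API token
  password=your-cloudflare-api-token
  home.example.com                     # the record to keep updated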
@AbbasiMohamad 15 days ago
You speak so fast!
@Drewbernetes 14 days ago
Yeah I do need to control my speed :-D
@JustinPerez-z8f 15 days ago
Hey, I’m running into an issue. Every time I try to start the control plane node, it gets stuck during the API check and eventually exceeds the deadline. I’ve tried using Kubernetes and also set up Calico, but I still can’t get past this problem with the API server.
@Drewbernetes 14 days ago
Hi! So unfortunately there can be a raft of issues which may stop the API server coming up - from a node name not being correct or matching what it expects, to network issues, to a misconfiguration and more. Is there anything specific in the kube-apiserver logs that stands out? I get that it is timing out, but if you use crictl on the node (presuming you're using containerd of course) then you can get the list of containers, one of which will be the API server, and then get the logs for that container. That will highlight the reason why it is failing to come online for you. The logs will point you in the right direction of what to check next. :-) Hope that helps you solve it!
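For anyone else hitting this, a rough sketch of that crictl flow (the container ID is a placeholder you'd copy from the first column of the output):

  # list all containers, including ones that have exited/crashed
  sudo crictl ps -a | grep kube-apiserver
  # fetch the logs using the container ID from the first column
  sudo crictl logs <container-id>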
@dalidavila 15 days ago
Couldn't be simpler, thanks man!
@Drewbernetes 14 days ago
No problem!
@Intaberna986 1 month ago
10:35 Mate, I was going bonkers for a week trying to set up a HA cluster until I came across your channel. You can edit as you see fit, but I've gone through lots of videos and this is perfect.
@Drewbernetes 1 month ago
Haha nice one! Yeah, it was one of those scenarios where I considered chopping it out or recording it again but in the end thought: "Naaa, leave it in". It's good for people to see errors really - we all make them, and anyone who pretends they are flawless on YouTube... well, they're not :-D Glad it helped though!
@ecuas_7 1 month ago
Thank you so much!
@Drewbernetes 1 month ago
You're welcome!
@chromerims 1 month ago
SURPASSING 👍 Liked and subbed!
@Drewbernetes 1 month ago
Thanks! Welcome 🙂
@chromerims 1 month ago
@@Drewbernetes May I cordially ask... what was your reason(s) for 3 RAIDs - md0, md1, and md2 - instead of just one RAID with 3 logical volumes atop a single LVM volume group? Edit to add: I wonder whether mirrored SSDs will properly wear-level with a RAID 1 swap space being written to? Kindest regards, friends and neighbours. P.S. I am inclined to just make a single RAID, a single volume group on top of that, and then as many logical volumes as needed from there. Yes, for UEFI, tick 'use as boot drive' before everything else.
@Drewbernetes 1 month ago
@@chromerims You sure can! I'll be totally honest, it's a habit from my DC days to reduce blast radius should one array fail! That's literally it 😀. The way you're looking at doing it is 100% fine! There will be (very minor) performance tradeoffs doing it that way but it's negligible for the most part. But there are pros such as simplicity for partition management should you decide to increase the disk. What I've done isn't "the right way", it's just one of many ways. Hope that answers the question!
@chromerims 1 month ago
@@Drewbernetes Thank you 👍
@Drewbernetes 1 month ago
@@chromerims No worries!
@kovahcastle 1 month ago
Thanks for this great straightforward guide!
@Drewbernetes 1 month ago
No problem at all! Thanks for watching 🙂
@anilpatel-ds3nx 1 month ago
Hey Drew, thanks - great video. I've been trying to get a HA setup going for a few days now. After a few goes I ran into problems: with Kubernetes versions 1.29, 1.30 and 1.31 the control plane doesn't initialise, but with 1.28 it just works every time. Would appreciate any feedback if you or anyone else has any thoughts. Thanks
@Drewbernetes 1 month ago
Hi! The process should generally be the same no matter which version you're using however the config may change slightly. It could be something as simple as a Feature Gate not being supported anymore or being moved into GA. What's happening when you start the process? Where is it failing? Depending on where in the process it fails for you, you should be able to check things like the kube-apiserver logs and the kubelet logs. These are your two main sources of errors.
@anilpatel-ds3nx 1 month ago
@@Drewbernetes Thank you for the quick feedback - it doesn't seem to add the second IP address for kube-vip on the NIC, but I will dig into the logs more as you suggested. Oh, by the way, great channel - I've gone through most of your videos :)
@Drewbernetes 1 month ago
@@anilpatel-ds3nx Thanks very much! Yeah have a check through those logs. Also check out the logs for kube-vip too. I've updated my Kubernetes installation to 1.31.0 today to check things over and everything is working as I would expect, so it does definitely work 🙂. However that's an upgrade from 1.30.4 where it was already installed. It's not a fresh installation. That being said if it wasn't going to work, it wouldn't work on the upgraded cluster either 🙂
@boubadeus 1 month ago
Thanks a lot for the explanation. In my case, I have an issue at the end of the OAuth2 configuration: when I click on 'Finish', nothing happens... I tried another browser, but nothing. After some research, I'm not the only one, and I may have to consider reinstalling AuthentiK. By the way, thanks 👍. I'm on version 2024.8.0, and I may need to downgrade to 2024.6.4.
@Drewbernetes 1 month ago
No problem! I've not actually upgraded to 2024.8.0 yet - I planned to this week/weekend so I'll have to keep an eye out for that bug.
@MahmoudMohamed-e9w 1 month ago
Great explanation, thank you Drew
@Drewbernetes 1 month ago
No problem at all!
@delioardiente5611 1 month ago
Nice video ❤❤❤❤❤❤
@Drewbernetes 1 month ago
👍
@dalidavila 1 month ago
Thanks a lot bro, watching all of this series now.
@Drewbernetes 1 month ago
No problem at all. Enjoy!
@HansPeterSloot 2 months ago
Excellent indeed. Don't understand why the number of views isn't much higher.
@Drewbernetes 2 months ago
Thanks so much! However, I'm the worst promoter in the world, which probably doesn't help 🤣. I post on all the socials once when the videos are released and then move on as I don't really use social media! Maybe I should spend some more time on shameless self-promotion to push it out more though. I should get inspired by Bret Fisher, NetworkChuck and Jeff Geerling! 🤔
@HansPeterSloot 2 months ago
@@Drewbernetes Yeah, I know them all. At least I have found you now anyway.
@HansPeterSloot 2 months ago
Super! The best I have seen so far
@Drewbernetes 2 months ago
Thanks very much!
@madhavamelkote4554 2 months ago
Brilliant video, absolutely perfect.. subscribed!!!
@Drewbernetes 2 months ago
Thanks very much!! Welcome!
@CBHAppleConsulting 3 months ago
How does the disk configuration change for RAID 5? In 22.04 and 24.04 do we still need the swap partition? I thought they were using a swap file now?
@Drewbernetes 3 months ago
Hi, on the face of it the disk configuration remains the same. You'd need a minimum of 3 disks for RAID 5 though, and all would need to be set as bootable. However, I would strongly recommend against using RAID 5 on any disks that are larger than 1TB in size. It can cause all sorts of rebuild issues and you'd be better off using RAID 10 (1 + 0). As for swap: yes, if you don't create a swap partition then it will indeed create a swap file by default, that is correct! I'll be honest though, I don't even bother creating swap space of any kind these days. Machines have so much memory now that I can't actually remember the last time I ran out of memory and hit my swap space. If I was installing Linux on a laptop I'd still have it for the sake of hibernation/sleep, but on a server I don't see the need really :-) Maybe I'll do a RAID 5 example soon. I need to update this video for Ubuntu 24.04 anyway ;-)
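If you ever need to build the same thing outside the installer, a rough mdadm equivalent looks like the below (the /dev/sdX names are examples - check yours with lsblk first):

  # create a 3-disk RAID 5 array
  sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
  # watch the initial sync progress
  cat /proc/mdstat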
@pfsykes 3 months ago
Awesome... Clear, concise and to the point every time. Keep it up - I could listen to you for hours on any topic
@Drewbernetes 3 months ago
Thanks very much, @pfsykes!
@emilmihailpop6162 3 months ago
Hi! Thank you for a very good video. I want to ask: should the kube-api-server and apiserver-advertise-address IPs be different, or can they be the same?
@Drewbernetes 3 months ago
Hi, thanks very much. So the "kube-api-server" in this video is just a DNS alias that resolves to Kube-VIP's virtual IP address. This is like having a real domain pointing to a load balancer. It also means your certificates are generated using that domain name instead of an IP, allowing you to change the underlying IP without having to regenerate certificates. For example, say you owned the domain my-kube-cluster.example.com: you could point that to a load balancer which had an IP of 1.2.3.4, and this would then route traffic through to (in this case) the three nodes that run as control planes, which may have 192.168.0.201-203 as their (local) IPs. The advertise-address is the IP address that is used by the other member nodes of the cluster, so in my case it's the local IP address of the 1st control plane node. So yes, in theory you could use the same IP address for both fields, but it's not recommended in a HA setup as all traffic will hit one node before being directed to the correct location. If you were setting up a single control plane node it would be fine to do this, but to be honest I'd still set up Kube-VIP and use a DNS record (or a hosts file adjustment as I do in this video) as it would allow me to:
A. add more control plane nodes at a later date
B. change the IP address without having to regenerate all of the cluster certificates
I hope that helps and clarifies this!
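To make that concrete, here's a rough sketch of the relevant kubeadm config fields using the example IPs above (a hypothetical illustration, not the exact file from the video):

  # kubeadm-config.yaml (sketch)
  apiVersion: kubeadm.k8s.io/v1beta3
  kind: InitConfiguration
  localAPIEndpoint:
    advertiseAddress: 192.168.0.201              # this control plane's own local IP
  ---
  apiVersion: kubeadm.k8s.io/v1beta3
  kind: ClusterConfiguration
  controlPlaneEndpoint: "kube-api-server:6443"   # DNS alias resolving to the Kube-VIP VIP

Because controlPlaneEndpoint goes into the certificate SANs as a name rather than a raw IP, you can re-point it later without regenerating the certs.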
@eyahou9375 3 months ago
thank youuuuuu
@Drewbernetes 3 months ago
No problem at all!
@Michael_Hyb 4 months ago
Thank you very much for your very clear, structured and easy-to-follow guidelines. I am looking forward to a time when your valuable inputs, and the time you take to answer comments, will find proper reward for you and every other active contributor. 🙏
@Drewbernetes 4 months ago
Thanks so much! It means a lot that people find this useful and when people take the time to comment - good or bad, it's all taken in and I read every one! It's great to hear you've found this useful!
@MakeAnything- 4 months ago
Mine fails right before I get to the LVM screen, just after the 12:04 screen. I have tried versions 22.04 and 24.04. I can't figure it out.
@Drewbernetes 4 months ago
Hi. That sucks to hear! What is happening for you when it fails? Are you getting any errors? If the installer is crashing or claiming "an error occurred" then it's usually, in my experience, a sign of a corrupted install ISO - though with it happening on both of them, that seems unlikely too. I can promise this process 100% works though 🙂. I've done this countless times and others have used this method too, as evidenced by their comments on here. I can't speak for 24.04 yet, as I'm yet to go through it on there, believe it or not! I imagine the process is the same though.
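One quick way to rule a corrupted ISO in or out (the filename here is an example - use whichever image you downloaded, and compare the output against the SHA256SUMS file on releases.ubuntu.com):

  sha256sum ubuntu-24.04-live-server-amd64.iso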
@po6577 5 months ago
Thanks! Possible to have an updated video which uses security contexts?
@Drewbernetes 5 months ago
No problem :-) I do have a video on Security Contexts already over here -> kzbin.info/www/bejne/gIq5dJemjq2Insk Is this what you were looking for?
@devinmina5170 5 months ago
Very clear and easy to follow! Thanks!
@Drewbernetes 5 months ago
I'm glad you found it useful. Thanks!
@bhanusudheer493 6 months ago
Thanks for the videos. Recently we've been using External Secrets - really helpful
@Drewbernetes 6 months ago
Thanks! I plan on doing a video on that ASAP. Authentik next, then I'll dive into the External Secrets Operator integration with Vault.
@pfsykes 6 months ago
Just wanted to say awesome work, keep it up... Your videos are a no-nonsense guide and have helped me solve so many issues 🤩. The GitHub link is broken - 404
@Drewbernetes 6 months ago
Hi and thanks so much! And yeah, silly me! It was still private 😀. I've made the repo public now so you should be able to see it all. Thanks for bringing it to my attention!
@michaelpalumbo559 7 months ago
Great video. I was looking for how to set up RAID and this was very easy to follow. Thank you! One question though: when I do the first step, "Use As Boot Device", I end up with a /boot/efi mount point. Do I still need to create the /boot mount point in that case? I ended up with 4 mount points instead of 3 because of this (details on this screenshot: i.imgur.com/17iqFA3.jpeg):
/
/boot
/boot/efi
SWAP
@Drewbernetes 6 months ago
Hi! Sorry for not responding. It seems I'm still getting the hang of this YouTube thing and it went into the "for review" comments for some reason. Technically no, you don't need to have a boot partition at all. I generally create one to separate it out from the root FS though. Whilst it comes down to a matter of preference on the whole, there are some benefits, such as reducing file system complexity, which can improve the bootup process due to less demand being placed on the bootloader.
@michaelpalumbo559 6 months ago
@@Drewbernetes It sounds like /boot and /boot/efi are not redundant then. Sounds good, thank you for your response. :)
@jaideepnigam 7 months ago
I followed the exact same method, but when I do kubeadm init I am getting the below error (on RHEL7 and ppc64le arch, using kubeadm 1.29 and kube-vip 0.7.2):
Mar 14 14:48:13 dx11520-hs kubelet[40620]: I0314 14:48:13.740056 40620 kubelet_node_status.go:73] "Attempting to register node" node="dx11520-hs"
Mar 14 14:48:14 dx11520-hs kubelet[40620]: E0314 14:48:14.264412 40620 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"dx11520-hs\" not found"
Mar 14 14:48:15 dx11520-hs kubelet[40620]: E0314 14:48:15.934387 40620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"kube-api-server-endpoint:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/dx11520-hs?timeout=10s\": dial tcp 192.168.16.202:6443: connect: no route to host" interval="1.6s"
Mar 14 14:48:15 dx11520-hs kubelet[40620]: E0314 14:48:15.934410 40620 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"kube-api-server-endpoint:6443/api/v1/nodes\": dial tcp 192.168.16.202:6443: connect: no route to host" node="dx11520-hs"
It looks like kube-vip is not able to bring the VIP up. Please give some suggestions. I have added an /etc/hosts entry for the master node on which I am running these commands, as well as a DNS entry for the VIP.
@Drewbernetes 6 months ago
Hi! Sorry for the delay in responding. I'm not sure why, but this comment ended up in the "for review" section and I've only just seen it 🤦‍♂️. So based on the error, it means the node "dx11520-hs" cannot be resolved from the node on which you're running kubeadm init. If you try and ping that hostname, do you get a response? If not, then there is something not working around the hosts file entry you've added. The hosts file approach is one method, but if you have a DNS server that can resolve the hosts, that would be better. I'd start by pinging the hostname first and seeing what you get back. The routing is not working, from what I can see in the error.
@gulamahsan5902 7 months ago
I liked this and I will share it with my team
@Drewbernetes 7 months ago
Glad to hear it! Thanks for watching.
@CraftHound22 7 months ago
I have been so frustrated getting anything done with RAID in Ubuntu, until I saw this video. Thanks!
@Drewbernetes 7 months ago
Glad I could help!
@Aminech1920 7 months ago
kubeadm init fails with IP and DNS - no route to host
@Drewbernetes 7 months ago
Hi! Sorry to hear that. Are you able to supply any more information on this? It should be working if you've followed along with the tutorial. Can you copy and paste the kubeadm init command you're running? Also confirm that the configuration file you're supplying is valid too - it could be a typo that is causing this. That being said, some things you can check from the node are:
ping google.com
dig google.com
nslookup google.com
If any of those fail then you have an issue with the node itself, in which case you'll need to resolve those first before continuing.
@Aminech1920 7 months ago
@@Drewbernetes This is the command I am running:
kubeadm init --control-plane-endpoint vip-k8s-master --apiserver-advertise-address 192.168.1.16
I set the record in /etc/hosts. When I do nc -v 192.168.1.40 6443 I get:
nc: connect to 192.168.1.40 port 6443 (tcp) failed: No route to host
Port 6443 is allowed in ufw. I am using Ubuntu 22.04.2.
@Drewbernetes 6 months ago
Hi, sorry - this comment ended up in "held for review" for some reason. So you have "192.168.1.40 vip-k8s-master" in /etc/hosts? If so, then as long as you've correctly configured the kube-vip steps, this should work. I would recommend running "crictl ps" and reviewing the logs for the containers that were successfully created. Kube-vip creates an additional IP on the interface you've supplied to it, so as long as that's configured and the container is running, it should do that. Also check the interface itself to ensure the IP has been added - "ip a" will list all the interfaces and the addresses associated with them. Hopefully that will help you get to the bottom of why this isn't working for you.
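If it helps anyone else hitting this, a quick checklist along those lines (the VIP, hostname and interface are taken from the comment above - swap in your own):

  # 1. confirm the alias resolves locally
  grep vip-k8s-master /etc/hosts           # expect: 192.168.1.40 vip-k8s-master
  # 2. confirm the VIP has actually been added to the NIC
  ip a show dev eth0                       # look for 192.168.1.40 as a secondary address
  # 3. confirm the kube-vip container is up, then inspect its logs
  sudo crictl ps -a | grep kube-vip
  sudo crictl logs <kube-vip-container-id>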
@filipforst9048 1 month ago
Same issue here
@evelioguaperas 7 months ago
This channel is so underrated! I'm loving the videos, I hope you go more in depth in future installments.
@Drewbernetes 7 months ago
Thanks so much! I hope to keep going for as long as I can on various topics around Kubernetes.
@radokhlebov 8 months ago
life saver 🙏
@Drewbernetes 8 months ago
I'm glad it helped!
@jairuschristensen2888 8 months ago
These installation videos are incredible! I've spent the last two days following along with your three installation videos with my own cluster (and scripting the entire thing). There's no tricks, just you doing it with us, and there's something special about that. Like a humble class TA. Great job, keep it up!
@Drewbernetes 8 months ago
Thanks so much for the kind words! I wanted this series to be exactly that. I wanted to presume no prior knowledge and just take people through the whole process so I'm glad that it's come across that way!
@zaheerhussain5311 8 months ago
Hi, have you made a video on setting up Kubernetes with an external etcd cluster with a VIP?
@Drewbernetes 8 months ago
Hi, this video does use a VIP for the Kubernetes cluster via Kube-VIP, along with an external etcd cluster. Do you mean using a VIP for the external etcd cluster itself? If so, I'm not sure if that's possible (or recommended), to be honest, due to how etcd is intended to be used. Kube-VIP works just as a real-world load balancer would, in that it provides a single IP that you can use to hit any API endpoint.
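For reference, pointing kubeadm at an external etcd cluster looks roughly like this (a sketch - the endpoints and cert paths are placeholders for wherever your etcd nodes and client certs live):

  apiVersion: kubeadm.k8s.io/v1beta3
  kind: ClusterConfiguration
  etcd:
    external:
      endpoints:
      - https://192.168.0.211:2379
      - https://192.168.0.212:2379
      - https://192.168.0.213:2379
      caFile: /etc/kubernetes/pki/etcd/ca.crt
      certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
      keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key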
@letsops 8 months ago
Great series! Like the one for Linux. Thanks Drew for this work. It's making me much more comfortable with these technologies on a daily basis. Can't wait to see the rest.
@Drewbernetes 8 months ago
Thanks very much! It's good to hear people are finding it useful. I'm taking my time with each one this year as I did myself in trying to get one out almost every week throughout last year and the quality wasn't where I felt it could be! Still, there is a Helm video coming up in the next week or so!
@letsops 8 months ago
@@Drewbernetes Maybe it's more a question of conforming to the codes of the YouTube algo than of quality, I think, because your videos are already often clear and well illustrated with examples, which is the most important thing when you want to make explanatory technical videos, in my opinion.
@Drewbernetes 8 months ago
@@letsops Yeah, I certainly don't know the algo, that's for sure! But then I'm not about the numbers or making money from it all. For now it's about trying to teach people things in as clear a way as possible. Now I've done the in-depth videos, I'll look at the "K8S in 15 minutes" and things like that, but they'll be quick vids rather than in-depth technical explanations. I wanted the in-depth stuff done first so they were there for the people who wanted to know more from the quick vids.
@letsops 8 months ago
@@Drewbernetes I totally understand! That's why I mainly enjoyed this series. It's much more comprehensive and detailed than most of the other videos out there.
@Drewbernetes 8 months ago
@@letsops thanks!
@pierreancelot8864 8 months ago
What's the rush?
@Drewbernetes 8 months ago
I rush everything unfortunately! I just have to get things done... and now :-D
@RajasekharSiddela 9 months ago
@Drewbernetes Hey Drew, when I test for cluster health I'm getting "tcp dial: connection refused" - how do I solve this?
@Drewbernetes 9 months ago
Hi! If you're seeing something like Get "localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused then I would start by double checking your kubeconfig to ensure it's configured correctly. It should be pointing to the IP address or DNS (if you have configured one) for KubeVIP. You can target the kubeconfig directly by setting `KUBECONFIG=/path/to/config` or by adding the flag to your kubectl command `--kubeconfig=/path/to/config`. If you're seeing that error above but with the IP or DNS name you've configured then it could be a firewall issue or that the KubeVIP Pod isn't running. In this case, you can rule out KubeVIP first by accessing the Control Plane you initialised first and running the same command whilst using the admin config located at `/etc/kubernetes/admin.conf`. If this works, then it's the firewall so you'll need to configure the firewall either on your nodes or the network to allow the appropriate traffic. If you've followed along with what I've done on Ubuntu in VMs, this should work by default. If it's not the firewall and you believe it to be KubeVIP then you can check via `crictl ps` that it's running. It may need reconfiguring and the manifest regenerating. I hope that helps!
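As a concrete example of that first check (the paths are the defaults from the video - adjust if yours differ):

  # point kubectl at the admin config on the first control plane
  export KUBECONFIG=/etc/kubernetes/admin.conf
  kubectl cluster-info           # should print the API server URL
  kubectl get --raw /healthz     # should return "ok" if the API server is healthy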
@RajasekharSiddela 9 months ago
@Drewbernetes Got it - I missed the VIP configuration. Now I have seen all 4 videos and am going to do it from scratch. By the way, the way you're explaining the concepts is stupendous.
@RajasekharSiddela 9 months ago
I have one question: for creating the Kube-VIP, do we need to have a separate node, or can we assign any IP address within our interface's network on the main control plane?
@Drewbernetes 9 months ago
@@RajasekharSiddela The way Kube VIP works is it makes use of an IP address that exists on your main network. It doesn't require a node of its own as it runs in a pod within your cluster. By main network I mean the same one from which your nodes get an IP. For example, if your control planes and worker nodes have an IP address of 192.168.0.x then the IP KubeVIP uses should be on that same network (192.168.0.0/24). Just make sure it's not an IP address that is in use by something else. Hope that helps!
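For anyone wanting to see what that looks like in practice, the kube-vip static pod manifest is generated with something along these lines (the interface and address are examples - check the kube-vip docs for the full set of flags and the container-based invocation):

  kube-vip manifest pod \
    --interface eth0 \
    --address 192.168.0.40 \
    --controlplane \
    --arp \
    --leaderElection > /etc/kubernetes/manifests/kube-vip.yaml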
@RajasekharSiddela 9 months ago
@Drewbernetes Thanks for your quick response. I've got one more doubt: I'm using RHEL 7.7 VMs for cluster creation, which has cgroup v1 as the default. Is it mandatory to have cgroup v2? If I'm going to use cgroup v1, there's no need to change SystemdCgroup in config.toml - am I right?
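For reference, the setting being discussed lives in containerd's config (an excerpt - the value shown is what you'd set when the kubelet uses the systemd cgroup driver; leave it false/unset to stay on cgroupfs):

  # /etc/containerd/config.toml (excerpt)
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true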
@CryptoRealAlpha-dp8jy 10 months ago
I hate AppArmor so much
@Drewbernetes 10 months ago
I know of so many cases where, rather than learning it, people disable AppArmor or SELinux. I was guilty of it for so many years! But security is important, and both of these are a link in the chain :-)
@po6577 5 months ago
@@Drewbernetes I've actually never seen anyone use AppArmor; it's the exam that brought me to this. Maybe because we use a cloud-managed cluster it's not so important? I really have no idea.
@Drewbernetes 5 months ago
@@po6577 Honestly, I think it's because people duck around security more than anything. With the amount of high-profile breaches we've seen in the recent past, you'd think more people would be on top of it and practising security-by-design, but the reality is that it is an afterthought. Even on the managed services this is possible - Amazon, for example, have documented how to implement it (or use a 3rd-party approach) to achieve this.
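To illustrate what using it in Kubernetes looks like, here's a minimal sketch using the pre-1.30 annotation style (the profile name assumes you've already loaded a profile with that name on the node via apparmor_parser; on 1.30+ clusters you can use securityContext.appArmorProfile instead):

  apiVersion: v1
  kind: Pod
  metadata:
    name: apparmor-demo
    annotations:
      # apply the named AppArmor profile to the "demo" container
      container.apparmor.security.beta.kubernetes.io/demo: localhost/k8s-apparmor-example-deny-write
  spec:
    containers:
    - name: demo
      image: busybox
      command: ["sh", "-c", "sleep 3600"]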
@simo47768 1 year ago
Wow, thanks. Can you add one more topic? Audit logs, and sending logs to maybe OpenSearch using Filebeat OSS.
@Drewbernetes 1 year ago
I certainly can! I knew I'd missed something - audit logs! Way back in my configuring KubeADM cluster videos, I *think* I mentioned setting up audit logs later - and forgot to make a note to actually do it :facepalm:. I'll do a video on enabling audit logs in a couple of weeks' time so that we can actually make use of them. As for the OpenSearch/Filebeat request - I can do this, but it won't be as part of this series I'm afraid; it'll arrive in the "Kubernetes Next Steps" series I'll start up after this one is done. I wanted this one to be purely Kubernetes-native tools, with a view to expanding on additional tools like monitoring, shipping logs, CI/CD and more in a later series. I was going to use Loki for log viewing instead of OpenSearch, due to its easy integration into Grafana (as it's made by the same people), as I have much more familiarity with that. However, I'm no stranger to ElasticSearch, and let's face it, they're one and the same!
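For anyone who wants a head start before that video, enabling auditing boils down to a policy file plus two kube-apiserver flags. A minimal sketch (tune the rules before using this anywhere real):

  # /etc/kubernetes/audit-policy.yaml
  apiVersion: audit.k8s.io/v1
  kind: Policy
  rules:
  - level: Metadata    # log request metadata for everything

  # then add to the kube-apiserver static pod manifest:
  #   --audit-policy-file=/etc/kubernetes/audit-policy.yaml
  #   --audit-log-path=/var/log/kubernetes/audit.log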
@simo47768 1 year ago
Please do OIDC and the federation service, please.
@Drewbernetes 1 year ago
Hi! I shall be doing, soon™. I want to get the security sections done, and then I'll be moving onto admission controllers and probes. Once those are done, OIDC is next - maybe 5/6 videos away. I'll likely be using something like Keycloak for the OIDC provider. As for federation - I'm still deciding what path to go down for this. When I started making my list of videos I would do this year, at the back end of 2022, I intended to do a video on kubefed to cover federation. The problem is that project was archived in April 2023 and, with it no longer being maintained, it didn't seem right for me to do a video on it. More here: groups.google.com/g/kubernetes-sig-multicluster/c/lciAVj-_ShE?pli=1 I'm looking into alternatives, the most viable of which seems to be Karmada - karmada.io/docs/ This isn't a kubernetes-sigs project like kubefed, but it is inspired by the federation and kubefed projects. It is also part of the CNCF sandbox right now, which gives me hope that it could be a good alternative. All that being said, I'll need time on that to figure out how it works before putting anything together for it. But I will be doing a video on federation ASAP.
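For the OIDC side specifically, the API server wiring is just a handful of flags (the issuer URL and client ID below are hypothetical - they'd come from whatever realm and client you create in Keycloak):

  # kube-apiserver flags (static pod manifest excerpt)
  --oidc-issuer-url=https://keycloak.example.com/realms/kubernetes
  --oidc-client-id=kubernetes
  --oidc-username-claim=preferred_username
  --oidc-groups-claim=groups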
@simo47768 1 year ago
@@Drewbernetes My colleague used kubelogin with Keycloak (OIDC). He can log in two ways: browser redirect, or directly providing the password on the command line. He now wants to use federation without a browser redirect - just logging in to k8s with the username and password as arguments. At the moment, from Linux, you get a message that you need a browser redirect. Do you think this is possible? He wants to use an AD account for automation, not a service token, as tokens are not bound to a person and can be shared. Great content. Soon I will do the CKS :)
@juidas 1 year ago
“It was me!” 😂😂
@juidas 1 year ago
Wishing you great health, and I hope everything is okay with you. Again, great video Drew! Privileged to be learning from you.
@Drewbernetes 1 year ago
Hi @juidas! Thanks very much! Yes I'm fine thankfully. It took a while for results but it turns out it was nothing to be concerned about in the end. They were just being over cautious to ensure there was nothing wrong with my lungs as it turns out!
@juidas 1 year ago
@@Drewbernetes great to hear that! Wishing you a long healthy life filled with happiness
@paulfx5019 1 year ago
Hey Drew, you are the man! You have succeeded where others have failed... many thanks for the great tutorials; the others are all pretenders. The cleanest Kubernetes build to date, and I've played with them all - K3S, Kubespray, Rancher etc - for months trying to find the best solution for us. Cheers
@Drewbernetes 1 year ago
Thanks very much! I appreciate the comment and it's nice to know people are finding it useful. This series was always about helping people understand the basics and ensuring they have the skills needed for passing the exam (if they choose to take it). The more in-depth stuff will come later :-) For example, I'll be expanding on these with things like Cluster API (CAPI) and maybe K3S (but I'm undecided on that) at some point. However, I have always been a firm believer that it's best to know what happens under the hood before going in with the tools that automate the building and maintaining of clusters, because when things inevitably go wrong, it's good to know how it all fits together at the ground level. I did the same when starting out too. I was looking at Juju, Rancher, Kubespray and all sorts, and I just couldn't get my head around the infrastructure. In the end I dropped it all and did it the hard way (via systemd services), then moved onto KubeADM. I now use CAPI day-to-day and feel it's one of the best tools for managing multiple clusters. If I were only running one cluster though, I might not bother with it and would stick with KubeADM when running clusters outside of a managed cloud.
@paulfx5019 1 year ago
CAPI looks interesting, although I'm keen to have a deep dive into the Gateway API. I'm also keen to know how to use the Cinder plugin with a Ceph cluster (and to explore whether it's possible to operate it on a separate network interface). I've implemented Longhorn for now and have also had a look at Rook & OpenEBS, but I think they're all a little noisy to embed in a cluster as SDS, and I feel this stuff should be external to the nodes within a cluster (never been a fan of iSCSI). Anyway, I have subscribed to your channel and look forward to future tutorials.
@Drewbernetes 1 year ago
@@paulfx5019 I plan on diving into the Gateway API in this tutorial series later down the line, once I've had more time with it, as I think it's going to be a huge step forward with regards to managing ingress traffic. As for the Cinder plugin, I use that in my job and it's really useful. It also has great support when using the Snapshot Controller, so that backups (or snapshots) can be taken of Persistent Volumes in the case of disaster recovery. Thanks for the sub - great to have you on board!
@izidr0x770 1 year ago
Good stuff. Hey Drew, a question: I'm thinking about implementing Kubernetes in a medium-scale on-premise project (10-100 physical nodes). What Kubernetes technologies do you recommend implementing it with? I'm torn between k3s and k8s, etc.
@Drewbernetes 1 year ago
Hi! So there are a couple of options here, and it does depend on your underlying infrastructure as to what would be best for you. For example, how would you be deploying your instances? I.e. are they bare metal, OpenStack? Something else? If you're using OpenStack, for example, then I would look into the CAPI/CAPO (Cluster API and Cluster API Provider OpenStack) options, as this makes managing clusters rather easy on the whole. I haven't played with K3S yet as I've not had the chance to date, but I intend to research and test it soon enough. My manager absolutely loves it (and used to work for Rancher), but I think either one of those is a good place to start. I wouldn't recommend manually installing clusters via KubeADM, to be honest. It's good to know about it and how it functions, but there are tools that wrap around it to make life easier now (which I'll get into in much later videos). CAPI/CAPO does have some really minor limitations around how much control over KubeADM you get, such as not being able to hide the control plane (which is supported in KubeADM but not in CAPI). I believe K3S does support this, so if this is something that matters to you, it's worth noting. If you do decide to go down the OpenStack/CAPI/CAPO route, then take a look at the kubernetes-sigs/image-builder project on GitHub for building your Kubernetes images - I've recently contributed a feature to enable the building of images directly in OpenStack, which should help you on your way there. I'd personally recommend looking at OpenStack for your instance management. It's stable and has good support within Kubernetes. Whatever you choose though, make sure it has good, maintained support around how LoadBalancer Services are created, a supported and developed CSI, and other core "cloud-like" components, so that you're not having to build your own workarounds into the mix. I hope that helps get you started, and if you have any other questions, feel free to fire them my way.
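To give a flavour of that CAPI/CAPO workflow, a sketch (the cluster name and version are placeholders, and the OpenStack credentials/template setup is omitted - see the Cluster API book for the full bootstrap steps):

  # on a management cluster, install CAPI plus the OpenStack provider
  clusterctl init --infrastructure openstack
  # render a workload cluster manifest from the provider's template
  clusterctl generate cluster my-cluster --kubernetes-version v1.28.0 > my-cluster.yaml
  kubectl apply -f my-cluster.yaml
  # watch it come up
  clusterctl describe cluster my-cluster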
@izidr0x770 1 year ago
@@Drewbernetes Thank you very much. I was talking to my team, and as such it is bare metal; the idea is to implement several containers with nginx services for the front end of the applications and WildFly for the back end. So we are deciding what to use - k3s, kops, Kubespray or something like that - and whether to use containerd or Docker. I don't know what you can recommend.
@Drewbernetes 1 year ago
@@izidr0x770 No problem! So based on what you've said, you might find K3S to be the better option. KOPS only supports AWS and GCE officially, and if I recall correctly, Kubespray is a bunch of Ansible scripts and requires an orchestrator to manage the nodes. I will say, if you're not using MAAS, OpenStack or anything else to manage the nodes, then scaling the cluster will be a manual task, which kind of goes against the flow of how you should be using Kubernetes. I'm not sure what your burst would look like, but it's something to be aware of - which is why I recommended OpenStack as something to orchestrate the nodes. You can orchestrate bare metal nodes with OpenStack by the way, it's not just VMs ;-) kolla-ansible for OpenStack is a great place to start. K3S does support a HA or single-node configuration, so it's worth seeing which would best suit your needs. Remember ClusterAPI is an option too if you're using OpenStack, vSphere or any other orchestrator for your nodes. With regards to the container runtime, containerd is likely the best way to go, as the dockershim was deprecated and removed a few releases back. So if you're already considering containerd, go that way. All the Dockerfiles/images etc will work with it, as Docker actually developed containerd and donated it to the CNCF! www.docker.com/blog/what-is-containerd-runtime/ If you are going to just go bare metal and you know your burst traffic for your app won't require the scaling features, then that's fine, but it's good to be aware of it. Also, install MetalLB or Kube-VIP if you need any sort of external access to your app! I have mentioned Kube-VIP already in my videos and will touch on MetalLB at some point in the near future. And my final thought on it is this... as much as I'm an advocate of Kubernetes, I will say that it's worth looking into whether Kubernetes is right for your project. Consider the features it provides vs the trade-off of managing the cluster itself. Sorry for the second essay! 😀
@izidr0x770 1 year ago
@@Drewbernetes Hi Drew, don't worry - I like your long answers. I know you take your time to write them and I appreciate it. I don't think I gave you enough context for you to really recommend something, and to tell the truth my knowledge of Kubernetes and servers is basic; I'm an assistant on the project and I'm still learning. The project is being done at an educational institution that wants to migrate their servers to Kubernetes, and as far as I understand and have been told, the current server has different sections: some open to the public, which would be the production part, and other sections that are only available to developers, which would be the pre-development, development and pre-production sections, apart from the databases that handle the information of all students, teachers, etc. Those of us involved in this task have been doing some research, but we still haven't defined exactly what to use, and the idea is to make a HA cluster and manage the database externally. Tomorrow they are going to explain to me in a little more depth how everything is going, and, well, right now I was thinking of doing some research and, by the way, taking advantage of your knowledge to contribute adequately to the decision. And now that I think about it, I don't think I'm entirely clear on the concept of bare metal. In this case, as far as I have seen, the project is going to handle virtual machines that run on the physical machines owned by the institution, so they would not be hiring machines in the cloud, and from what I was researching just now, I think in this case it would not be bare metal - or so I think.
@Drewbernetes 1 year ago
@@izidr0x770 Aaaah, the context helps! So yeah, I suspect what will happen is they'll have their own blade servers (the bare metal nodes) that will run a hypervisor of some sort (OpenStack, vSphere et al), which will be responsible for deploying the VMs on which the Kubernetes clusters will be set up. That's a totally legit and sensible way of setting things up. I'd recommend in that case looking at Cluster API (CAPI) and either Cluster API Provider OpenStack (CAPO) if using OpenStack, or Cluster API Provider vSphere (CAPV) if using vSphere. This allows you to have a tight integration with the hypervisor and enables things like auto-scaling, load balancing and more. It allows your Kubernetes cluster to behave like it is in a cloud provider like AWS, GCP etc. With regards to the public/private setup, this again is a sensible effort. Having a Production and a Staging cluster allows the following to happen. You'd have your code repository, such as GitHub, GitLab etc, which hosts the code. Then you can do something akin to the following:
1. Create a Development/Release branch off of main
2. Each developer can then work in their own branch and, when ready, merge it back into the Development/Release branch.
3. The staging cluster can target the Release branch using GitOps tools such as ArgoCD or Flux, meaning it's kept in sync with what is in the Development branch.
4. Once testing is complete and you're happy to promote to production, merge your Development branch into main and create a tag/release.
5. Update your production cluster to target the new release.
6. Sync development with main and start the cycle again.
I think you've got an interesting path ahead of you and you'll learn a LOT playing with it in a real-world scenario. Nothing beats learning things like this more than hands-on work. I wish you all the best of luck with this and hope you gain a lot from it.
@bhanusudheer493 1 year ago
Nice one, getting hands-on with kind
@Drewbernetes 1 year ago
Good stuff! kind is wonderful for testing things locally and simulating clusters without the need for VMs.
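For anyone new to it, spinning one up really is a one-liner (the cluster name is arbitrary):

  # create a throwaway local cluster
  kind create cluster --name demo
  kubectl cluster-info --context kind-demo
  # tear it down when done
  kind delete cluster --name demo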
@bhanusudheer493 1 year ago
Nice and Clear
@Drewbernetes 1 year ago
Glad you think so!