5:08 is the most important thing in this tutorial. Thanks for such great content.
@BKearal · 3 years ago
You might want to mention that it also offers a single point of access into multiple clusters via the Rancher proxy for kubectl. This is great for centralized access control that can even be per namespace, etc., without deploying anything special onto the clusters themselves.
@DevOpsToolkit · 3 years ago
Oh yeah. I forgot to mention that one. That is indeed a very good feature.
@mtik000 · 3 years ago
This is why we plan on keeping Rancher even though our clusters have moved to EKS. It appears to be much easier to handle RBAC than to deal with a bunch of IAM roles/users/etc.
@DevOpsToolkit · 3 years ago
@@mtik000 Why did you do that? Why did you mention IAM? Whenever I hear that word, I have nightmares and cannot sleep.
@mtik000 · 3 years ago
@@DevOpsToolkit Hah! Sorry :) I need to offload my burden to someone else.
@DevOpsToolkit · 3 years ago
@@mtik000 No worries. I'll have a gin & tonic. That usually fixes it.
@evilqaz · 3 years ago
I love Rancher :)
@DanielRolfe · 3 years ago
For on-prem, Rancher also has RBAC with AD, which is super nice. Another Rancher project worth looking at is Longhorn distributed storage, which again is amazing if your on-prem storage isn't anything special.
@DevOpsToolkit · 3 years ago
Longhorn is indeed a very interesting (and good) project. I'll probably make a video about it soon.
@AsifSaifuddinAuvipy · 10 months ago
And Harvester.
@junejuan8561 · 3 years ago
Hi, just to answer some of the CONS:
1. Rancher has k3s and RKE2, which use containerd by default, and I think they are moving away from RKE (the first version) anytime soon.
2. For the ingress controller, NGINX was already enabled; just go to the default namespace > Load Balancing. For storage you have a lot of options in their apps section, but Rancher recommends Longhorn.
3. Again, just use k3s and RKE2.
@DevOpsToolkit · 3 years ago
2. Ingress was not there. I checked the Services and it wasn't there. Storage is there, but you need to fiddle with a bunch of options and hope that those available are supported by your provider. While I do think that you should be able to fine-tune storage, it's silly that it does not come with at least a single StorageClass set as default, as with literally any other Kubernetes distribution.
1. and 3. I agree, but Rancher needs to make it prominent and not hidden. If you just follow what it suggests, neither RKE2 nor k3s is there, at least not today.
@BKearal · 3 years ago
The ingress controller definitely does deploy with Rancher-launched RKE clusters. I have multiple clusters deployed via Rancher that did not require an extra step for this.
@DevOpsToolkit · 3 years ago
It's possible that it doesn't with DigitalOcean. Let me double-check. Back in 30 min...
@BKearal · 3 years ago
@@DevOpsToolkit Would be interesting to know. I've just been using it with bare-metal nodes so far, which, as you mentioned, is a great use case for it anyway.
@DevOpsToolkit · 3 years ago
@@BKearal For some reason, Let's Encrypt decided not to work, so I cannot create certs for the new cluster where I wanted to install Rancher (connecting to new clusters doesn't work without certs) and double-check whether Ingress is indeed installed. Nevertheless (until Let's Encrypt gets back to work), here's the part of the video where I'm confirming that Ingress is not there: 17:11.
@MohamedBelgaiedHassine · 2 years ago
Rancher has not only RKE as a Kubernetes distribution but also K3s and RKE2, both of which are NOT based on Dockershim but on containerd. The second thing is: Kubernetes is deprecating Dockershim as part of the Kubernetes project, but Dockershim will continue to exist as a plugin, like some other CRI plugins. It will be maintained by Docker Inc., Mirantis, and Rancher. So, the comment about Rancher not being up to date is irrelevant.
@DevOpsToolkit · 2 years ago
As far as I know, RKE2 was not GA and integrated with Rancher at the time I recorded that video. I'll add it to my TODO list to review it again and create a new video about Rancher. As for Docker in Kubernetes (and Dockershim), it's clear that there is no future for it in Kubernetes clusters. There is no good reason why anyone would use Docker as a container engine except for legacy reasons. Dockershim exists only so that people who did things that should not be done can prolong the inevitable remediation of past mistakes. Even RKE2 removed Docker.
@lightspeed79 · 4 months ago
One thing to be careful about, which I struggled to figure out, is that it installs on a Docker IP (for example, 172.16.0.5) and not on localhost (127.0.0.1), so if you try to access it via localhost or 127.0.0.1 it might not work. I had to spend many hours to see that this is how it works, since Rancher Academy and other tutorials stated otherwise.
@MrTheGurra · 3 years ago
I don't use Rancher for cluster management or creation. Like you say, it feels unnecessary when using managed clusters on DO, Google, AWS, etc. However, I always attach Rancher post-creation to manage RBAC and ingress, and to get a basic UI overview of what is going on. If there is one thing I still feel Rancher does best, it is RBAC management (linking to GitHub, AD, etc.). Also, for getting some quick templates up for basic apps, visualizing where things are located, and editing configs and secrets and so on, it is very handy. On the other hand, that kind of works against GitOps, so :D. Back to pros and cons...
@DevOpsToolkit · 3 years ago
You're right. RBAC is really good with Rancher, and I feel silly for not even mentioning that in the video. Editing anything through the Rancher Web UI is great if you are a small team. But, as soon as things scale, both in people and operations, operating something from a Web UI becomes dangerous and unproductive.
@sukurcf · 2 years ago
5:11 Loved how you switched to dark mode.
@DevOpsToolkit · 2 years ago
That's the first thing I do in every app :)
@mr_wormhole · a year ago
K9s gang, rise up!
@RaviSharma-vw7py · 2 years ago
I think you have explained it very nicely. Thank you so much, from India...
@jiaxinshan6753 · 3 years ago
I finally get why people love your videos. You do point out those terrible features/bugs. I really love this kind of hands-on experience rather than those apathetic tutorials.
@DevOpsToolkit · 3 years ago
Thanks
@Flyingnobull · 3 years ago
The most important thing that everyone should do: change it to dark mode! YES!
@jmmtechnology4539 · a year ago
Very interesting, thanks for the video!
@Peter1215 · 3 years ago
Really interesting video, and also on time for me. I worked with Rancher a few years back (on-prem) and will go back to working with it towards the end of the year. The setup with Docker and the fact that it uses post-provisioning installs on node pools made me wince a bit. I hope they will fix it soon. Personally, I prefer CLIs over UIs, so the dashboard view is not a killer feature for me. I still prefer provisioning with Terraform and am also exploring Crossplane and Cluster API. Could you make a video about a fully automated provisioning lifecycle including Day-2 Ops? There are plenty of videos about how to start, but rarely good ones that dive into Day-2-specific challenges. Thanks for a great vid, as always :)
@DevOpsToolkit · 3 years ago
Personally, I think that the main value of Rancher is in its Web UI. Those who prefer working with a CLI or IaC (me included) are probably better off without Rancher. RKE alone, managed through a CLI or IaC, is probably enough. Adding "automated provisioning lifecycle including Day-2 Ops" to my TODO list... :)
@DukeofTech90 · 2 months ago
Please help. I have an interview and I was given a task to do. I've done most of it, but I'm having issues with Rancher because I'm supposed to use it for my Kubernetes deployments. The problem is that I can't find the "add nodes" button on the UI.
@DaiquiriFlavour · 2 years ago
Very authentic and helpful video! Thanks!
@Textras · 2 years ago
Excellent video.
@mahdirashki6752 · 3 years ago
Thank you so much for your video, but there are more advanced options for K8s cluster lifecycle managers, like CAPI or Hive, which have MachineSet/MachineDeployment/MachineConfig and cluster autoscaler options.
@DevOpsToolkit · 3 years ago
Oh yeah. There's much more cluster managers can do. Still, in my experience, that level is usually combined with everything-as-code, stored in Git, and managed with a different type of tool. Rancher is mostly for those who prefer using a Web UI and, more often than not, the majority of such users do not go deep.
@spy.catcher · 3 years ago
I would also be interested in seeing how best to utilize and implement Tailscale/Taildrop in your preferred cluster setup and config. Thanks!
@DevOpsToolkit · 3 years ago
Adding both to my TODO list... :)
@vladtarasenko1363 · a year ago
Thanks for the video.
@gameprofitsGalactic · 2 years ago
Brother, you clearly have to know: only the coding and cluster junkies like me love Rancher :) Well done video.
@DevOpsToolkit · 3 years ago
IMPORTANT: A new review of Rancher is now available at kzbin.info/www/bejne/gHekfZeeqaeriJo
@StephaneMoser · 3 years ago
Kubeadm + Terraform
@baumbaer · 3 years ago
Using Rancher for on-prem Kubernetes clusters. I used Kubespray before. I really like the user management and the integrated logging/monitoring options.
@oftheriverinthenight · 3 years ago
Kubespray at work, RKE1 at home. RKE2 has containerd, but there is no guide or option to upgrade at the moment (GitHub issue 562 on rke2), so the next installation could also be Kubespray.
@DevOpsToolkit · 3 years ago
@@baumbaer Rancher for on-prem is a no-brainer. It is probably the best option we have, especially in the "free department".
@DevOpsToolkit · 3 years ago
@@oftheriverinthenight That's what I understood as well. It should be available in the next release or, at least, very soon.
@codecoffee-farsi3392 · 3 years ago
What's your opinion on running Rancher 2.x on a three-node on-prem cluster? VM specification: 4 CPU, 16 GB.
@DevOpsToolkit · 3 years ago
For control plane nodes, 2 CPU and 4 GB RAM should be enough to start, unless you plan to combine control plane and worker nodes. Worker nodes are more complicated, and no one can answer that question since it depends on the workloads you'll have in that cluster. You'll probably lose 1 CPU and 1 GB RAM (or less) on system-level processes, and the rest depends on your workloads (apps).
@TeresaShellvin · 3 months ago
Can you please make a video on how to upgrade Helm charts using CI/CD, or how to automate the deployment of Helm charts?
@DevOpsToolkit · 3 months ago
I tend to use Argo CD or Flux for that. You'll find quite a few videos about those on this channel.
@TeresaShellvin · 3 months ago
@DevOpsToolkit Today an interviewer asked me, "How do you upgrade Helm using automation, or have you ever used CI/CD for upgrading Helm?" I prefer Argo CD, tbh; I introduced Argo CD in my previous organizations as well.
@DevOpsToolkit · 3 months ago
@TeresaShellvin Essentially, you just need to change the tag in values.yaml and push the changes back to Git. From there on, either Argo CD does the job or, if you're not using it, helm upgrade does the trick.
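The reply above can be sketched as a tiny CI step. Everything in the sketch is hypothetical (chart layout, image name, release name); it only illustrates the "bump the tag, then let Argo CD or helm upgrade take over" flow:

```shell
# Hedged sketch: CI bumps the image tag in values.yaml, commits, and either
# Argo CD syncs the change or `helm upgrade` is run directly.
set -e
WORKDIR=$(mktemp -d)
cd "${WORKDIR}"
cat > values.yaml <<'EOF'
image:
  repository: ghcr.io/example/app
  tag: 1.2.3
EOF
NEW_TAG=1.2.4
# The CI step: replace the tag line and verify the change took effect
sed -i "s/^  tag: .*/  tag: ${NEW_TAG}/" values.yaml
grep "tag: ${NEW_TAG}" values.yaml
# GitOps path (Argo CD or Flux picks the commit up):
#   git add values.yaml && git commit -m "bump image to ${NEW_TAG}" && git push
# Non-GitOps path:
#   helm upgrade my-release ./my-chart --values values.yaml
```

With Argo CD watching the repository, the commit alone is enough; `helm upgrade` is only needed when nothing is watching Git.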
@TeresaShellvin · 3 months ago
@@DevOpsToolkit Awesome, thank you so much!
@GeertBaeke · 3 years ago
I would be interested to know what you think of VMware Tanzu 😀
@DevOpsToolkit · 3 years ago
Adding it to my TODO list... :)
@DevOpsToolkit · 2 years ago
It took a while to move Tanzu to the top of my TODO list, but now it's finally done and available at kzbin.info/www/bejne/n4CZi395p7NroaM.
@jaysistar2711 · 2 years ago
I'm still not able to retire Docker Engine for some nodes. While containerd is used for k8s pods, there are a few labeled nodes that have Docker Engine as well, with a bind-mounted named pipe to it for build agents. Kaniko didn't work for multi-stage Dockerfiles when I tried it (a few months ago). Do we have a non-Docker-Engine way to build container images yet that works for multi-stage Dockerfiles?
@DevOpsToolkit · 2 years ago
I had the same problem with multi-stage builds, but that was fixed (at least in my case) a while ago. Your situation might be an edge case, so I suggest opening an issue in the Kaniko project.
@cyberlord64 · 2 years ago
5:29 0.6/5.8 cores? 5.8? Am I missing something here? What is 5.8 cores exactly?
@DevOpsToolkit · 2 years ago
0.6 is how much CPU is used, while 5.8 is how much CPU is allocatable. Rancher does not see how much memory and CPU a node has. Instead, it sees what Kubernetes sees, which is allocatable resources. That's always less than physical CPU and memory, since a bit is taken by system processes.
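The relationship between physical and allocatable CPU can be shown with some back-of-the-envelope arithmetic. The reserved amounts below are made up; real values depend on the kubelet's reservation and eviction settings:

```shell
# Illustrative only (reserved amounts are hypothetical):
# allocatable = capacity - (kube-reserved + system-reserved + eviction headroom)
CAPACITY_M=6000        # a 6-core node, expressed in millicores
KUBE_RESERVED_M=100    # assumed reservation for kubelet/container runtime
SYSTEM_RESERVED_M=100  # assumed reservation for OS daemons
ALLOCATABLE_M=$((CAPACITY_M - KUBE_RESERVED_M - SYSTEM_RESERVED_M))
echo "allocatable: ${ALLOCATABLE_M}m"  # 5800m, i.e. the "5.8 cores" Rancher shows
# On a real cluster, you can compare the two values with:
#   kubectl get nodes --output \
#     jsonpath='{range .items[*]}{.status.capacity.cpu} {.status.allocatable.cpu}{"\n"}{end}'
```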
@cyberlord64 · 2 years ago
@@DevOpsToolkit Interesting. I wonder what the thought process was behind the decision to call this "cores", as opposed to something more abstract like "resources".
@DevOpsToolkit · 2 years ago
It is, in a way, cores, but those available to Kubernetes.
@S007001 · 3 years ago
Mirantis k0s would improve on the Rancher k3s limitation by providing CRI-O as a runtime alternative, in addition to providing integration with multiple CSI storage options.
@DevOpsToolkit · 3 years ago
I think they already addressed that in RKE2. I just hope that it will be available in Rancher soon.
@tdeutsch · 3 years ago
Which k3s limitation do you mean? K3s is not Docker; it's containerd.
@DevOpsToolkit · 3 years ago
@@tdeutsch I think he mixed up k3s with Rancher.
@tdeutsch · 3 years ago
@@DevOpsToolkit Maybe. But comparing k0s and k3s would make sense. Do you by chance have a video comparing those two? AFAIK they share the same goal, and it would be nice to have a comparison of them.
@gcezaralmeida · 3 years ago
Thank you for your video. I like it very much. You are very up to date. Could you create a video comparing OS distros for running Kubernetes on-prem? Which is the best one?
@DevOpsToolkit · 3 years ago
Adding it to my TODO list... :)
@Peter1215 · 3 years ago
Seconded. I'm going to start working with k8s on-prem more (I have been working with AKS for almost 2 years now) and I'm interested in what distros would be best. Also, RKE seems like a great choice for on-prem.
@DevOpsToolkit · 3 years ago
@@Peter1215 RKE is indeed a great choice for k8s on-prem. It might easily be the best choice, at least among free options. I'll do my best to bump the "k8s/OS distros on-prem" topic closer to the top of my TODO list.
@Flyingnobull · 3 years ago
Hey Viktor, could you make a video on k8s security applications such as StackRox and Twistlock? How necessary are they, are they replaceable with other measures, what is the effect on cluster performance, etc.?
@DevOpsToolkit · 3 years ago
Great suggestions! Adding them to my TODO list... :)
@subzizo091 · a year ago
How do I fix the error below?
Error: chart requires kubeVersion: < 1.25.0-0 which is incompatible with Kubernetes v1.25.3+k3s1
@DevOpsToolkit · a year ago
Are you referring to gist.github.com/vfarcic/a701b929d1416b095bd58daa24f8b013#file-82-rancher-sh-L24?
@subzizo091 · a year ago
@@DevOpsToolkit No, the Helm chart version. My current k3s version is 1.25, which is not compatible with the chart.
@DevOpsToolkit · a year ago
That's common. Vendors tend to be around 2 minor versions of k8s behind. However, most vendors do tend to work on transitions away from deprecated features much earlier, since deprecations in k8s tend to last for at least a year. Rancher might have failed to do that, and you might need to wait or downgrade your k3s version until then.
@mustaphanaji2523 · 2 years ago
Is there any reference architecture for an active-active setup between an on-prem and a cloud Rancher cluster?
@DevOpsToolkit · 2 years ago
I'm not sure I understood the question.
@prasathl1997 · 2 years ago
What did you set in the variable RANCHER_ADDR?
@DevOpsToolkit · 2 years ago
It's in the Gist that accompanies the video. You can find the export command in gist.github.com/vfarcic/a701b929d1416b095bd58daa24f8b013#file-82-rancher-sh-L54.
@kingroc3651 · a year ago
My lab has no internet connection. Does this work in that kind of environment? I see the nodes need to download images from the internet.
@DevOpsToolkit · a year ago
I think that you can configure it to use images from any registry. If that's true, you can set up a local registry, download the images (one way or another), and push them to that registry.
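The suggestion above might look roughly like this. The registry address and image are placeholders, and the docker commands are shown as comments since they need network access and a running daemon:

```shell
# Hypothetical air-gapped flow: mirror an image into a local registry.
SRC_IMAGE=rancher/rancher:v2.6.0
LOCAL_REGISTRY=registry.local:5000
# On a machine WITH internet access:
#   docker pull "${SRC_IMAGE}"
#   docker save "${SRC_IMAGE}" -o rancher.tar
# Move rancher.tar into the air-gapped network, then:
#   docker load -i rancher.tar
#   docker tag "${SRC_IMAGE}" "${LOCAL_REGISTRY}/${SRC_IMAGE}"
#   docker push "${LOCAL_REGISTRY}/${SRC_IMAGE}"
# The cluster would then be configured to pull from:
echo "${LOCAL_REGISTRY}/${SRC_IMAGE}"
```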
@TovergO · 3 years ago
About immutable images: are there any pre-built, up-to-date Kubernetes VM images ready for use?
@DevOpsToolkit · 3 years ago
As far as I know, there aren't any as far as self-managed Kubernetes clusters are concerned. I hope that will change once RKE2 comes into Rancher.
@alvarotorres3529 · 2 years ago
Great content! What do you think about Gardener? Do you think it is a good choice for running a KaaS on top of OpenStack? Thanks.
@DevOpsToolkit · 2 years ago
I haven't been using OpenStack for a long time now, so I cannot comment on the specific combination of the two. That being said, Gardener is great, but it also has its own issues. The short answer: Gardener is good. A longer answer: I have it very high on my TODO list, and a detailed video is coming soon :)
@rabigurung7188 · 2 years ago
Is it possible to add a Raspberry Pi (ARM64) as a node in an RKE cluster? I get stuck at the node provisioning stage when I try to add a Raspberry Pi as a node in the RKE cluster. Much appreciated.
@DevOpsToolkit · 2 years ago
Unfortunately, I haven't tried it with a Raspberry Pi, so I cannot say whether it works there or not. My best guess is that it does, since it's based on k3s, which does work with a Pi, but I cannot confirm that.
@rabigurung7188 · 2 years ago
@@DevOpsToolkit Thanks.
@sf2998 · 3 years ago
Is Rancher the best tool for creating and managing multiple clusters, or is there a better option?
@DevOpsToolkit · 3 years ago
That depends on whether you prefer to use a Web UI for those tasks or IaC/CLI. If it is the former, Rancher is a good choice. If it's the latter, use Terraform, Pulumi, or Crossplane.
@sf2998 · 3 years ago
How about the Lens IDE?
@DevOpsToolkit · 3 years ago
@@sf2998 Lens is a UI for managing resources inside a k8s cluster, not for managing clusters themselves.
@DevOpsToolkit · 3 years ago
Just published a review of Lens: kzbin.info/www/bejne/p5DSoHZnrch6eck
@Yrez1234 · 3 years ago
Nice video, Viktor! What are the alternatives to Rancher for monitoring multiple Kubernetes clusters in the cloud using a single UI? I prefer using IaC to manage multiple clusters (Terraform, Argo CD), but what about monitoring and visualization of all clusters? Grafana and service mesh UIs give us this kind of detail for a dedicated cluster, but it would be useful to have a unified UI to check the health of all clusters, manage alerts, and so on. Have you already explored this kind of tool?
@DevOpsToolkit · 3 years ago
One of my complaints or, to be more precise, missed opportunities in Rancher is that it does not have a cross-cluster dashboard. Everything is still based on single-cluster views, and the only thing it gives you are links to each of those clusters. It would be awesome if Rancher provided some kind of unified view of all the clusters. If you use IaC, you probably do not use dashboards to manage clusters but mostly for monitoring. The best bet is to ship metrics from all the clusters to a single DB. That could be Prometheus with Thanos or one of the SaaS offerings like Datadog.
@Yrez1234 · 3 years ago
@@DevOpsToolkit Thanks! Yes, I mainly use it for monitoring. I'm not aware of Thanos; I will have a look at that project. It could also be a good topic to cover: how to monitor multiple Kubernetes clusters.
@tdeutsch · 3 years ago
@@DevOpsToolkit I'm not aware of a multi-cluster dashboard with a unified view. And tbh, I don't think I would need one from a business perspective. I would have to separate my customers anyway if it's a shared "dashboard". For people managing multiple clusters who are CLI addicts, maybe k9s is something to look at.
@DevOpsToolkit · 3 years ago
@@Yrez1234 The problem with Prometheus is that it does not scale. Thanos solves (or tries to solve) that problem.
@Yrez1234 · 3 years ago
@@DevOpsToolkit Got it! Thanks for the answer.
@nguyenquang5216 · 2 years ago
Hello Admin, I installed Rancher on my local K8s cluster (1 master node + 2 worker nodes). I built this cluster from scratch with 3 Ubuntu VMs on VMware Workstation. Each VM has 2 NICs (1 public + 1 private). I ran the following command from your script:
# If NOT EKS
export INGRESS_HOST=$(kubectl \
  --namespace ingress-nginx \
  get svc ingress-nginx-controller \
  --output jsonpath="{.status.loadBalancer.ingress[0].ip}")
# then
echo $INGRESS_HOST
However, the result is still blank. Can you suggest how to solve it? Thank you very much!
@DevOpsToolkit · 2 years ago
If it's an on-prem Kubernetes cluster, the Ingress service probably cannot be of the LoadBalancer type. That would result in an attempt to create an external LB, which would (probably) not work. Instead, you need to change the service to be NodePort. That will open a port on your cluster nodes. After that, you need to configure whichever LB or proxy (e.g., nginx) you're using to forward requests to the node IPs on that port. As an alternative, you can skip the LB/proxy altogether and use the IP of one of the nodes and the port of the Ingress service directly.
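A minimal sketch of the NodePort change described above, assuming a standard ingress-nginx install. The patch command is only printed here, not executed, since it needs a live cluster:

```shell
# Sketch only: switch the ingress-nginx Service from LoadBalancer to NodePort.
# Namespace and Service name assume a standard ingress-nginx install.
PATCH='{"spec":{"type":"NodePort"}}'
CMD="kubectl --namespace ingress-nginx patch service ingress-nginx-controller --patch ${PATCH}"
echo "${CMD}"
# Afterwards, point an external LB/proxy (or clients directly) at
# <any-node-ip>:<nodePort>; the assigned port can be read with:
#   kubectl --namespace ingress-nginx get service ingress-nginx-controller \
#     --output jsonpath='{.spec.ports[?(@.port==80)].nodePort}'
```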
@nguyenquang5216 · 2 years ago
@@DevOpsToolkit Thank you very much. I made it work in 2 ways: the alternative solution you suggested, using the NodePort type for the Ingress service, and, as a second way, MetalLB for load balancing in the cluster. Thanks a lot :)
@eamonnmccudden1070 · 3 years ago
Can you do an OpenShift video (or videos)?
@DevOpsToolkit · 3 years ago
Adding it to my TODO list... :)
@eamonnmccudden1070 · 3 years ago
@@DevOpsToolkit Looking forward to it! Thanks as always.
@squalazzo · 3 years ago
I had issues with it on a local lab machine (32 GB RAM, i7, 480 GB SSD): some services refuse to start up, and the kubectl "web GUI" gives error 1006 and does not show anything... I'm looking for alternatives; what do you suggest for creating a local test lab? I put Proxmox on this machine and created 6 VMs using Ubuntu 20.04 (3 masters and 3 workers), with the host (Proxmox is based on Debian 10) sharing its own disk space as an NFS share. But I'd like to move away from Rancher, so, suggestions? :)
@DevOpsToolkit · 3 years ago
If it's for a local lab, I strongly recommend k3d. For a while now, it's the only local k8s I'm using. Check out kzbin.info/www/bejne/o3TIpKh9oJJ5odU ...
@squalazzo · 3 years ago
@@DevOpsToolkit Yup, already watched that (all your videos from the last 6 months, really), thanks!
@tdeutsch · 3 years ago
@@DevOpsToolkit Without having seen the video yet: why k3d and not k3s on a VM, or k3OS?
@DevOpsToolkit · 3 years ago
@@tdeutsch k3d is k3s running inside containers. As a result, it is much faster and requires fewer resources than VMs, especially if you try to run a multi-node cluster.
@tdeutsch · 3 years ago
@@DevOpsToolkit I was under the impression that k3d is "k3s in Docker" and not "k3s in containers". So I was like you: "Docker!? whywhywhywhywhywhy" :-) Last weekend, I discovered I have Podman on my router and I can make it run containers other than "only" its GUI. So I gave the rancher/k3s image a try and used it there with Podman :-)
@m19mesoto · 3 years ago
kOps? What do you think?
@DevOpsToolkit · 3 years ago
It lost its purpose the moment EKS went public.
@DevOpsToolkit · 3 years ago
As a side note, YouTube deleted the comment you made about branching strategies. It tends to do that when there are links. Can you post it again, but without the links?
@holgerwinkelmann6219 · 2 years ago
For all the comments about the glory of cloud-managed k8s: many, many users cannot and will not run on a public cloud; at least none of our customers are allowed to run there. I would prefer that you be fair and focus on the Rancher use case, which is mostly on-prem, and then compare it with other on-prem alternatives.
@DevOpsToolkit · 2 years ago
I do agree that Rancher is mostly for on-prem users, and I believe I said that in the video. Nevertheless, Rancher's marketing claims both on-prem and cloud, so both are valid choices from their perspective.
@DevOpsToolkit · 2 years ago
As a side note, the comparison with other alternatives is coming. I just released a video about Tanzu, soon comes OpenShift, and a comparison of the three after that.
@holgerwinkelmann6219 · 2 years ago
@@DevOpsToolkit Sure, marketing must cover this, but if I had a public-cloud-only strategy, I would not bother with the Rancher candidates. I would make a Crossplane composition for infra, clusters, services, and applications, composed from cloud APIs. But hey, what do you do if you don't have that, and it must run on edge, bare metal, on-prem, etc., as it must for all our customers? There are not many alternatives other than building yourself a CAPI-based platform and providing your own CAPI provider package for composition. API-wise, not really complicated, just work ;). But our biggest BUT: who provides maintained node images or k8s distributions you can rely on, specifically for day-2 ops? The example images coming with CAPI are OK for testing, but production? Rancher at least provides the RKE distribution and images. Maybe you can make a comparison of the alternatives. IMHO, this would be the shortlist, but you might be aware of others:
* Rancher + RKE2
* RH OpenShift
* Microsoft/Kinvolk Lokomotive
* Tanzu?
* Gardener
* Mirantis
* Kubermatic
* ...
* plain DIY CAPI
@holgerwinkelmann6219 · 2 years ago
@@DevOpsToolkit I'm carrying on with testing Rancher 2.6 with some RKE2 (containerd) tech preview clusters ;) Nice weekend!
@DevOpsToolkit · 2 years ago
RKE2 (not 1) is one of the best options for on-prem Kubernetes. Thanks for the list. I did not have Lokomotive on mine. Adding it...
@hazi.m · 3 years ago
If the main "pro" for Rancher is on-prem installations, how does it compare to OKD? That should be free OpenShift without Red Hat support, right? Would OKD be a better option for on-prem? You would get storage management, immutable provisioning, and a non-Docker setup, in addition to all the stuff Rancher provides.
@DevOpsToolkit · 3 years ago
I haven't used OKD enough to compare it, at least not yet. I haven't seen it much in the field either. Most of the companies I worked with are using OpenShift, Rancher, or something else that is not OKD. That being said, the number of companies I worked with is limited, and that does not mean that OKD is not widely used, but rather that I haven't been around it.
@hazi.m · 3 years ago
@@DevOpsToolkit I've had the same experience. Most companies that need OpenShift and have the workforce to use/understand it are willing to pay for it; otherwise, they go for Rancher or home-built solutions, as you mentioned. I'm also trying to figure out why I haven't seen OKD being used compared to other free/low-cost alternatives. Complexity could be one reason, but I wonder if it is worth it if you (or your team) are able to handle that. Anyway, I would love to see your videos on OpenShift and OKD :)
@TeresaShellvin · 3 months ago
docker
@yuewang7854 · 3 years ago
docker: whywhywhywhy? lol
@m19mesoto · 3 years ago
I think I can sense the SUSE influence already :D The new v2 interface is terrible. Anyway, I really like Rancher in some ways.
@DevOpsToolkit · 3 years ago
Rancher has a special place in my heart. It helped me a lot when I was less experienced and Kubernetes was much less mature. For a while, it was, without doubt, the best way to create and manage k8s clusters. In the meantime, k8s got much better, and managed Kubernetes services (e.g., EKS, AKS, GKE) improved over time. As a result, Rancher started having fewer and fewer differentiating features.
@RoyOlsen · 2 years ago
Weird how people think Kubernetes is something you should buy from a public cloud provider. Easy, yes. But so expensive.
@DevOpsToolkit · 2 years ago
It all depends on cost analysis. Hosted Kubernetes (e.g., GKE, EKS, AKS) can reduce operations. You're paying for a service, and that might or might not be cost-effective depending on your skill level, needs, etc.
@RoyOlsen · 2 years ago
@@DevOpsToolkit Any particular reason you erased my comment? Don't care for insights?
@DevOpsToolkit · 2 years ago
@@RoyOlsen I never deleted anyone's comment. YouTube, on the other hand, tends to delete comments automatically, especially if they contain links. Please try again, and if that fails, send me a DM on Twitter (@vfarcic) or LinkedIn and I'll publish the comment for you. In any case, I'm sorry your comment was deleted. Unfortunately, I do not have any means to control YouTube's policy.
@RoyOlsen · 2 years ago
@@DevOpsToolkit Strange. It was a fairly long comment, but no links and nothing impolite or terribly controversial. All right then, thanks for the reply. Appreciate it.
@DevOpsToolkit · 2 years ago
@@RoyOlsen The YouTube algorithm is a mystery, and their comment policy is very frustrating. I wish there were something I could do but, after days spent with their support, my conclusion is that there isn't anything I can do :(
@zenmaster24 · 3 years ago
Is this Rancher giving up, or SUSE pausing development after the acquisition? What is the better free cross-provider UI management alternative (including on-prem and cloud clusters)?
@DevOpsToolkit · 3 years ago
I do not think there is a better free cross-cluster UI solution. Most of the work in that area is around IaC tools rather than UI-based ones.
@DevOpsToolkit · 3 years ago
Also, I doubt that SUSE is pausing anything. They would not have acquired Rancher if they did not have plans. My guess is that they are reorganizing it instead. Also, SUSE needs to figure out a revenue stream for Rancher, and my best guess is that it is not going to be the cloud, since that is already dominated by others. It will more likely be on-prem as a cash cow and edge as the future. Those are purely guesses, though, since I do not have any inside information.
@zenmaster24 · 3 years ago
@@DevOpsToolkit It's a decent guess, but it may not make them the revenue they expect. Most on-prem Kube clusters that I have seen in large orgs are OpenShift, which has its own UI. Reorganizing could also introduce an unintentional pause in development, as things are being changed.
@stormrage8872 · 3 years ago
I hate Rancher and all it does; it's way too intrusive in the communication between the control plane and the nodes. We were left with no management working on production clusters for a while, until we decided to redeploy everything from scratch without Rancher. If you don't pay for support, it's a ticking bomb.
@DavidBerglund · 3 years ago
I totally agree. We switched to MicroK8s, as we were running Ubuntu anyway. You manage workloads as with any cluster and have a helpful CLI for cluster management. And, optionally, enterprise support!
@lavishly · 2 years ago
DO NOT use Rancher. Sad it sold. It took years of my life in frustration and stress. The team sucks: arrogant and not helpful. Left it and am never looking back!!!
@tdeutsch · 3 years ago
Rancher deploys ingress. You're just "using it wrong" :-D In the video, you are enabling ingress but not the default backend. Therefore, you cannot see a service, because that's the only service you would see for ingress:
$ kubectl get service -A | grep ingress
ingress-nginx   default-http-backend   ClusterIP   10.43.160.19   80/TCP   288d
However, even without the default backend, you should have ingress. Please check this:
$ kubectl get pods -A | grep ingress
ingress-nginx   default-http-backend-6977475d9b-4km5v     1/1   Running     0   24d
ingress-nginx   nginx-ingress-controller-mjblv            1/1   Running     0   24d
ingress-nginx   nginx-ingress-controller-sld26            1/1   Running     0   24d
ingress-nginx   nginx-ingress-controller-thbcc            1/1   Running     0   24d
kube-system     rke-ingress-controller-deploy-job-4c4hq   0/1   Completed   0   24d
You should have the ingress controller and should be able to create Ingresses; you just do not have the "default backend" service.
Regarding Docker, I mostly agree. However, two notes on this: I) Dockershim will be supported by third parties for a longer period. II) While RKE depends on Docker, they already have K3s and RKE2, which do not. Therefore, one should consider RKE a "soon to be replaced" product. The pity is that K3s and RKE2 are supported by Rancher for being imported, managed, and upgraded, but Rancher does not support creating clusters with them. Hopefully that comes sooner rather than later. K3s and k3OS especially are really awesome; I did a lot with them recently.
PSP will not go away; it will be reimplemented differently. But as of now, it's still here and therefore should be supported by Rancher.
I fully agree on cloud providers' Kubernetes versions. It's even worse: in Azure, if you create an AKS cluster from Rancher, everything is fine. It knows it's AKS and automatically grays out stuff like etcd and the controller, because those are "hidden" in AKS. If you create it in Azure and import it, you get big red failures right on your dashboard (the one with the gauges) because it does not detect that it's AKS.
@DevOpsToolkit · 3 years ago
My issue was mostly related to the cloud (I recognize that Rancher is amazing for on-prem). Rancher does not understand the clouds it supports. It is supposed to create a LoadBalancer Ingress that spins up an external LB. I'm eagerly awaiting RKE2 and K3s in Rancher. I'm surprised that those are not already there. My best guess is that the acquisition by SUSE created temporary delays.
@tdeutsch · 3 years ago
@@DevOpsToolkit I see. It's not Ingress you want; LoadBalancer it is :) I'll give you that point. However, I'm not aware of anything on-prem that has a load balancer built in. It is (or was) a cloud-only thing. I tried MetalLB, which works, but I never saw it "in the wild". And for myself (for my homelab) it's useless, because I need one IP to which I can forward the traffic. What I'm currently playing around with (and what may suit my homelab use case best) is keepalived deployed into the cluster, because I use k3OS now and you really can't do fancy stuff in it directly. But it does upgrade itself together with k3s :-) Like OpenShift/RHCOS does, but "free" and "lightweight" :D
@tdeutsch · 3 years ago
@@DevOpsToolkit And for cloud LBs: they are only "included" in the cloud-provided Kubernetes clusters, right? If I use AKS, I can have an LB. If I build my own cluster on Azure VMs, I need to bring my own K8s LB. Is this different with DO? Never used DO, tbh.
@DevOpsToolkit · 3 years ago
@@tdeutsch Oh yeah. I was referring only to the cloud. I never saw an implementation of LB Services in self-managed clusters. They might exist, but I haven't used them. Nevertheless, the whole purpose of LB Services in k8s is only to configure an external LB with the IPs of the nodes and the port. Other than that, it's the same as NodePort, which is probably what you're using.
@tdeutsch · 3 years ago
@@DevOpsToolkit OK, maybe we are speaking about different things. My apologies, as English is not my mother tongue. Let me explain what I know: speaking of LBs deployed together with or in Kubernetes, there's the Service type LoadBalancer used in cloud clusters like AKS, giving you the possibility to connect from the internet to the service, i.e., a kind of public IP. To have something similar on-prem, there's MetalLB. Basically, you give it a range of IP addresses and it gives them to Services of type LoadBalancer. Similar behavior to a cloud LB. Where I think we have a confusion is the thing I would call an LB "in front of" K8s. The main reason for that one is having a single HA endpoint which then gets forwarded to a node's ingress (port 80 or 443). For customer setups, we usually do that with a pair of Linux VMs with HAProxy on them and keepalived to give them a single IP. This is something I only use on-prem.