Idk how you pump out videos this fast. I could get used to this. Haha. Good work Jim and thank you.
@Jims-Garage 1 year ago
My pleasure! Don't want to leave people hanging with a cluster just waiting to be used ....
@theMeissullo 1 year ago
Just some advice for anyone learning Kubernetes: while it's totally possible to install Rancher onto the main cluster, it's not best practice. For a homelab, a separate Docker installation of Rancher to manage the productive cluster can be achieved easily and without much overhead. It's also stated like this in the Rancher docs under "Tips for Running Rancher": "Run Rancher on a Separate Cluster. Don't run other workloads or microservices in the Kubernetes cluster that Rancher is installed on." Otherwise, great content and thanks for your videos, great stuff!
@Jims-Garage 1 year ago
For production I agree, good point 👍 but a single-node Docker container is also risky, as you can lose cluster data (if you're running from that container). I think this is an acceptable compromise for a homelab.
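For context, the single-node setup being discussed is the Docker install from the Rancher docs. A minimal sketch, with a bind mount added to address the data-loss concern above (the host path `/opt/rancher` is an example):

```shell
# Single-node Rancher in Docker (homelab-style, per the Rancher docs).
# Cluster data lives inside the container unless you bind-mount it out;
# the -v flag below is what protects against losing it on container loss.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /opt/rancher:/var/lib/rancher \
  --privileged \
  rancher/rancher:latest
```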
@juanmarioparra 2 months ago
@@Jims-Garage Hello there, if I want to install Argo CD for GitOps, should it go on the admin machine or on the master node machine?
@s4shermman 7 months ago
One of the most amazing aspiring YouTubers. Jimmy, you are awesome. I've been following you for my brand new home lab, love your work.
@Jims-Garage 7 months ago
Thanks, Superman. Really appreciate the feedback and support
@speppucci 1 year ago
Fantastic, this also installed with zero errors. In Italy we say "smooth as butter" :) I saw that after installing Rancher the free resources of my PVE were significantly reduced; at the moment everything is running on a PVE N95 with 16 GB RAM.
@Jims-Garage 1 year ago
That's great, thanks for reporting back
@marcelorruiz 1 year ago
Flawless! I initially had an issue and things were not spinning up, but then I jumped over to your Discord and found a similar issue. It was a storage issue for me. After increasing that and trying again, Rancher was up and running beautifully. I really like the way you go through your steps and commands with explanations. This definitely helps if someone is new or needs to research an issue further. I can see the amount of time you must put in here. Much appreciated!
@Jims-Garage 1 year ago
Great to hear!
@Nehemoth_G 9 months ago
Can you point me to the solution? I was running into a problem with space; I added space but still got an error about ephemeral storage, and I haven't found where to set the limits. Reading the comments to see if someone has the same issue.
@maximodakila2873 1 year ago
You're an awesome teacher. I followed your instructions to the letter and got my Rancher server working on my K3s home lab. You have a subscriber here.
@Jims-Garage 1 year ago
Thanks, really appreciate the feedback
@BromZlab 1 year ago
Thank you Jim, good video again. Now Rancher is up and running :)
@Jims-Garage 1 year ago
That's great, good job!
@barryporg 1 year ago
Thanks for the videos Jim, they've been very clear and easy to follow so far. I hit a little glitch around the 12:00 mark: the command "kubectl expose deployment rancher --name=rancher-lb --port=443 --type=LoadBalancer -n cattle-system service/rancher-lb exposed" ran and displayed the same error as on your screen, but didn't seem to do anything else (i.e. "kubectl get svc -n cattle-system" didn't show an EXTERNAL-IP). This worked though: "kubectl expose deployment rancher --name=rancher-lb --port=443 --type=LoadBalancer -n cattle-system", which then displayed "service/rancher-lb exposed". So it seems that in your screen capture the command and its output run together, making the output "service/rancher-lb exposed" look like it's part of the command parameters.
@Jims-Garage 1 year ago
Good job, sorry for the confusion
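For anyone skimming past the confusion above, here is the same sequence split cleanly, so the command and its output aren't run together:

```shell
# Expose the rancher deployment through a LoadBalancer service.
# "service/rancher-lb exposed" is kubectl's OUTPUT, not part of the command.
kubectl expose deployment rancher --name=rancher-lb --port=443 \
  --type=LoadBalancer -n cattle-system

# Confirm the load balancer was assigned an EXTERNAL-IP.
kubectl get svc rancher-lb -n cattle-system
```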
@vinaduro 1 year ago
I’ve been waiting for this video. 🎉
@Jims-Garage 1 year ago
Hope you enjoyed it!
@georgebobolas6363 11 months ago
Another excellent video to get Rancher up and running.
@Jims-Garage 11 months ago
Thanks! More to come :)
@YannAlixworld 8 months ago
Your videos are awesome. After setting up the VM correctly with enough storage, all went well.
@Jims-Garage 8 months ago
Great, glad it worked for you
@jvrietveld 7 months ago
After two failed installations of K3s, I finally have a successful part 4. The reason was disk space. I now run nodes with a minimum of 20 GB of disk space.
@JPEaglesandKatz 5 months ago
Worked absolutely perfectly (after fixing some of my own mistakes, like installing the very latest stable version of K3s in the previous parts, which will cause issues later with Rancher). Great K3s guides, much appreciated... Nice to delve into this topic again after a long time with such an easy installation method!!! :) The only problem I'm facing now is that my old Ryzen 3600 with 48 GB of memory and only 12 cores is too limited to do anything with Proxmox... It keeps crashing my virtualized TrueNAS Scale with those 5 K3s nodes up... I've noticed this before when you over-provision cores and especially memory... Guess I need to go shopping.
@Jims-Garage 5 months ago
Thanks, appreciate the feedback
@akaza148 6 months ago
Hi Jim, thanks for your great work. I installed Kubernetes version v1.28.8+k3s1 to use Rancher's latest repo. It installed MetalLB automatically, so I was wondering why it didn't when you installed yours. Is this because of the newer version of Kubernetes?
@Jims-Garage 6 months ago
Hmm, not sure. It should have done.
@sakshamconsul1389 9 months ago
Really helpful! Thank you for this video series; it has provided me tremendous learning. For my remote installation, I had to set an external IP myself after exposing the port (it was stuck in the pending state):
kubectl patch service rancher-lb \
  -n cattle-system \
  -p '{"spec": {"type": "LoadBalancer", "externalIPs": ["192.168.3.61"]}}'
Then it worked like a charm :) (Also important: set the server URL to the correct one instead of localhost:XXXX if you're using SSH tunneling to reach the webpage.) :)
@chrisd1243 1 month ago
As always, a great tutorial. I have been wanting to do this for a while. Being new to this, would it be safe for me to assume that if I want to add another worker node and have 3 masters and 3 workers, I just need to add the third worker to the script and rerun the script?
@Jims-Garage 1 month ago
@@chrisd1243 Simply obtain the key and run the command on the instructions page: docs.k3s.io/quick-start
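The join step from the k3s quick-start page looks like this (the server IP is an example; the token path is the k3s default):

```shell
# On an existing master, read the cluster join token.
sudo cat /var/lib/rancher/k3s/server/node-token

# On the new worker, install the k3s agent and join it to the cluster,
# substituting your master's address and the token from above.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://192.168.3.21:6443 \
  K3S_TOKEN=<token-from-above> \
  sh -
```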
@terjemoen8193 10 months ago
Easy to follow and complete, awesome stuff!
@Jims-Garage 10 months ago
Thanks 👍
@zombiewolf007 6 months ago
Amazing video series. Thank you, my friend. Is that rancher-lb resilient? In other words, is it always available on all nodes, and if a node fails, is the rancher-lb still up and alive on another node? Does it act like a pod? Not sure if there's more of that info in a prior or later video from you. Thanks in advance!
@Jims-Garage 6 months ago
Yes, the load balancer is cluster-wide. If a node fails, the pod should migrate and be available.
@taherboujrida8110 5 months ago
Fantastic, excellent work, clear and to the point... keep it up Jim, you are awesome.
@Jims-Garage 5 months ago
Thanks, appreciate it.
@apatock 3 months ago
Hi Jim, on my side the 3 rancher pods are running on the worker nodes: one on worker1 and two on worker2. In the video you said the rancher pods were running on all three master nodes. What did I do wrong?
@Jims-Garage 3 months ago
Nothing, that's absolutely fine. They can move freely between any of the nodes.
@IstvanKovacs 1 year ago
Hi Jim, first of all, thank you very much for your effort to allow non-IT experts to try out the strengths of a Kubernetes cluster in their home lab. I have run the suggested installation several times and always get an error with the following command: kubectl -n cattle-system rollout status deploy/rancher. After a few minutes I get the following error message: "error: deployment "rancher" exceeded its progress deadline". The first time, I ran out of the recommended 3.5 GB of storage space on the Ubuntu virtual master1 server disk. Now that I have increased the storage space, I get the above error again. Should I increase the capacity further for a successful Rancher deployment, or is there anything else that would help? Thank you for your effort!
@Jims-Garage 1 year ago
Did you make sure to shut down the VM after altering the storage? A reboot will not work.
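When the rollout exceeds its progress deadline, a few read-only commands usually reveal whether the cause is disk pressure, evictions, or image pulls. A sketch (namespace names follow this series; run from any machine with kubectl access):

```shell
# Which pods are stuck, and on which nodes?
kubectl -n cattle-system get pods -o wide

# Recent events often name the exact failure (e.g. evictions, pull errors).
kubectl get events -A --sort-by=.metadata.creationTimestamp | tail -20

# Disk pressure on a node shows up in its conditions.
kubectl describe nodes | grep -i "pressure"
```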
@IstvanKovacs 1 year ago
@@Jims-Garage Thanks, I am going to check it.
@zakhounet 9 months ago
Hi Jim, first of all "Merci Beaucoup" for your tutorial, really appreciated. I've followed your steps (the only difference is Ubuntu 22.04), and despite 4 cores per VM (3 masters, 2 workers) and 4 GB RAM per VM on Proxmox, Rancher is displaying a total of 8 cores and 7.85 GB RAM available. Very strange, as each node in Rancher is marked as 4 CPU and 7.85 GB RAM. Any ideas or leads to investigate? I am using K3s v1.27.7+k3s2 and the "latest" Rancher image. Thanks in advance for your help.
@Jims-Garage 9 months ago
Thanks, other users have reported the same. I believe it's a bug with the latest Rancher version.
@joergbuesing5081 7 months ago
Hey Jim, great work and thanks for the snippets on your GitHub. All is working fine as expected. I shut down the masters one by one to make a snapshot, and the nginx site was available the whole time. But when I shut down worker1 (with nginx), the connection was broken; nginx didn't switch to worker2. Shouldn't it?
@Jims-Garage 7 months ago
Yes, it should fail over to the other worker node. It can take a couple of minutes; I believe you can specify the tolerance (wait time).
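The "tolerance" mentioned here maps to Kubernetes pod tolerations with `tolerationSeconds` (default 300 s before pods on an unreachable node are evicted). A hedged sketch of shortening it, assuming an example deployment named "nginx" in the default namespace:

```shell
# Shorten how long pods tolerate an unreachable/not-ready node from
# the default 300 s to 30 s, so failover starts sooner.
# kubectl patch accepts a YAML patch with --type merge.
kubectl patch deployment nginx --type merge -p '
spec:
  template:
    spec:
      tolerations:
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 30
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 30
'
```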
@Glatze603 1 year ago
Hi Jim, great job!!! Everything worked fine and without errors until I tried to install Rancher. Here I get the error "The connection to the server MY-FIRST-MASTER-IP:6443 was refused - did you specify the right host or port?" when I check the status of the installation. After a minute I get "Error from server (ServiceUnavailable): apiserver not ready". The only difference from your instructions is that I use Debian 12 instead of Ubuntu. Do you (or someone else) have any idea about this? Thanks a lot.
@Glatze603 1 year ago
One important piece of info: I resized the disks of all VMs to 6 GB before I started deploying K3s, but during the installation of Rancher, master1 was 100% full... The next step is to resize the disks to 10 GB, then let's see if the error comes up again.
@Jims-Garage 1 year ago
@@Glatze603 Yes, increase the disk size and try again. Let me know if it doesn't work.
@Glatze603 1 year ago
@@Jims-Garage The worker disk space was the problem. I had previously expanded it from 6 to 8 GB, and disk usage is now (after a successful Rancher installation) at 74% (worker1), 67% (worker2) and 82% (worker3). Depending on what comes next, the disk space will have to be increased further.
@JustinJ. 1 year ago
7:04 you configured cert-manager... 12:25 shows the certificate is not valid. Does cert-manager not provide the certificate for the application at the browser level? How does cert-manager work in this setup?
@Jims-Garage 1 year ago
Hey, thanks. Read up on self-signed certificates. The certificate is valid; Google Chrome just doesn't trust it because we created it ourselves. This is different from Traefik with Let's Encrypt, where Let's Encrypt issues the certificate and is trusted by the browser.
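A quick way to see this for yourself is to inspect the certificate the cluster serves; for a self-signed cert the issuer and subject are the same CA. A sketch (the IP and hostname are examples, substitute your rancher-lb address):

```shell
# Fetch and decode the certificate served on the load balancer IP.
# For the self-signed setup, "issuer" will be Rancher's own CA rather
# than a public CA like Let's Encrypt.
openssl s_client -connect 192.168.3.61:443 -servername rancher.local \
  </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -dates
```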
@NikolaNovoselec 9 months ago
Upon installing rancher-latest, the pods first show status ContainerCreating, then ErrImagePull, then ImagePullBackOff, then ContainerStatusUnknown / Evicted. I've followed your guides step by step, but I'm not sure how to proceed now. Any idea what the problem could be?
@Jims-Garage 9 months ago
It might be a bug with the latest version. You can alter the script and set it to stable instead; give that a try. Also, it's worth upping resources in case it doesn't have enough.
@NikolaNovoselec 9 months ago
@@Jims-Garage I'm dumb. I only allocated 4 GB to the HDDs and of course that wasn't enough. After increasing to 20 GB, Rancher installed without a problem. But now I'm getting the error messages "Ensuring load balancer" and "Error syncing load balancer: failed to ensure load balancer: no address pools could be found". I can confirm that the address pool is available, and I can access nginx on the first and Rancher on the second IP of the address pool. What am I doing wrong?
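For anyone hitting the same "no address pools could be found" error: it usually means MetalLB can't see an IPAddressPool/L2Advertisement pair. A hedged sketch of re-applying one (the pool name and IP range are examples; match them to your network and existing pool):

```shell
# Re-apply a MetalLB address pool and matching L2 advertisement.
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.3.60-192.168.3.80
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-adv
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
EOF

# Confirm both objects exist.
kubectl -n metallb-system get ipaddresspools,l2advertisements
```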
@torgrimt 1 year ago
Really good video series, can't wait until Traefik and Longhorn hit. I'm planning on building a bare-metal cluster using this method, but I'm still waiting. It would be cool if you did some real-life homelab apps as well, for instance using tags for tagging hosts with USB devices (like Zigbee dongles and so on).
@Jims-Garage 1 year ago
Thanks, I'll cover taints and labels in the next video.
@fedefede843 1 year ago
Very, very nice, sir!
@Jims-Garage 1 year ago
You're welcome 😁
@Rockshoes1 3 months ago
No issues, but I wanted to thank you!
@Jims-Garage 3 months ago
Much appreciated 👍
@wiesawpeche7273 1 year ago
ditto 😉
@brodur 5 months ago
So what happens if the IP assigned to the load balancer ever changes? I noticed on the Rancher login page that it was set to the specific IP address.
@Jims-Garage 5 months ago
Load balancer IPs should generally be static. All of mine are.
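If you want to pin the service to one address explicitly rather than rely on MetalLB's first-free assignment, newer MetalLB versions (0.13+) support an annotation for this. A sketch, assuming the IP is inside your configured pool (the address is an example):

```shell
# Pin rancher-lb to a fixed address from the MetalLB pool so it
# survives service re-creation with the same IP.
kubectl annotate service rancher-lb -n cattle-system \
  metallb.universe.tf/loadBalancerIPs=192.168.3.61 --overwrite
```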
@victoranolu4376 4 months ago
How were you able to fix the error when exposing the Rancher service through the rancher-lb load balancer? I am trying the command and getting the same error. The other thing is that I could edit the built-in rancher service to LoadBalancer, but in this demo you have 3 rancher services along with the built-in rancher service. I am stuck there. Help. Thanks.
@Jims-Garage 4 months ago
Don't use a KVM image for the VM, use standard.
@victoranolu4376 4 months ago
@@Jims-Garage Thank you for getting back to me. I used the standard image and also the stable Rancher Helm release, but the error still arises. I am currently trying to expose the service through a load balancer. I want to know if a third rancher service is required, or do I just change the built-in rancher service to a LoadBalancer?
@Jims-Garage 4 months ago
@@victoranolu4376 Yeah, you just need to specify a LoadBalancer (with MetalLB).
@victoranolu4376 4 months ago
@@Jims-Garage I am back again. I have deployed Rancher on Azure Kubernetes, but I also have this issue of Rancher not having a load balancer IP for its ingress. If I change the rancher service from ClusterIP to LoadBalancer, it shows the Rancher homepage but fails to log in with the pre-defined password set in the deployment.
@chrisd1243 1 month ago
OK, I need some help. I broke a rule and didn't bother to pay attention to the password LOL. Didn't realize it till this morning when I started the Longhorn tutorial. I'm sure there's got to be a way to reset the Rancher admin password via the command line, or do I need to tear it all down and start from scratch? I looked at the Rancher docs, but the output they say I should get is not what I get. Thanks.
@Jims-Garage 1 month ago
@@chrisd1243 Yes, you can reset it via the command line; the command is on their website.
@chrisd1243 1 month ago
@@Jims-Garage Yeah, I originally found this:
$ KUBECONFIG=./kube_config_cluster.yml
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $(kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher --no-headers | head -1 | awk '{ print $1 }') -c rancher -- reset-password
New password for default administrator (user-xxxxx):
But when I run it, it tells me the file kube_config_cluster.yml does not exist. It's entirely possible I'm executing it in the wrong spot. I tried on both the admin machine and the master.
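The `kube_config_cluster.yml` path in that snippet comes from the RKE docs; on a k3s node the kubeconfig lives at `/etc/rancher/k3s/k3s.yaml` instead. A sketch of the same reset adapted for this series' k3s setup (run on a master; `reset-password` is Rancher's built-in reset command):

```shell
# Point kubectl at k3s's own kubeconfig (may require sudo to read).
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Find one rancher pod and run its reset-password command; it prints
# a new password for the default administrator.
POD=$(kubectl -n cattle-system get pods -l app=rancher --no-headers \
      | head -1 | awk '{print $1}')
kubectl -n cattle-system exec "$POD" -c rancher -- reset-password
```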
@Jr-hv1ct 1 year ago
Hey Jim, thanks for the videos. I'm having an issue where the webpages for nginx and Rancher are not loading, though I can ping the assigned IPs successfully. This is strange, as I can reach other VMs on the same VLAN, including Proxmox itself. What could be the problem?
@Jims-Garage 1 year ago
What does kubectl get svc -n nginx show? Hop into Discord if you can; it's easier to diagnose there.
@Jr-hv1ct 1 year ago
@@Jims-Garage That gives the message "No resources found in nginx namespace". Running it for the cattle-system namespace shows rancher, rancher-lb and rancher-webhook, with only the lb having an external IP. To see nginx I have to run "kubectl get svc", which shows the nginx-1 load balancer and kubernetes. Edit: after finding the command to list namespaces, nginx and kubernetes are in the default namespace.
@Jims-Garage 1 year ago
@@Jr-hv1ct Check that the traffic policy is Cluster, not Local. (You'll see it in the updated GitHub manifest file.)
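The traffic-policy check and fix can be done directly on the live service; a sketch using the nginx-1 service in the default namespace named earlier in this thread:

```shell
# Show the current external traffic policy ("Local" or "Cluster").
kubectl get svc nginx-1 -n default \
  -o jsonpath='{.spec.externalTrafficPolicy}'

# Switch it to Cluster so any node can forward traffic to the pods.
kubectl patch svc nginx-1 -n default \
  -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'
```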
@Jr-hv1ct 1 year ago
@@Jims-Garage Do you mean the K3s script or the Rancher file?
@Jims-Garage 1 year ago
@@Jr-hv1ct I recommend you hop on Discord and share your configs.
@levimeykens1614 7 months ago
I keep getting an error when I try to import the cluster: "no objects passed to apply". Any thoughts?
@Jims-Garage 7 months ago
You shouldn't need to import it. Does it not show as "local"?
@levimeykens1614 7 months ago
@@Jims-Garage Dear Sir, I would like to express my sincere gratitude for your prompt response to my previous query. Upon careful consideration, I realize that my question may not have been entirely relevant, and for this, I offer my apologies for any inconvenience this may have caused. As someone who is still relatively new to the world of computers, I am continually in a learning process and exploring various approaches. During my attempt to execute a simplified installation method on Rancher, I encountered an error message when adding a cluster. Currently, I am studying your videos to understand the process as you have suggested, but I find that some concepts still elude me. Nevertheless, I am determined to persevere and tackle this challenge. It involves a significant amount of concepts to learn, but nonetheless, I would like to thank you in advance for your support. Yours sincerely,
@Jims-Garage 7 months ago
@@levimeykens1614 you're most welcome. Hop into Discord if you'd like more support
@BonesMoses 8 months ago
As a note, there seems to be a limit on how new a Kubernetes version the alpha repo will allow. I tried this with k3s v1.29 and it said I had to use 1.28 or below; version 1.27 was the latest supported as of this comment.
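To avoid that mismatch, the k3s installer lets you pin Kubernetes to a Rancher-supported minor version via the documented `INSTALL_K3S_VERSION` variable. A sketch (the exact tag is an example; check the Rancher support matrix for whatever is current when you install):

```shell
# Pin the k3s install to a specific Kubernetes release instead of the
# latest stable channel, so Rancher's version check passes.
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.27.14+k3s1" sh -
```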
@danmcdaniel709 9 months ago
I get to the rollout step and then I get this:
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
Waiting for deployment "rancher" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
Those last two lines keep repeating over and over. It never finishes. Any ideas?
@Jims-Garage 9 months ago
Do your nodes have enough resources?
@danmcdaniel709 9 months ago
@@Jims-Garage That was it. I had to increase the disk size to 5 GB on the master nodes and to 8 GB on the workers for it to work. On to part 5!
@Jims-Garage 9 months ago
@@danmcdaniel709 You probably want a little more than that if you can stretch to it. A minimum of 10 GB, in my experience.
@joshuafoley3856 7 months ago
@@Jims-Garage I had this same issue (it seems like a lot of us in the comments did). You might want to go over increasing the HDD size as part of this guide. I was stumped on it for a while but just figured out how to do it.
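For anyone else stuck on this step: after enlarging the virtual disk in Proxmox (and fully powering the VM off and on, as noted earlier in the thread), the partition and filesystem inside the guest still need to be grown. A hedged sketch for Ubuntu guests; device and partition names are examples, and `growpart` comes from the cloud-guest-utils package:

```shell
lsblk                      # confirm the actual disk/partition names first
sudo growpart /dev/sda 1   # expand partition 1 to fill the enlarged disk
sudo resize2fs /dev/sda1   # grow an ext4 filesystem to match the partition
df -h /                    # verify the new size is visible
```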
@tolpacourt 3 months ago
⚠ WARNING: `installCRDs` is deprecated, use `crds.enabled` instead. cert-manager v1.15.1 has been deployed successfully!
@janiel471 6 months ago
I just wonder why... it's "cattle-system"??? 😂
@Jims-Garage 6 months ago
Because the inventory is "cattle, not pets". Quite brutal, but you get the idea: inventory should be disposable/replaceable.
@janiel471 6 months ago
@@Jims-Garage IP branding namespace 😝
@spiritcxz 3 months ago
Honestly, the Rancher UI is very bad; I don't like it.
@Jims-Garage 3 months ago
Do you have an alternative that you prefer?
@spiritcxz 3 months ago
@@Jims-Garage The OpenShift/OKD console, which is able to install on vanilla k8s...
@subzizo091 2 months ago
Hello, a question regarding installing Rancher onto an existing RKE2 cluster ("v1.28.12+rke2r1") with Traefik and cert-manager: I get a permission error for the service account (shouldn't this be set up by Helm while installing the deployment?). Note I am using "rancher-stable/rancher".
[FATAL] "clusters.management.cattle.io" is forbidden: User "system:serviceaccount:cattle-system:rancher" cannot list resource "clusters" in API group "management.cattle.io" at the cluster scope