[ Kube 23 ] Dynamically provision NFS persistent volumes in Kubernetes

40,624 views

Just me and Opensource


Comments: 312
@alexanderhill2915 · 5 years ago
Another great session. Thank you for sharing your knowledge and making it simple and easy for us all to learn. You're doing an excellent job! Much appreciated.
@justmeandopensource · 5 years ago
Hi Alexander, thanks for watching this video and taking time to comment/appreciate.
@sumitology · 4 years ago
Clean and clear. I was so confused about this topic before I watched this video. Thank you, mate.
@justmeandopensource · 4 years ago
Hi, many thanks for watching and subscribing to my channel. Cheers.
@luttferreira8109 · 5 years ago
Best and most complete Kubernetes video series. Way to go!
@justmeandopensource · 5 years ago
Hi Lutt, thanks for watching.
@mohammadmajdalawi5745 · 2 years ago
What a great session. You covered all the details and use cases. Appreciated, thanks.
@justmeandopensource · 2 years ago
Hi Mohammad, thanks for watching.
@vasinev · 5 years ago
Thank you! It's the first video where someone simply shows how to do dynamic NFS provisioning.
@justmeandopensource · 5 years ago
Thanks for watching.
@s0j0urner15 · 3 years ago
This is wonderful content which needs to be appreciated and also be monetized for your effort. I am planning to support you once I get a job.
@giovannicoutinho5966 · 5 years ago
I have been going through your channel in the past few days. Awesome material!
@justmeandopensource · 5 years ago
Hi Giovanni, thanks for watching this video and taking time to comment. Cheers.
@rakeshkotian3120 · 4 years ago
This is too good. Thanks for explaining the workings of dynamic NFS provisioning with block diagrams.
@justmeandopensource · 4 years ago
Hi Rakesh, thanks for watching.
4 years ago
Hello, this doesn't seem to work anymore. When I try to add a PVC I get:
Normal  ExternalProvisioning  3s (x9 over 117s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator
I'm running k8s 1.20 on a 3-node cluster. If I create a PV with:
nfs:
  server: SERVERIP
  path: NFS SHARE
and a PVC bound to that PV, it works. But the dynamic setup fails :/
4 years ago
When I check the logs of nfs-client-provisioner I get:
I1214 22:08:03.402156  1 controller.go:987] provision "default/pvc1" class "managed-nfs-storage": started
E1214 22:08:03.406446  1 controller.go:1004] provision "default/pvc1" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
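For readers hitting the same "selfLink was empty" error: Kubernetes 1.20 disabled the selfLink field (the RemoveSelfLink feature gate flipped to true by default), which this older nfs-client-provisioner image depends on. A temporary workaround, assuming a kubeadm-style cluster where the API server runs as a static pod, was to re-enable the field:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (on the control-plane node).
# Add the feature-gate flag to the existing kube-apiserver command; the
# kubelet restarts the static pod automatically when the file is saved.
spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=RemoveSelfLink=false
    # ... existing flags unchanged ...
```

This only bought time: the gate was later locked and removed, so on current clusters the durable fix is to swap the deployment's image for the maintained successor, nfs-subdir-external-provisioner (e.g. registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 — check the project's repo for the current tag), which no longer relies on selfLink.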
@justmeandopensource · 4 years ago
@ Hi, thanks for watching. I will re-test this video soon and let you know. Things might have changed.
4 years ago
@@justmeandopensource I tested: on 1.19.5 it works, on 1.20 it fails.
@justmeandopensource · 4 years ago
@ Ah okay. Something must have changed. I will work on it. Thanks for confirming.
@shantanupareek6631 · 3 years ago
Best Kubernetes video series!
@justmeandopensource · 3 years ago
Hi Shantanu, many thanks for your interest in this channel. Glad you like it. Cheers.
@mazenezzeddine5260 · 4 years ago
Thank you so much for your videos. I believe they are the most valuable ones, even among those that are not available for free.
@justmeandopensource · 4 years ago
Hi Mazen, many thanks for watching. Cheers.
@trezay5950 · 1 year ago
Great video, easy-to-follow instructions. Thanks!
@justmeandopensource · 1 year ago
Thanks for watching.
@anandanthony4319 · 4 years ago
Worthwhile explanations in all your videos.
@justmeandopensource · 4 years ago
Hi Anand, thanks for watching.
@ajit555db · 5 years ago
Great session. Completed the hands-on using kubernetes-dind-cluster. This will be very helpful for creating various deployments using Helm in my homelab k8s cluster without relying on cloud storage.
@waterkingdom9839 · 5 years ago
Hello, I like your approach to teaching. These are all interesting and great videos for quick learning. May I request that you share an example of how to set up dynamic provisioning on GCP using Persistent Disk? Awaiting your inputs... thanks.
@justmeandopensource · 5 years ago
@@waterkingdom9839 Thanks for your comment. So far I have only been playing with k8s on bare-metal servers. I am yet to explore it on GCP and AWS. Soon you will see videos around these. Thanks.
@vamseenath1 · 4 years ago
Hi Venkat, thank you. Now I've got a clear picture. I have created a K8s cluster on VMware infrastructure; how do I proceed with the creation of persistent volumes? The storage is available in the form of iSCSI datastores and NFS datastores. Thank you.
@justmeandopensource · 4 years ago
No worries. I haven't used iSCSI datastores, but the documentation below looks like the one for you. github.com/kubernetes-retired/external-storage/tree/master/iscsi/targetd Cheers.
@vamseenath1 · 4 years ago
@@justmeandopensource Thanks Venkat.
@justmeandopensource · 4 years ago
@@vamseenath1 No worries. You are welcome.
@sherifkhedr9362 · 3 years ago
Thanks for your hard work. I'd appreciate it if you shared the name of the terminal you use.
@giovannicoutinho5966 · 5 years ago
I have a question regarding storage. If I'm going to use NFS to store persistent volumes, how much disk space do I need on the master and worker nodes? What data do k8s nodes store? I would guess Docker images and logs, but I bet there is more. Kudos for your efforts in sharing knowledge with the community!
@justmeandopensource · 5 years ago
Hi Giovanni, it all depends on your needs. The NFS server can be external, running on a separate machine; you manage disks with the desired capacity on that NFS server. Worker and master nodes, as you guessed, only need sufficient storage space for storing Docker images. If you are using it in production and have lots of pods running, then you might need more storage. You would normally have monitoring solutions like Nagios or Check_mk implemented and be alerted when disk space runs low. More applications in your cluster means more containers, which means more space needed. One other thing to bear in mind: if you use hostPath to bind a node volume inside a container, then you have to think about how much storage you need there. Thanks.
@kushagraverma7855 · 4 years ago
What's the bash prompt you are using? Looks really cool.
@justmeandopensource · 4 years ago
Hi Kushagra, thanks for watching. This is the Manjaro Gnome edition. I have done a video on my terminal setup which you can watch in the link below. Cheers. kzbin.info/www/bejne/qaCkqIinZ8iEfrM
@waterkingdom9839 · 5 years ago
Hello Venkat, I like your approach to teaching. These are all interesting and great videos for quick learning. May I request that you share an example of how to set up dynamic provisioning on GCP using Persistent Disk? Awaiting your inputs... thanks.
@justmeandopensource · 5 years ago
Hi, many thanks for watching this video and taking time to give feedback. Much appreciated. I will add this request to my to-do list. I have videos waiting to be released in the coming weeks. Thanks for requesting this new topic. Cheers.
@waterkingdom9839 · 5 years ago
@@justmeandopensource Thanks for your prompt response. Much appreciated. There are two videos which depend on dynamic provisioning, but because I am not able to set up NFS-based storage, I am not able to follow them. Can you just send me the files with instructions to follow, as you might have them handy? ...thanks.
@justmeandopensource · 5 years ago
Hi Water Kingdom, unfortunately I don't have them as I am yet to try it on GKE. All my videos are based on bare metal. I only did a couple of videos on Google Cloud.
4 years ago
Great tutorial! One question please: can I increase the replicas to ensure high availability?
@justmeandopensource · 4 years ago
Hi Manfred, thanks for watching. I haven't completely looked at the code to say if that is required/supported. The way the nfs-client-provisioner works is by mounting the NFS share on the Kubernetes worker node where it is running and distributing it to the pods. I will have to test it before I can comment.
4 years ago
@@justmeandopensource Got it working with an Isilon NFS appliance. Really easy to install and use. It runs well in our production environment. But I use a Helm chart for installing it - it is much easier: helm install nfs-provisioner --set nfs.server=[NFS-SERVER] --set nfs.path=[EXPORT_FS] stable/nfs-client-provisioner --namespace=nfs-provisioner --create-namespace
@justmeandopensource · 4 years ago
@ Perfect. Even though I personally prefer using Helm, whenever I do a video I prefer the manual way so that viewers get an understanding of what they are deploying. Thanks for sharing this detail. Cheers.
@msahsan1 · 4 years ago
Wonderful, thanks again! BTW, do you have any reverse proxy (nginx/traefik) video for Docker?
@justmeandopensource · 4 years ago
Hi, thanks for watching. I have done nginx and traefik ingress in Kubernetes, but not as a reverse proxy in Docker. kzbin.info/www/bejne/aIe4gmeNn7Gresk kzbin.info/www/bejne/d5Czm515gpaYgqM kzbin.info/www/bejne/gZ-yi6quq92ZpKM
@zachguo3357 · 4 years ago
It seems SCs and PVs are not namespaced. What are the best practices to provision PVs for different namespaces? (production/staging/test or multi-tenancy)
@justmeandopensource · 4 years ago
Hi Zach, thanks for watching. Yes, I believe they are not confined to any particular namespace. The PVC, however, is a namespaced resource, so the claim lives in whichever namespace you create it in.
@ratnakarreddy1627 · 5 years ago
Could you please make a video on dynamically provisioning GlusterFS persistent volumes in Kubernetes?
@justmeandopensource · 5 years ago
Hi Ratnakar, thanks for watching this video. I haven't used GlusterFS before, but I just had a quick glance at their docs and it looks interesting. I am always open to learning new stuff. I will read through the docs and once I gain some understanding I will definitely make a video on it. Thanks for suggesting the topic though. Cheers.
@justmeandopensource · 4 years ago
@Pushp Vashisht Hi, thanks for your interest. Yes, I originally did the GlusterFS series to give users some basic knowledge about GlusterFS before using it in a Kubernetes cluster. Since then I have been struggling to find time. It's in my list and I will certainly do some videos on it soon. Cheers.
@krishnaveni-cf5sy · 4 years ago
Hi Mr. Venkat, can we configure the NFS client in a different namespace?
@justmeandopensource · 3 years ago
Yes, you can, in any namespace.
@RohitSingh-ku4hb · 4 years ago
Can we edit the PVC and increase/decrease the requested volume without restarting the pods?
@justmeandopensource · 4 years ago
Hi Rohit, thanks for watching. I don't think you can do that without restarting the pod, but I am not 100% sure without testing.
@nagendrareddybandi1710 · 4 years ago
Hi, I've read on some sites and got the info below, FYI. I think resizing the disk is possible (see kubernetes.io/docs/concepts/storage/persistent-volumes/#storageclasses):
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "192.168.10.100:8080"
  restuser: ""
  secretNamespace: ""
  secretName: ""
allowVolumeExpansion: true
@RohitSingh-ku4hb · 4 years ago
@@nagendrareddybandi1710 Right, I also came across this flag for allowing the expansion of a PV.
@JeffGmi · 4 years ago
Very interesting... One comment/question regarding your NFS server, though. As a storage admin/engineer, I'd need a really good reason to export something to the world, especially with the no_root_squash option. Can you comment on why you've chosen to demo this in this manner as opposed to creating an export following the principle of least privilege?
@justmeandopensource · 4 years ago
Hi Jeff, thanks for watching. I am not an expert in storage administration. I agree with you on the least-privilege concept. This video is just to demonstrate the idea of using NFS as persistent volumes in a Kubernetes cluster. I didn't want to concentrate on NFS in this video. Cheers.
@shehbazsingh911 · 2 years ago
Great video, thanks for sharing this demo. I just need to understand how the storage class identifies the provisioner. The provisioner name is an environment variable in the nfs-provisioner pod (which is a namespaced resource). How is it accessible in the storage class (which is a cluster-wide resource)?
@ryanfradj6143 · 4 years ago
Great video! I was wondering if the NFS server could be deployed in its own container in my namespace?
@justmeandopensource · 4 years ago
Hi Ryan, thanks for watching. You can always do that, but the volumes (underlying storage) for that NFS server pod should still come from somewhere outside the cluster through a central storage solution.
@jahanzaibkhan3027 · 4 years ago
Hi! Isn't this slower? IOPS, network latency, etc.?
@justmeandopensource · 3 years ago
I was just showing the possibility of using an NFS server as a dynamic provisioning storage backend. You will have to analyze the issues/bottlenecks around your chosen solution.
@nagendrareddybandi1710 · 4 years ago
Hi sir, thank you so much for this. Everything was created fine (SA, role, rolebinding, clusterrole, clusterrolebinding, storage class, provisioner pod), but when I try to create a PVC it sits in Pending state for a long time. What could be the reason?
Events:
  Type    Reason                Age                  From                         Message
  ----    ------                ----                 ----                         -------
  Normal  ExternalProvisioning  9s (x20 over 4m42s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator
Also, if worker1 crashes, will the provisioner pod move to another node?
@justmeandopensource · 4 years ago
Hi Nagendra, thanks for watching. Yes, the nfs-provisioner pod is a normal k8s resource: if it crashes, or the node where it runs crashes, it will get started on another node. Regarding your PVC in Pending state, have you verified manually that you can mount the NFS share on your worker nodes? And did you specify the storage class name as mentioned in this video?
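The manual verification suggested above can be done like this (the server IP and export path here are placeholders, not the video's actual values; substitute your own):

```shell
# Run on each worker node. Requires the NFS client package
# (nfs-common on Debian/Ubuntu, nfs-utils on RHEL/CentOS).
showmount -e 192.168.1.50                      # list exports offered by the NFS server
sudo mount -t nfs 192.168.1.50:/srv/nfs/kubedata /mnt
touch /mnt/write-test && ls -l /mnt            # confirm read/write access
sudo umount /mnt
```

If this fails on a worker node, the provisioner pod will fail the same way, and PVCs will stay Pending regardless of how the Kubernetes objects are configured.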
@nagendrareddybandi1710 · 4 years ago
@@justmeandopensource Yes sir, I verified manually that the NFS share can be mounted, so that part is good. I will reconfigure everything, and if any issues remain I'll come back to you on this. Thanks for the support.
@justmeandopensource · 4 years ago
@@nagendrareddybandi1710 You are welcome.
@arunprasath2281 · 5 years ago
Hi Venkat, firstly, thanks for taking the time to create videos for us. In simple words: sema sema (awesome)!! My cluster is on VMware PKS; do I need the NFS pod in the cluster for auto-provisioning of PVC claims, or can I directly create a PVC? Also, could you tell me whether I need NFS in my cluster?
@justmeandopensource · 5 years ago
Hi Arun, thanks for watching this video. Given it's a cloud provider, you can make use of vSphere volumes. You (or your k8s cluster admin) will have to create a storage class, after which you can create a PVC specifying that storage class and a PV will get created for you. kzbin.info/www/bejne/amHJnWx-os5neKc You can also use NFS persistent volume provisioning if you want. Thanks.
@JonnieAlpha · 4 years ago
Venkat, I wonder if it is best practice to use PVs on NFS for the purpose of deploying an SQL database? Our Kubernetes will be on-premises instead of in the cloud.
@justmeandopensource · 4 years ago
Hi, thanks for watching. You definitely need persistent volumes for your database, no doubt about that. The type of storage solution you could use on-prem depends on various factors; there are quite a lot you can use. I have explored just NFS, but in production you could use Ceph/Rook, GlusterFS or other clustered storage solutions.
@giovannicoutinho5966 · 5 years ago
If I want to have two NFS mounts, how should I proceed with your examples? I have RAID0 and RAID1 shares on my NFS server, and I wanted to create a fast storage class and a normal storage class.
@justmeandopensource · 5 years ago
Hi Giovanni, thanks for watching this video. You can follow the same steps as shown in this video for each of your provisioners. You can't use one provisioner to provide both storage classes. So please follow the steps and set up one NFS provisioner first. Then take the same set of manifests (github.com/justmeandopensource/kubernetes/tree/master/yamls/nfs-provisioner) and change the names as follows. In class.yaml, change line#4 and line#5 (you should change the provisioner name from example.com/nfs to something else like example.com/fastnfs). In default-sc.yaml, remove the annotations (line#5 & line#6), then change the provisioner name on line#7 to example.com/fastnfs. In deployment.yaml, update line#23 with the provisioner name and then update the NFS path accordingly. Make sure to change the names of the resources and app labels accordingly, as you are deploying another nfs-provisioner. Hope this makes sense. Thanks.
@mongaunique86 · 4 years ago
I am getting the error "waiting for a volume to be created" and the PV is not getting created. Can anyone help?
@justmeandopensource · 4 years ago
Hi, thanks for watching. Did you check first that you can mount the NFS shares from the worker nodes?
@balaspidy · 4 years ago
Hi sir, all your videos are simple and great. I have a question: why do we need RBAC for dynamic NFS provisioning here?
@justmeandopensource · 4 years ago
Hi Bala, thanks for watching. I don't understand your question. Could you put it a different way please? Thanks.
@balaspidy · 4 years ago
@@justmeandopensource I created the storage class, PVC and nfs-provisioner, but not rbac.yaml; hence the PVC status was Pending. Once I applied rbac.yaml, the PVC became available. My question is: why do we need a cluster role, role and service account for the NFS provisioner?
@justmeandopensource · 4 years ago
@@balaspidy If you look at the resources we deploy for this nfs-provisioner, we are using a service account that needs certain privileges to list/create resources across all namespaces, hence we need cluster-wide privileges.
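For context, the ClusterRole in the provisioner's rbac.yaml boils down to roughly the following (a sketch based on the external-storage nfs-client manifests; verb lists may differ slightly between versions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  # The provisioner creates/deletes PVs in response to PVCs, so it needs
  # cluster-scoped access to PVs and read access to PVCs in all namespaces.
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  # Events are recorded against PVCs so provisioning progress is visible.
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
```

A ClusterRoleBinding then ties this role to the provisioner's service account; some versions additionally use a namespaced Role/RoleBinding granting endpoints access for leader election.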
@balaspidy · 4 years ago
@@justmeandopensource Thanks so much for your quick response... will there be a video on cluster roles and service accounts?
@justmeandopensource · 4 years ago
@@balaspidy I will try. There is also this RBAC-related video I did a while ago. It might be useful, as I covered some roles. kzbin.info/www/bejne/i2eagKqDYspsqLM
@afriyieabrahamkwabena9068 · 4 years ago
Thanks for this video, very helpful. I have written an application in Go that uses MongoDB as its database. When the frontend application (an HTTP server) starts, it connects to the MongoDB server and then listens for CRUD requests from any HTTP client. I have created a Docker image of the frontend application and pushed it to Docker Hub. I would like to ask whether I would be able to deploy my application on Kubernetes with this setup: NFS persistent volumes and a MongoDB replica set deployed on VMs running on my local machine. I have already set up my Kubernetes cluster and, following this video, created the persistent volumes. Is this possible?
@justmeandopensource · 4 years ago
Hi Afriyie, thanks for watching. So you have your MongoDB replica set running on virtual machines on your workstation, and you want to run your frontend in your k8s cluster and have it talk to the MongoDB replica set. Yes, that's definitely possible. How is your k8s cluster provisioned? Can your k8s worker node ping the IP address of the virtual machine where MongoDB is running? If yes, then you can just point your frontend to the IP addresses of the MongoDB VMs. Or you might have to do some port forwarding between your workstation and the MongoDB VM, then use your workstation IP to access MongoDB. I might not have explained it very well, but it's doable. Cheers.
@Siva-ur4md · 5 years ago
Hi Venkat, thanks for your video. I have a doubt about PVs: if I have created a PV via a storage class (PVC → storage class → PV), is it possible to increase the PV size after creation?
@justmeandopensource · 5 years ago
Hi Siva, was it you or someone else? I was asked this same question recently. I haven't tested that. Actually, the other question was whether we would be able to use more than the allocated PV. For example, if we defined a PVC and got a PV of 100MB, can we use more than that? I am going to test these and will share the results. Thanks.
@justmeandopensource · 5 years ago
Hi Siva, I just had a try at this one. Interestingly, the NFS provisioner I used in this video (for bare metal) doesn't enforce strict volume sizes. I tried creating a 5MB PV and attaching it to a pod, and I was able to write more than 5MB: I created a file that was 100MB. So no strict size is enforced, which is a limitation. github.com/kubernetes-incubator/external-storage/issues/1142 If you use one of the cloud implementations like GCP Persistent Disk, AWS EBS or Azure Disk, then you will only get what you requested and won't be allowed to use more. From Kubernetes 1.11 onwards, you can resize a PV by updating your PVC; you don't have to delete and recreate the PV, it will be dynamically resized. However, the pod needs to be restarted to make use of the increased size. Shrinking volumes is not supported as yet. In the link below you will find some useful information, including the list of storage providers that support this resize feature. kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims Thanks.
@smrutimohanty1241 · 3 years ago
Hi Venkat, I followed the video and have set up EFS dynamic provisioning. I have created different pods/containers with the same image, but I want a different volume path for each pod. Please suggest how I can proceed.
@kunalbagwe6091 · 4 years ago
Hello, thanks for such a helpful video. I have a question about Helm charts and the RWX NFS provisioner PVC. The PVC is created by a subchart for the parent chart's deployment to use, but when performing "helm uninstall chart", the pod and the PVC get stuck in a Terminating state. Is there any way to specify configuration so that the pod and the PVC are deleted smoothly?
@davehouser1 · 3 years ago
Hi, when trying to create the PVC (4-pvc-nfs.yaml), it's stuck at "Pending". When I describe the PVC, I see "Warning ProvisioningFailed 6s (x2 over 8s) persistentvolume-controller no provisionable volume plugin matched". I am running 1.18.5 on Ubuntu 18.04. Do you know of a way to fix this?
@nguyentruong-po4mx · 4 years ago
If you create persistent volumes before the PVC, can dynamic NFS provisioning map the PVC to the PV created before?
@justmeandopensource · 4 years ago
Hi Nguyen, thanks for watching. I didn't get your question. Sorry.
@nguyentruong-po4mx · 4 years ago
@@justmeandopensource In this video, if I create a persistent volume (say, named pv1) before creating any PVC, will the nfs-client-provisioner pod dynamically bind a new persistent volume claim to pv1?
@justmeandopensource · 4 years ago
@@nguyentruong-po4mx So your question is: if you manually create a persistent volume named pv1 and then create a PVC, will the provisioner use pv1 or create a new persistent volume? Is that right? If you have an existing persistent volume that satisfies the persistent volume claim you created (for example, the storage size), then the existing PV will be used.
@shaikvali852 · 5 years ago
Hi Venkat, how do I change the default persistentVolumeReclaimPolicy behavior to Retain while using dynamic provisioning?
@justmeandopensource · 5 years ago
Hi Shaik, as the persistent volumes are created automatically when you request them by creating a PVC, you will have to update the reclaim policy once the PV is created.
$ kubectl get pvc
Look at your desired PVC and note the corresponding PV name. Then you can update the policy using the command below:
$ kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
I just brought up the cluster and tested this, and it worked fine. Once you apply this patch to the PV, it won't get deleted when you delete the associated PVC. You then have to delete the PV manually. Hope this makes sense. Thanks, Venkat
@kasimshaik · 5 years ago
@@justmeandopensource Thanks, Venkat. It worked for me too. Is this the same process for AWS EBS and Azure Disk?
@justmeandopensource · 5 years ago
Hi Kasim, if you are running your k8s cluster in the cloud (GCE, AWS, Azure), there are built-in dynamic storage provisioners for each of them. I made this video because the series I am doing is on bare metal, where we don't have any built-in solution for dynamic provisioning. You can check the link below; scroll down to the section "Types of Persistent Volumes". kubernetes.io/docs/concepts/storage/persistent-volumes/ Hope it makes sense. Thanks, Venkat
@kasimshaik · 5 years ago
@@justmeandopensource Thanks Venkat for the clarification. It makes sense.
@darkmagician666 · 4 years ago
Hi Venkat, I'm trying to run the nfs-client-provisioner in its own namespace. I got everything working in the default namespace after following your tutorial, then: deleted the resources in rbac.yaml, class.yaml and deployment.yaml; created a new namespace called storage; created a new context with cluster=kubernetes, user=kubernetes-admin, namespace=storage and switched to it; and created the resources in the yamls again. But now PVCs are pending forever. Am I missing something that needs to be done to get this running in another namespace? EDIT: Ah, I figured it out. "namespace: default" is written in the clusterrolebinding and rolebinding resources. Just changed those and it worked :)
@ratnakarreddy1627 · 5 years ago
Hello Venkat, if we make this storage class the default, is it still required to set "storageClassName" in the PVC yaml during creation?
@justmeandopensource · 5 years ago
Hi Ratnakar, thanks for watching this video. In this video I didn't talk about default storage classes, but later realized that I should have. I have since added another yaml file named default-sc.yaml in the same directory in the GitHub repo. It has the annotation that makes it a default storage class. So please use default-sc.yaml instead of sc.yaml; then you don't have to mention the storage class name in your PVC definition. Thanks.
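A sketch of what such a default storage class looks like (the class and provisioner names follow this video's examples; the annotation is the standard Kubernetes one). Note that the provisioner field is just a string that must match the PROVISIONER_NAME environment variable the provisioner pod was deployed with:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    # Marks this class as the cluster-wide default; PVCs that omit
    # storageClassName will be bound against it.
    storageclass.kubernetes.io/is-default-class: "true"
# Must match PROVISIONER_NAME in the nfs-provisioner deployment.
provisioner: example.com/nfs
reclaimPolicy: Delete
```

Only one storage class in a cluster should carry this annotation; with two defaults, PVC admission becomes ambiguous.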
@ratnakarreddy1627 · 5 years ago
Thank you Venkat.
@justmeandopensource · 5 years ago
@@ratnakarreddy1627 You are welcome. Cheers.
@irfanpratama9750 · 3 years ago
Hi, can you help me? I always get an error like this: "Output: mount.nfs: access denied by server while mounting" when the pod pulls the image from GitLab.
@justmeandopensource · 3 years ago
How did you export your NFS share? What's the content of your /etc/exports file?
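For comparison, a working export for this kind of setup typically looks like the following (the path is a placeholder, and the wildcard client list should be restricted in production; no_root_squash is commonly needed because the provisioner creates per-PV subdirectories as root):

```shell
# On the NFS server: append a hypothetical export and activate it.
echo '/srv/nfs/kubedata  *(rw,sync,no_subtree_check,no_root_squash)' | sudo tee -a /etc/exports
sudo exportfs -rav    # re-export all shares and list the result
```

"access denied by server" usually means the client's IP doesn't match the export's client list, or the exported path doesn't exist on the server.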
@nah0221 · 4 years ago
You are a genuine man!
@justmeandopensource · 4 years ago
😄
@subhashs2586 · 5 years ago
Anna (brother), all your sessions are really good... God bless you :)
@justmeandopensource · 5 years ago
Magizhchi (delighted) and nandri (thanks) for watching my videos.
@NitinSharma-xv9od · 3 years ago
Sir, when I create the nfs-client-provisioner pod it shows a CrashLoopBackOff error, and the logs show: "Error getting server version: Get 10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: i/o timeout". Please give me a suggestion.
@justmeandopensource · 3 years ago
Hi Nitin, thanks for watching. I did an updated video recently on this topic which might be of some help. kzbin.info/www/bejne/eneWp2WGbaqBe8k
@sarfarazshaikh · 5 years ago
Hi sir, I have one question regarding AWS EFS. I have a Docker Magento image that contains all the installation files and folders inside the /var/www/html directory, but when I mount the EFS PV claim at /var/www/html, the data inside html is not shown; the directory becomes empty. I want the data that is already inside html in my Docker image to remain after mounting EFS; otherwise I won't be able to do the installation.
@justmeandopensource · 5 years ago
Hi Sarfaraz, thanks for watching this video. So you have some data in /var/www/html in your Docker image. The basic Unix/Linux behaviour is that whenever you mount something onto a directory, the underlying data in the original directory is hidden. That explains what you're seeing. You can mount your AWS EFS PV at a different location inside the container; there is no way to retain access to the original data after mounting over the same directory. Thanks.
@sarfarazshaikh · 5 years ago
@@justmeandopensource Can I mount EFS directly on the worker node via fstab and then mount the container volume /var/www/html as a hostPath? Will it retain the data then?
@justmeandopensource · 5 years ago
Why do you have data in the container image? Why don't you copy all the data to the EFS and just mount it as a PV? Thanks.
@sarfarazshaikh · 5 years ago
@@justmeandopensource I am working on a product. I want to have a base image ready for a Magento store. Whenever a user signs up with their name, a new Magento store is created at mytest.example.com. That's why I want the base image ready, so that we only have to make changes in the database. I am using RDS for the database and EFS for persistent storage.
@justmeandopensource · 5 years ago
@@sarfarazshaikh I understand. But when you mount the persistent volume on /var/www/html, the data already there will not be accessible. So you will have to mount EFS under a different directory like /var/www/data and change the logic of your web application to use that directory as the data directory, or something like that. Thanks.
@WojciechMarusiak · 4 years ago
Thanks for the great video. I have the following questions: should we have more than one replica of the nfs-client-provisioner? Let's say I have Prometheus and Grafana up and running and I need the data to be saved no matter what. We have access mode ReadWriteMany; how does this work with workloads of more than one replica that store data on a PV?
@p55pp55p · 5 years ago
Nice video again, Venkat. Thanks for that. I have a question. My use case is to have one shared NFS volume, so every new pod will claim the same persistent volume. Can you advise how to achieve something like this? Your tutorial works great for the situation where a new pod comes and its PVC creates a PV, but here I would need to always attach each pod to the same volume.
@justmeandopensource · 5 years ago
Hi Peter, there is no guarantee that the same persistent volume will get mounted by the pod each time you delete and recreate it. Persistent volumes are released once you delete the associated pod (with its PVC), but are not available to the next pod; the persistent volume has to be manually deleted. With certain storage classes, if you are using a cloud provider, the persistent volumes get deleted automatically if the reclaim policy is set accordingly. To attach the PV to the same pod, I don't think there is any option other than to use a statefulset with one replica. So you will get one pod and one PV; every time you delete the pod in this statefulset, it will attach to the same PV. Thanks, Venkat
@p55pp55p 5 years ago
@@justmeandopensource Hi Venkat, it seems I found a solution for my case. I basically created one PV and one PVC (outside the statefulset). Then in the statefulset I removed the dynamic provisioning part (volumeClaimTemplates) and instead added "volumes" where I specified the PVC I created before. This allows me to create multiple replicas using the same PVC. I tested this solution and it gives me exactly what I need, so I now have 2 replicas accessing the same NFS mount. Thanks Peter
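Peter's approach can be sketched roughly like this - a standalone RWX claim referenced from the workload's volumes section instead of volumeClaimTemplates. The claim name, storage class, and size here are illustrative, not Peter's exact values:

```yaml
# Illustrative shared claim; all replicas reference it by name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes: ["ReadWriteMany"]        # required so multiple pods can mount it
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 1Gi
---
# In the statefulset/deployment pod template, instead of volumeClaimTemplates:
# volumes:
# - name: data
#   persistentVolumeClaim:
#     claimName: shared-data
```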
@justmeandopensource 5 years ago
@@p55pp55p Cool. Hope you also set the AccessModes to RWX (ReadWriteMany) in the pv and in the pvc so that the same volume can be mounted on multiple worker nodes with read/write permissions. Depends on your use case, but something to bear in mind. Thanks, Venkat
@p55pp55p 5 years ago
@@justmeandopensource Sure, access mode is RWX. I think it wouldn't allow me to do this if I kept it as RWO. Peter
@justmeandopensource 5 years ago
@@p55pp55p Yeah, it might allow you to mount but you won't be able to write to it. Not sure, just a guess.
@ahmadmiqdaadabdulaziz6163 4 years ago
Do we need to deploy the nfs-client-provisioner in each namespace for that namespace to be able to use the NFS service?
@justmeandopensource 4 years ago
Hi Ahmad, thanks for watching. It's enough to deploy it in one namespace and it can be used by pods cluster-wide. Cheers.
@xochilpili5480 2 years ago
Nice tutorial, but I'm facing a different error. It turns out that when I create the PersistentVolumeClaim, it creates the PVC and the PV, but on my NFS server there's no additional folder! Can you please help? Looking at the logs of the nfs-provisioner pod, the creation is there, but when I tried to test it with a busybox pod I got "mount failed: stat volumes/long-hash does not exist". Which is right, because in the NFS shared folder there's nothing.
@mohammadkhwaja4049 4 years ago
Firstly, thank you very much for making such wonderful videos that are simple to understand. I have a doubt about the RWX concept. When we say multiple reads and writes, are we referring to writing into the container when logged in from multiple nodes by multiple users? Is that what it refers to? Please clarify. Thank you.
@justmeandopensource 4 years ago
Hi Mohammad, thanks for watching. RWX mode means that you can mount that persistent volume on more than one container and all containers can write to that volume.
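As a sketch, a claim requesting that mode might look like the following; the storage class name matches the one created in the video, while the claim name and size are examples:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-rwx
spec:
  accessModes: ["ReadWriteMany"]   # volume mountable read/write by many pods, across nodes
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 500Mi
```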
@himanshujoshi2345 5 years ago
Awesome tutorial, keep up the good work.
@justmeandopensource 5 years ago
Hi Himanshu, thanks for watching this video.
@saicharana5361 4 years ago
Hi Venkat, while creating a file inside the pod I'm getting permission denied. /mydata is owned by nobody; while creating the NFS share I ran chown nfsnobody since it's CentOS. Could you please suggest a fix? Thanks in advance.
@varun898 5 years ago
Is dynamic provisioning of persistent volumes really necessary if my K8s cluster is hosted on a cloud provider?
@justmeandopensource 5 years ago
There is a difference though. Are you using one of cloud managed Kubernetes cluster like GKE, EKS or AKS? Then there is no need for dynamic nfs. You will have to create storage class though. But if you are not using managed service, instead launch instances in the cloud and install kubernetes yourself, then you will need this setup. Thanks.
@varun898 5 years ago
Thank you for getting back. I am using DigitalOcean's managed Kubernetes cluster.
@justmeandopensource 5 years ago
@@varun898 I am not sure, but they should have storage provisioning enabled. Cheers.
@nevink3123 5 years ago
Hi Venkat, Came across your channel while searching for PVC using NFS. All your videos are awesome and very detailed. I was trying out this, but my nfs-client-provisioner pod creation failed due to Error getting server version: Get 10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: i/o timeout. I followed your vagrant script to create the cluster. Any idea what could be the issue ? Any pointer would be really helpful.. Thanks in Advance.
@justmeandopensource 5 years ago
Hi Nevin, the nfs-client-provisioner pod usually fails if it has problems accessing the nfs server. Hope you have set up your nfs server. And did you verify that you can mount it from your worker nodes? Cheers.
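The manual check Venkat suggests might look like this on a worker node. The server IP and export path are examples taken from the video's setup, not necessarily your values:

```shell
# On the NFS server (or any host): confirm the export list
showmount -e 172.42.42.100        # example server IP

# On a worker node: try mounting the share by hand
sudo mount -t nfs 172.42.42.100:/srv/nfs/kubedata /mnt
df -h /mnt                        # verify it actually mounted
sudo umount /mnt
```

If this manual mount fails, the nfs-client-provisioner pod will fail for the same reason.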
@nevink3123 5 years ago
@@justmeandopensource Venkat, thanks a lot for taking the time to read and reply promptly. Yes, my NFS server is up and running and I am able to mount the volumes. FYI, I was able to fix the issue. I used a Vagrantfile somewhat similar to your GitHub one. One mistake I had in the script was that my API server advertise address and pod network CIDR were in the same IP range (192.168.56.XXX and 192.168.0.0/24 respectively). I read in one of the Google links that the IP ranges should be different, else they may conflict while using Calico. When I checked your script, I noticed your Vagrantfile has it right: --apiserver-advertise-address=172.42.42.100 --pod-network-cidr=192.168.0.0/16. Once that was fixed, my client provisioner pod started running and the PVC got bound. Thanks a lot once again. You publish lots of advanced topics which are not found anywhere else. Appreciate your effort. Keep up the great work.
@justmeandopensource 5 years ago
@@nevink3123 very glad that you got it resolved. Good job. Cheers.
@peterbratu 3 years ago
For newer k8s versions, please use the kubernetes-sigs/nfs-subdir-external-provisioner repo. Do not edit kube-apiserver.yaml
@justmeandopensource 3 years ago
Hi Peter, thanks for your comment. I already posted a follow up video using nfs-subdir-external-provisioner. kzbin.info/www/bejne/eneWp2WGbaqBe8k
@nachi160 3 years ago
Can we configure multiple NFS servers with one deployment?
@justmeandopensource 3 years ago
I don't think you can.
@nachi160 3 years ago
@@justmeandopensource Will it work with two deployments of the provisioner pod? Will try that.
@justmeandopensource 3 years ago
@@nachi160 Yes, I guess it will, exposed through 2 different storage classes.
@nithinbhardwajsridhar4018 4 years ago
Hi Venkat, Firstly thanks for the amazing tutorial!! I have a problem and would like some insight. I have created a Windows share, and I can mount it on one of my cluster workers and write data (with sudo), so there is no connectivity issue. I want to use this NFS share, and I have assigned all possible read+write access, including to Everyone, but every time I configure this the way you have done it I have issues creating the pvc and pv. I get an error stating "could not create default-*****-*******-**** directory: permission denied". Do you have any ideas on this? Thanks! Cheers, Nithin
@justmeandopensource 4 years ago
Hi Nithin, thanks for watching. I have only tried exporting NFS shares from a Linux machine. But that shouldn't stop you from using Windows machine for NFS sharing. You might have to change the directory permissions that you are sharing. I believe you have already done that. But just double check. Give everyone read/write permissions on that shared directory. In Linux, I used chmod 777 on the exported directory.
@nithinbhardwajsridhar4018 4 years ago
@@justmeandopensource Hi Venkat, Thanks for the time 😄
@justmeandopensource 4 years ago
@@nithinbhardwajsridhar4018 No worries. You are welcome.
@rajeshbastia8502 5 years ago
Hi Venkat, Please let me know where the IP (10.95.65.213) is coming from. Did you create a new network interface?
@justmeandopensource 5 years ago
Hi Rajesh, thanks for watching this video. Where did you see this IP address? That looks like an internal cluster IP, which is all managed by Kubernetes.
@akk2766 4 years ago
What I didn't get from this video is whether one can get data persistence from these dynamically provisioned "persistent" volume claims. I noticed that after you deleted the claims, the volumes also disappeared - I guess I'm struggling to understand where the "persistence" comes from since the data created in the "/mydata" mount point is now gone. Did I miss something?
@yuven437 5 years ago
If I am following along at home, should I change the provisioner in class.yaml? To NFS, maybe? Also, in deploy.yaml, should I use the path on the NFS server, or the path where the NFS share is mounted? E.g. I have /var/nfsshare on my NFS server, and /mnt/nfs/var/nfsshare on my nodes. Which one should I use?
@justmeandopensource 5 years ago
Hi Yuven, Firstly, thanks for watching this video. Query 1: Should I change the provisioner in class.yaml? In class.yaml, line 5, I have used "example.com/nfs" as the provisioner. In deployment.yaml, lines 22 and 23, I have specified the provisioner name environment variable. You have to make sure the provisioner name you give in deployment.yaml matches the one in class.yaml. It's just a name. You can have any name, but it needs to match in these two files. Query 2: Which path should I use in deployment.yaml? You should use whatever you exported in your NFS server's /etc/exports file. In your case, you should use /var/nfsshare. Hope this makes sense. If not, let me know. Thanks, Venkat
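The matching Venkat describes, side by side; the provisioner string and storage class name follow the video's manifests, the rest of the deployment is elided:

```yaml
# class.yaml - StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: example.com/nfs       # must match the env var below
---
# deployment.yaml - provisioner container env (excerpt)
# env:
# - name: PROVISIONER_NAME
#   value: example.com/nfs         # same string as in class.yaml
# - name: NFS_PATH
#   value: /var/nfsshare           # whatever /etc/exports shares
```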
@yuven437 5 years ago
@@justmeandopensource Thank you very much! You have been a great help to me. Good to know that the provisioner name is only a name. I figured out the path a bit after asking the question... by rewatching parts of your video ;) Keep up the amazing work! The world needs more people like you :)
@justmeandopensource 5 years ago
@@yuven437 You made my day. Cheers.
@yuven437 5 years ago
@@justmeandopensource eeh, this is getting embarrassing :'D I now get an error: MountVolume.SetUp failed for volume "pvc-427e53bf-70bb-11e9-8990-525400a513ae" : mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/9b02aec2-70be-11e9-8990-525400a513ae/volumes/kubernetes.io~nfs/pvc-427e53bf-70bb-11e9-8990-525400a513ae --scope -- mount -t nfs 11.0.0.75:/var/nfsshare/default-pvc3-pvc-427e53bf-70bb-11e9-8990-525400a513ae /var/lib/kubelet/pods/9b02aec2-70be-11e9-8990-525400a513ae/volumes/kubernetes.io~nfs/pvc-427e53bf-70bb-11e9-8990-525400a513ae Output: Running scope as unit: run-r68af7a0af3c3404eb50d1e9baf90632d.scope mount.nfs: mounting 11.0.0.75:/var/nfsshare/default-pvc3-pvc-427e53bf-70bb-11e9-8990-525400a513ae failed, reason given by server: No such file or directory When I deploy busybox, I notice that the PVC gets created, but it does not show up in the shared folder, even though I have checked that the worker nodes have access to the share (I created a sample file, and it works just fine). Any idea what is wrong? I am closing in on my deadlines and I am quite stressed.
@yuven437 5 years ago
In deployment.yaml I use:
  spec:
    containers:
      volumeMounts:
        mountpath: /persistentvolumes
      env:
        - name: PROVISIONER_NAME
          value: example.com/nfs
        - name: NFS_SERVER
          value: 11.0.0.75
        - name: NFS_PATH
          value: /var/nfsshare
    volumes:
      - name: nfs-client-root
        nfs:
          server: 11.0.0.75
          path: /var/nfsshare
I am guessing there is something wrong here? The path on my NFS server is /var/nfsshare and on my node it's /mnt/nfs/var/nfsshare. Should I make them the same?
@yijiang6037 4 years ago
I followed your instructions and created the PVC successfully, but it doesn't get bound to a PV; I'm not sure what happened. I suspected my NFS server had gone wrong, but I can mount the NFS server directory from my client successfully. Hoping for help from you. Very nice lecture.
@justmeandopensource 4 years ago
Hi Yi, thanks for watching. I believe I replied to this comment on another video.
@SelvanPonraj 5 years ago
Hi Venkat, I am trying this video with a Mac host, running a Vagrant k8s cluster.
Host machine: Mac. The NFS server is running.
/srv/nfs/kubedata permissions: drwxr-xr-x 3 nobody admin 102 1 Sep 11:57 /srv/nfs/kubedata
KWorker: [root@kworker2 ~]# showmount -e 192.168.68.XXX
Export list for 192.168.68.XXX: /srv/nfs/kubedata (everyone)
mount -t nfs 192.168.68.XXX:/srv/nfs/kubedata /mnt
mount.nfs: access denied by server while mounting 192.168.68.XXX:/srv/nfs/kubedata
Any clue what could be the issue? Thanks in advance.
@justmeandopensource 5 years ago
What options do you have in your NFS exports configuration? On my Linux server, I had to pass the "insecure" option as well. Could you try it with the insecure option? Thanks.
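For reference, an exports line with that option might look like the following; the path and option set are examples (the path matches the video's setup), not necessarily what Selvan has:

```text
# /etc/exports on the NFS server - run `exportfs -ra` after editing
/srv/nfs/kubedata  *(rw,sync,no_subtree_check,no_root_squash,insecure)
```

The "insecure" option allows client connections from ports above 1024, which some NFS clients (macOS in particular) use by default.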
@elabeddhahbi3301 3 years ago
I followed this video and it didn't work for me. I have 4 nodes (1 master and 3 workers), my distro is CentOS 7, I am using the Calico network, my firewall is disabled and SELinux is permissive (setenforce 0). I got this message from the describe command: wrong fs type, bad option, bad superblock on 192.168....., missing codepage or helper program, or other error (e.g. nfs, cifs) - you might need a /sbin/mount helper program
@adityajaiswal1580 4 years ago
Hi Venkat, as we know there are many volume plugins available. I want to ask: if we use gcePersistentDisk, we just need to create a storage class and PVCs get provisioned automatically when required, but with NFS we first need to create the NFS server, then the provisioner pod using a deployment, and then the storage class before PVCs can be used. Please help - am I right, and which one is better? 🙏
@atulbarge7445 5 years ago
Hi, I am facing an issue when creating the dynamic NFS provisioner. I am using all your files, that is rbac.yaml, class.yaml and deployment.yaml. Applying the rbac and class files works fine, and the deployment says created, but when you check with the "kubectl get all -o wide" command it shows the nfs container stuck in creating mode and it never comes up, giving this: [ pod/nfs-client-provisioner-7b94998b9-lpn6w 0/1 ContainerCreating 0 29s ]. Please help with this; I need to add it to my production.
@justmeandopensource 5 years ago
Hi Atul, thanks for watching. Did you verify that your nfs server is running and that you can manually mount it on the worker nodes? If you can't mount it manually on the worker nodes, the nfs-provisioner pods will not be ready. First thing is to check as shown in this video that you can mount the nfs share from your worker nodes. Then make sure the deployment.yaml has the right ip address. Also what version of Kubernetes are you running? Thanks.
@atulbarge7445 5 years ago
@@justmeandopensource I am using the 1.16 minor version, and all my workers are able to mount the NFS shared folder, but even trying with Helm and every other document I still get the same error - the container in a hung state. We have two masters and two workers; we are testing now, so please help me if possible, if you have any proper document, or if you want to connect remotely.
@justmeandopensource 5 years ago
@@atulbarge7445 I don't think I can help you remotely. Sorry about that. Look at the output of "kubectl describe deploy" and check the events section at the bottom. It might give you a clue.
@atulbarge7445 5 years ago
@@justmeandopensource Ok thanks, I will do that.
@justmeandopensource 5 years ago
@@atulbarge7445 cool.
@shubhadeepgoswami1633 4 years ago
Hi Venkat, thanks for creating such an informative video. I have a question: how do I dynamically mount a host directory into my NFS volume?
@TumenzulBatjargal 4 years ago
Cool explanation. What happens when the nfs provisioner pod is destroyed (and recreated)? Is my data back?
@justmeandopensource 4 years ago
Hi Tumenzul, thanks for watching. The actual NFS server which holds your data is external to your k8s cluster. Even if you delete your nfs-provisioner pod and your k8s cluster, the data will still be there on your NFS server. But depending on how you created your persistent volume, the volume might be deleted when the pvc or the pod is terminated. This is the expected behaviour.
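If you want released volumes kept rather than deleted, the behaviour Venkat mentions is controlled by the storage class's reclaim policy. A sketch, reusing the video's provisioner name with an assumed class name:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-retain       # assumed name for this variant
provisioner: example.com/nfs
reclaimPolicy: Retain            # keep the PV (and its data) after the claim is deleted;
                                 # dynamically provisioned PVs otherwise default to Delete
```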
@costalesea 4 years ago
Hi Venkat, I appreciate so much your effort and dedication in making these videos. A little question: suppose I have an existing NFS export with some static files, PDF files for example, that I need to mount in every replica of my app. How does that work with this provisioner if every pod of the deployment claims its own volume? I need the same data in every pod, you understand? Sorry if the question is not consistent. Thank you from Argentina!
@JonBrookes 5 years ago
Really good info. I wondered how a replica set or deployment with, say, 3 pods would make a PVC unique to each. Is there a hostname option that could be used in the YAML that creates the deployment?
@justmeandopensource 5 years ago
Hi Jon, It all depends on how we design the architecture. We first have to understand the application we are deploying and then plan the resources. It can be a statefulset where pvc gets bound to the same pv every time.
@rameshreddy908 5 years ago
Try statefulsets instead of deployments. They create different PVCs for every pod.
@seifeddinebarhoumi8445 4 years ago
Thanks for the great content
@justmeandopensource 4 years ago
Hi Barhoumi, thanks for watching.
@rudi.chan.78 5 years ago
Great channel, I learned a lot from your videos. Can I request a tutorial on how to set up dynamic and static GlusterFS persistent volumes? Thanks
@justmeandopensource 5 years ago
Hi Rudi, thanks for watching this video. I have lot of topics to cover in Kubernetes. I will definitely add this one as well. Cheers.
@rudi.chan.78 5 years ago
@@justmeandopensource Great sir, thank you sooo much..
@justmeandopensource 5 years ago
@@rudi.chan.78 You are welcome.
@realthought2262 4 years ago
Hello, hope you are doing good. I have a question regarding storage classes and PVCs. After watching this video I thought I would experiment on AWS with EBS as a volume, but I couldn't. I created the policy as given in the AWS documentation, then I created the storage class and PVC, but it was not creating the PV on its own. I read somewhere (or got confused with something else) that it's a restriction that the nodes the pods are running on have to be in the same cloud to use EBS. Any suggestions? Thx
@manikandans8808 5 years ago
When I try to change the value in the PVC from 500Mi to 1Gi, it shows this: persistentvolumeclaims "pvc1" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize. How can I increase the value?
@justmeandopensource 5 years ago
Hi Mani, This illustration I showed in this video is for dynamic provisioning and not dynamic resizing. As the error states, it is forbidden because the storage class we are using here which is NFS based doesn't support dynamic resizing. In order to use dynamic resizing feature, you will have to use one of the supported storage class (eg. AWS EBS, Google PersistentDisk, Azure disk or other cloud offerings). Most of my videos are around bare metal and not cloud. Thanks.
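For storage classes that do support resizing (cloud disks, not this NFS setup), expansion is opt-in on the storage class. A rough sketch using the in-tree EBS provisioner as an example:

```yaml
# Sketch for a cloud provisioner; the NFS class in this video does not support this.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable-ebs            # assumed name
provisioner: kubernetes.io/aws-ebs
allowVolumeExpansion: true       # lets you grow spec.resources.requests.storage
                                 # on a bound PVC (shrinking is not supported)
```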
@manikandans8808 5 years ago
@@justmeandopensource Thank you Venkat. Since I'm gonna use that with EBS, it's very interesting.
@justmeandopensource 5 years ago
@@manikandans8808 Check the below link. Might be useful. kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims Thanks
@jamesaker7048 3 years ago
This video will need to be revised for Kubernetes versions 1.20+ because the volume mechanism has been reworked in the later releases. If you are using the LXD provisioner Venkat was so kind to provide to set up your k8s, you need to change the script so 1.20.0-00 becomes 1.18.15-00. Venkat, if you disagree let me know.
@justmeandopensource 3 years ago
Thanks, you are right. I will redo this video soon for latest k8s version. Its hard to keep updating the video as the ecosystem evolves at a great speed. Cheers.
@johnhatami4752 4 years ago
Great video. Is there way to access the Azure Blob Storage via the Persistent Volume in AKS (Kubernetes)?
@justmeandopensource 4 years ago
Hi John, thanks for watching. I have no idea; I have never used Azure or AKS.
@PraveenYadav-kg1es 4 years ago
Instead of NFS, how can we use AWS EFS for PVCs?
@justmeandopensource 4 years ago
Hi Praveen, thanks for watching. If you have a cluster in AWS (like their managed EKS), it will be easier to use EBS or EFS as persistent storage. If you want to use it for your locally running k8s cluster, its still possible, but I haven't tried. When I get some time I will give it a try. Cheers.
@sunnynehar 5 years ago
Hi Venkat, how do I do this on a Mac? Any idea?
@justmeandopensource 5 years ago
Hi Nehar, I haven't used a Mac in years, but the process of exporting a directory through NFS shares should be simple. www.peachpit.com/articles/article.aspx?p=1412022&seqNum=11 Once you have NFS shares exported, you can proceed with dynamic nfs-client provisioning as shown in this video. Thanks
@viertekco 5 years ago
You're the bomb, V! Thanks man!
@justmeandopensource 5 years ago
Hi Zach, thanks for watching this video. Cheers.
@InspiringOrigins 4 years ago
Hi brother... your videos are so good and they're clearing up so many doubts. Could you please make some videos on common troubleshooting problems in Kubernetes? It would be so helpful for people like me trying to get a job in K8s.
@justmeandopensource 4 years ago
Hi Prabhu, thanks for your interest in this channel. I compile a list of topics based on requests from viewers and this has been requested by few others as well. Its in my list and I will look into making some video time permitting. Cheers.
@InspiringOrigins 4 years ago
@@justmeandopensource Thank you 😊
@justmeandopensource 4 years ago
@@InspiringOrigins You are welcome.
@hanvika-thebabybird3363 1 year ago
The GitHub link is not working; kindly provide the latest link.
@shashankgurujala 4 years ago
How can you set up the NFS server as master from your Kubernetes master? Where can I find the admin.config file which could set up NFS as master? Please guide me, thanks.
@Mr.RTaTaM 4 years ago
Hi Sir, I am running the NFS server on an AWS EC2 machine and followed your steps. When I create the PVC it shows the status as Pending. What should I do? What am I missing? Pls suggest
@Mr.RTaTaM 4 years ago
Sir pls help me
@justmeandopensource 4 years ago
@@Mr.RTaTaM Thanks for watching. As shown in this video, did you check that you can manually mount the NFS share from your worker nodes? If not, please do that first. And also see if you have to update security groups to allow this traffic.
@Mr.RTaTaM 4 years ago
My cluster is running on my local laptop and I created the NFS server on AWS. I'm able to mount it from my worker nodes, but when I create the PVC it stays in the Pending state, saying "waiting for a volume to be created, either by external provisioner example.com/nfs or manually created by system administrator"!! Anything I'm missing, sir?
@justmeandopensource 4 years ago
@@Mr.RTaTaM So if you can mount it from your worker nodes, then I don't think there is a problem with the setup. If you used my manifests, you would have got a storage class named managed-nfs-storage. And you will have to use the same storage class in your PVC. Also you can check the events. For example, kubectl get events. This will show you why the pvc is pending.
@prasadreddy2008 5 years ago
super session bro
@justmeandopensource 5 years ago
Hi Prasad, thanks for watching.
@PraveenYadav-kg1es 4 years ago
Can we ingest Kubernetes logs into AWS Elasticsearch directly?
@justmeandopensource 4 years ago
Hi Praveen, yes you can. All you need is a reachable elasticsearch endpoint from your k8s cluster. You can use fluentd or any log shipper to send logs to Amazon elasticsearch service. I haven't tried it. But when I try it, I will make a video.
@SivaKumar-og9pb 5 years ago
Thanks for this video. I followed the same steps but my pod keeps restarting: "Back-off restarting failed container". Please help me to resolve this.
@justmeandopensource 5 years ago
Hi Siva, thanks for watching. I have been successfully using this process for a very long time on a daily basis. Can you first make sure that you can mount the NFS volume from the worker node?
@SivaKumar-og9pb 5 years ago
@@justmeandopensource Hi bro, it's mounted but I'm still getting the same issue. Please help with this.
@justmeandopensource 5 years ago
Hi Siva, I don't think it's a problem with your dynamic PV provisioning. If it was a PV provisioning problem, then your pod would be in the Pending state and not in a failed back-off state. Look at the events immediately after you deploy the resource. $ kubectl get events
@SivaKumar-og9pb 5 years ago
@@justmeandopensource Hi bro, thanks. I recreated NFS - there was some network issue. Now it's working fine. Thank you so much; your videos are helping me a lot.
@justmeandopensource 5 years ago
@@SivaKumar-og9pb Perfect.
@radiantmind1079 1 year ago
that was really great
@ricardinhosmorais 5 years ago
Wouldn't it be better if the nfs client provisioner was a daemonset? Congrats on the tutorial. Pretty good!
@justmeandopensource 5 years ago
No harm in deploying it as a daemonset. Helm charts are configured for deployments with configurable replica counts. github.com/helm/charts/tree/master/stable/nfs-client-provisioner
@benharathadel8185 5 years ago
The pod is stuck in the ContainerCreating status. What's the problem?
@justmeandopensource 5 years ago
Hi Benharath, Thanks for watching this video. Which pod is stuck at that stage? Is it one of the pods during the NFS provisioner deployment, or a pod with a persistent volume you are testing after creating the NFS provisioner? Thanks, Venkat
@benharathadel8185 5 years ago
@@justmeandopensource Yes, it's one of the pods during the NFS provisioner deployment.
@justmeandopensource 5 years ago
You could check the events from that deployment, which would tell you what stage it is in and any possible errors. Run the below command and, towards the bottom, see if there are any clues: $ kubectl describe deployment Thanks
@benharathadel8185 5 years ago
@@justmeandopensource I got this: Warning FailedCreatePodSandBox 25s kubelet, nfs-client Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "419e367daaae5f57f1744a0b86e09c28e94544275bcdaf64efe0b8d2af079f52" network for pod "nfs-client-provisioner-c84f69c7c-mvjpx": NetworkPlugin cni failed to set up pod "nfs-client-provisioner-c84f69c7c-mvjpx_default" network: unable to allocate IP address: Post 127.0.0.1:6784/ip/419e367daaae5f57f1744a0b86e09c28e94544275bcdaf64efe0b8d2af079f52: dial tcp 127.0.0.1:6784: connect: connection refused
@justmeandopensource 5 years ago
Hi Benharath, Looking at the errors you posted, it seems there is some network problem. Forget about this dynamic nfs provisioning setup. Were you able to set up the cluster successfully? Could you create a simple pod like below? $ kubectl run myshell -it --rm --image busybox -- sh It will download the busybox container, start a pod and give you a prompt. Check if you can ping the internet (eg: google.com) or $ kubectl run nginx --image nginx I am trying to find out whether you have a general cluster networking issue or something specific to the dynamic nfs provisioning deployment. Thanks, Venkat
@nah0221 4 years ago
what about aws EFS ?!
@justmeandopensource 4 years ago
Yes, we can use EFS for dynamic provisioning. I haven't done any video on that; probably I will do it at some point. Cheers.
@domgo241 5 years ago
unbelievable!!!
@justmeandopensource 5 years ago
Hi Dom, thanks for watching this video.
@vamseenath1 4 years ago
Hi Venkat, I am trying to expand the PVC online, but it is not working... any idea? I was able to edit the PV for online expansion and it got expanded from 5GB to 50GB, but the PVC is not responding at all. Thank you!
--------------------------------------------------------------------------------------------------------------------------------------------------
root@ubuntu:/K8/nfs-storage-provision# k get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-storage dynamic/nfs Delete Immediate true 7m46s
--------------------------------------------------------------------------------------------------------------------------------------------------
root@ubuntu:/K8/nfs-storage-provision# k get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-12a822b0-ce75-47fe-8255-ce24ff9b30b5 50Gi RWX Delete Bound default/pvc-nfs-pv2 managed-nfs-storage 4m43s
--------------------------------------------------------------------------------------------------------------------------------------------------
root@ubuntu:/K8/nfs-storage-provision# k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-nfs-pv2 Bound pvc-12a822b0-ce75-47fe-8255-ce24ff9b30b5 5Gi RWX managed-nfs-storage 5m1s
root@ubuntu:/K8/nfs-storage-provision#
@devmrtcbk 4 years ago
Thanks
@justmeandopensource 4 years ago
Hi Murat, thanks for watching.
@arjunsbabu8712 4 years ago
Thankyou bro
@justmeandopensource 4 years ago
Hi Arjun, thanks for watching. Cheers.
@rajeshbastia8502 5 years ago
Hi Venkat, Please let me know the root password.
@justmeandopensource 5 years ago
kubeadmin
@varun898 5 years ago
Came across your channel when I was trying to understand MongoDB replica sets. Really appreciate your work and I'm learning a lot from your channel.
@justmeandopensource 5 years ago
Hi Varun, thanks for watching my videos and taking time to comment. Cheers.
@IshanRakitha
@IshanRakitha 2 years ago
Hi Venkat, my "nfs-client-provisioner" is up and running, but the PVC is stuck in "Pending" state with the following message: "waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator". The storage class is also visible as "managed-nfs-storage (default)". Please advise. Thank you very much.
@justmeandopensource
@justmeandopensource 2 years ago
Hi Ishan, I can see your storage class managed-nfs-storage is the default storage class, which is fine. I believe there is some mismatch between what the storage class can offer and what you have requested in your PVC claim. Let me give you an example: you might have configured the storage class to offer only RWO access mode while asking for RWX (ReadWriteMany) in your claim. Something like that. You can also check the logs of the nfs-client-provisioner pod, which will give you a more meaningful error if there was one.
@IshanRakitha
@IshanRakitha 2 years ago
@@justmeandopensource thank you Venkat. It’s working now. Really appreciate your lessons. Keep up the good work.
@justmeandopensource
@justmeandopensource 2 years ago
@@IshanRakitha Glad to hear that.
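For anyone else hitting a Pending PVC with this setup, the checks discussed in this thread boil down to a few commands. A sketch, assuming the provisioner was deployed with the usual app=nfs-client-provisioner label from the upstream manifests; substitute your own claim name:

```shell
# The Events section at the bottom explains why binding is stuck
kubectl describe pvc <pvc-name>

# Confirm the class exists and its provisioner name matches your deployment
kubectl get storageclass managed-nfs-storage -o yaml

# The provisioner pod's logs usually name the real error
kubectl get pods -l app=nfs-client-provisioner
kubectl logs -l app=nfs-client-provisioner
```

If the describe output mentions "waiting for a volume to be created", the external provisioner has not answered the claim, so the pod status and logs are the place to look next.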