[ Kube 21 ] How to use Statefulsets in Kubernetes Cluster

29,802 views

Just me and Opensource

A day ago

In this video, I will explain the StatefulSet resource and how to deploy one in your Kubernetes cluster. I will be using NFS for Persistent Volumes.
Github: github.com/justmeandopensourc...
For any questions/issues/feedback, please leave me a comment.
If you find this video useful, please share it with your friends and subscribe to my channel for more videos.
If you wish to support me:
www.paypal.com/cgi-bin/webscr...
Thanks for watching this video.
Venkat
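For reference, the kind of StatefulSet demonstrated in the video can be sketched roughly as follows (a minimal sketch; the names, image, and storage size are illustrative, not taken verbatim from the video):

```yaml
# Headless service that gives each StatefulSet pod a stable DNS identity
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None          # headless: no virtual IP
  selector:
    app: nginx
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx       # must reference the headless service above
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: www
              mountPath: /var/www
  volumeClaimTemplates:    # one PVC per pod: www-web-0, www-web-1, www-web-2
    - metadata:
        name: www
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Each pod gets its own PersistentVolumeClaim from the template, and pods are created in order as web-0, web-1, web-2.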

Comments: 118
@bhattacharyyapulak 3 years ago
Very well explained and demoed, thanks!
@justmeandopensource 3 years ago
Hi Pulak, thanks for watching. Cheers.
@raghumca 2 years ago
Thank you sir. I really love your videos...
@justmeandopensource 2 years ago
Thanks 🙏
@banban82 5 years ago
A very nice video, thanks Venkat
@justmeandopensource 5 years ago
Thanks for watching this video.
@akhileshaggarwal123 4 years ago
Super cool explanation
@justmeandopensource 4 years ago
Hi Akhilesh, thanks for watching.
@RakeshWaghela 4 years ago
Awesome video. You earned a subscriber 🙏
@justmeandopensource 4 years ago
Hi Rakesh, thanks for watching this video and subscribing to my channel. Cheers.
@davidc24994 3 years ago
Very good explanation, thanks :)
@justmeandopensource 3 years ago
Hi David, thanks for watching.
@sandeepc3170 4 years ago
Excellent, kudos for the explanation
@justmeandopensource 4 years ago
Hi Sandeep, thanks for watching. Cheers.
@PanayiotisSavva 4 years ago
Awesome video dude.
@justmeandopensource 4 years ago
Hi, thanks for watching. Cheers.
@husseinrefaat2530 3 years ago
Very good video, subscribed :D
@justmeandopensource 3 years ago
Hi Hussein, many thanks for watching and subscribing. Cheers.
@dineshj1440 4 years ago
Thank you Venkat, this video is very useful for me and I am going through all of your Kubernetes videos. Can you please also share the steps for setting up the zsh console you are using?
@justmeandopensource 4 years ago
Hi Dinesh, thanks for watching this video. As many people have asked me about this, I recorded a video a long time ago and you can watch it at the below link. Cheers. kzbin.info/www/bejne/qaCkqIinZ8iEfrM
@richardwang3438 4 years ago
Nice video, subscribed
@justmeandopensource 4 years ago
Hi Richard, thanks for watching.
@ankurtomar3759 2 years ago
Good explanation, but I was expecting you would show exactly how data replication works between master and slave SQL nodes using a statefulset and NFS.
@kunchalavikram 3 years ago
Hello sir, great video as always. I have a question regarding databases in k8s. If I have to deploy 2 SQL DBs using statefulsets, how do I make sure that each DB stores the same data? That is, if one DB service writes data, how does the other DB sync it?
@Sudheer_maneesha7 4 years ago
Nice explanation. I have a small doubt: for stateful applications we use one dedicated PV for each replica of a pod. Do we use dedicated PVs for each replica even if we deploy the application with a Deployment object?
@nohandsignal A year ago
Thanks for the video. One small question: in a Kafka situation with 3 pods in an STS, we use a statefulset and a headless service, where there won't be any clusterIP for the service. Requests go to individual IPs, but will they go to all the pods, or will one pod in the series of three end up taking all the requests? Could you kindly reply, thank you.
@johnfinny100 4 years ago
Hi Venkat, I would like to know about the dynamic provisioning method on a cloud provider such as AWS for statefulsets in a Kubernetes cluster. It would also be helpful to have a video on it. Thanks.
@jamhulk 3 years ago
Hi, didn't you mount the NFS volume on all the nodes? As far as I can see, you only mounted it on worker1. Thank you
@MOMENTSTVvn A year ago
Hi Venkat, do you have any video that covers how to avoid downtime when deleting a StatefulSet pod? I have a MongoDB setup with a Primary and Secondary. I want a rolling strategy using the OnDelete feature, and I need it to prevent downtime when we delete a pod. Thanks
@susheelkumarv 5 years ago
Very informative video. Your Manjaro Linux environment looks great. What's the GNOME utility/application showing the system configuration (host, network, CPU, memory details) on your desktop?
@justmeandopensource 5 years ago
Hi Susheel, thanks for watching this video. The configuration you see on the right of my desktop is conky. You can install conky and you can find loads of configuration files (.conkyrc) online. Thanks.
@AmreshKumar-jk6sx 4 years ago
Awesome bro, I like the way you are using Vagrant. Great, please make more videos on RBAC
@justmeandopensource 4 years ago
Hi Amresh, thanks for watching this video and taking time to comment. Cheers.
@Siva-ur4md 5 years ago
Hi Venkat, nice video. I have a few doubts about statefulsets: how does replication happen between the pods, and how does load balancing work in a statefulset?
@justmeandopensource 5 years ago
Hi Siva, thanks for watching this video. I am not entirely sure what you mean by "how the replication happens between the pods". A StatefulSet, like a Deployment resource, makes sure there are a set number of pods in the cluster, as specified in the yaml file with the "replicas" option. A replicaset will be created which maintains that number of pods. Each pod is a unit of deployment, and they can share the same storage using persistent volumes.

If you mean application-level replication, like MySQL replication, that has nothing to do with Kubernetes; it is application configuration that coordinates between the different instances.

Load balancing works the same way whether it's a StatefulSet or a Deployment. It all depends on what you set "ClusterIP" to when exposing the deployment/statefulset as a service. Thanks, Venkat
@samsulhaque8064 6 months ago
Hi Sir, I have a question about stateful applications. Let's say I have a PostgreSQL sharded cluster in my Kubernetes cluster with 3 replicas in a StatefulSet and a storage class.

Case 1: If the replica count increases from 3 to 4, one PV is attached to the 4th pod dynamically, some data gets stored in the 4th member's PV, and all is ok.

Case 2: When scaling down from 4 to 3, the 4th pod goes down but its PV remains, and that data becomes inaccessible until the replica count is scaled up again and the PV can be accessed.
a. While that PV is inaccessible, can any data inconsistency happen?
b. If inconsistency happens, how do we redistribute the data from the 4th PV to the other PVs?
c. What actually happens to that orphan PV when scaling down a stateful application?
@ravik8657 4 years ago
Nice video Venkat. I have a quick question: can I use a statefulset for an application with a single pod and manage the provisioning of its persistent volume?
@justmeandopensource 4 years ago
Hi Ravi, thanks for watching. There is nothing stopping you from creating a statefulset that has just one replica, but why would you want to? It's best suited for something that is deployed as a cluster, like a database cluster, where each node needs a consistent way to talk to the other nodes. If you have just 1 replica, then you can just use a deployment, can't you? Cheers.
@FreshersKosam 4 years ago
Hi Venkat, thanks for this. Can you please give one real-world use case for going with a statefulset?
@justmeandopensource 4 years ago
Hi Sivakumar, thanks for watching. You first need to understand the statefulset concept and why it was introduced and what benefits it offers over deployment. The main thing about statefulset is the consistent and unique network naming of the pods which is required in cases like deploying a database cluster where each node needs an identity.
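That naming guarantee can be sketched as follows (the "mysql" names and the default namespace here are assumptions for illustration):

```yaml
# Pods of a StatefulSet named "mysql" with serviceName "mysql" are created
# in order as mysql-0, mysql-1, mysql-2, and each is reachable at a stable
# per-pod DNS name of the form:
#
#   <pod-name>.<service-name>.<namespace>.svc.cluster.local
#   e.g. mysql-0.mysql.default.svc.cluster.local
#
# A Deployment's pods, by contrast, get random name suffixes and no
# stable per-pod DNS entry.
```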
@rsrini7 5 years ago
Thanks Venkat, very useful. Could you please make a video about k9s with your vagrant cluster setup? I am not sure how to specify the vagrant cluster details in .k9s\config.xml
@justmeandopensource 5 years ago
Hi Srini, thanks for watching this video. I will try that and let you know. Cheers
@rsrini7 5 years ago
@@justmeandopensource Thanks Venkat. I saw a blog. A video about the tools with a couple of examples would help. I've already watched your helm video and it's wonderful. blog.hasura.io/draft-vs-gitkube-vs-helm-vs-ksonnet-vs-metaparticle-vs-skaffold-f5aa9561f948/ Also, looking for k3s.
@justmeandopensource 5 years ago
@@rsrini7 Thanks for the link. All the tools look interesting. I have added them to my list and will give them a try when I get some time. At the moment I'm struggling to find time between the Kubernetes, AWS and MongoDB series, but keep posting topics and I will get to them eventually. Cheers.
@avikaggarwal4403 5 years ago
Hi Venkat.. great one. Just a couple of questions! (I will be using local persistent volumes, like local disk, etc.) When we have replicas > 1, a new PersistentVolume is created for each PVC, right? So, for example, for 3 pods each with a 1GB PVC, 3 PersistentVolumes will be created, one on each node. But all the pods can be referenced using the headless service's name, so what strategy does K8s follow to decide which pod the traffic should be routed to? Also, as 3 different storage volumes were created, doesn't this create a problem? Data may have been stored in the PV of pod-1, but the next request to get that data could be routed to pod-2, which won't return anything. Can you please explain a bit about this scenario?
@justmeandopensource 5 years ago
Hi Avik, Thanks for watching this video. It largely depends on what type of application you are trying to deploy using a statefulset. Usually, for a web application with a backend database tier, the front-end web tier will be a deployment; it doesn't have to be a statefulset. The back-end database tier will be a statefulset.

The service that we create when deploying a statefulset is just for the statefulset pods to get unique identities and consistent network resolution. If you want to access the backend from outside the cluster, you will have to create a separate service for that.

The database tier in a statefulset can be designed in many ways. The database setup can be a cluster, so the cluster takes care of replication between the different persistent volumes. Or you can have one persistent volume mounted read/write from all the pods. So there is no single solution. Thanks, Venkat
@avikaggarwal4403 5 years ago
@@justmeandopensource Right, I was thinking along the same lines. I will have the front end and back end deployed as services and the database as a statefulset, with all pods having access to the same PV. Maybe sometime later, if required, I will look into setting up a cluster of the database. Thanks
@avikaggarwal4403 5 years ago
@@justmeandopensource I actually had one question on this strategy too. Suppose I need to increase the size of the PV being used. In the ReadWriteOnce case I can just update the PV and restart the pod, but with ReadWriteMany will I have to delete all the pods so that they are restarted with the increased PV?
@justmeandopensource 5 years ago
Hi Avik, This is a good scenario and I haven't tried that: expanding the persistent volume. The below link talks about it; see if that helps. kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/

Volume expansion seems to be supported only for certain volume providers, and I don't see the NFS volume provider mentioned in that link. So it depends on which type of persistent volume you are using. If you are using one of the cloud volumes, you should be able to update the storage class to allow volume expansion.

Other links: stackoverflow.com/questions/40335179/can-a-persistent-volume-be-resized

Thanks, Venkat
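The storage-class route mentioned above can be sketched like this (the class name is illustrative, and the provisioner must be one that supports expansion; AWS EBS is used here as an example):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc                  # illustrative name
provisioner: kubernetes.io/aws-ebs    # example provisioner that supports expansion
allowVolumeExpansion: true
```

With a class like this, editing the PVC's spec.resources.requests.storage to a larger value triggers the expansion.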
@safsfsfdfgdg 4 years ago
What should the reclaim policy be if we want pods to be able to use data left behind by previous pods? For example, MySQL data that can be re-bound to the next MySQL instance. Is that Recycle?
@justmeandopensource 4 years ago
Hi Ashish, thanks for watching this video. Basically there are 3 types of reclaim policy:
- Retain
- Recycle
- Delete
You can find more details about each of these policies in the below doc. kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming

What you were asking can't be achieved straightforwardly. When you set the reclaim policy to Retain, the PV is retained and won't be deleted when you delete the pod; its status changes to Released. But it won't be available to any other pods; you have to copy the data manually if you want it. Check my simple testing in the below link, where I used hostPath with Retain as the reclaim policy. pastebin.com/raw/sig4ZU2g Thanks.
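The reclaim policy is a field on the PersistentVolume itself (for dynamically provisioned volumes it is inherited from the StorageClass); a minimal fragment:

```yaml
# Fragment of a PersistentVolume spec; only the reclaim policy line matters here.
spec:
  persistentVolumeReclaimPolicy: Retain   # or Recycle / Delete
```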
@vengamnaidu9273 4 years ago
How do I set up Camunda BPM (Zeebe) in k8s and how does it work? And can you briefly cover the use of HAProxy with statefulsets?
@justmeandopensource 4 years ago
Hi Vengam, thanks for watching this video. I don't have any experience using Camunda. I guess Camunda is a web application that can be run as a docker container, and it relies on a database, possibly Postgres. If an application can be containerized, it can be run on Kubernetes. You can check the below two links to get started. github.com/camunda/docker-camunda-bpm-platform blog.camunda.com/post/2019/06/camunda-bpm-on-kubernetes/ Thanks.
@basireddy5409 3 years ago
Thank you Venkat, I really appreciate the way you are putting things together in these sessions. I have a quick question on taking a backup of and restoring etcd. I'm able to create the backup of etcd from kmaster (control plane):

root@kubemaster:/home/vagrant# ETCDCTL_API=3 etcdctl --endpoints=192.168.56.2:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /tmp/etcd-snapshot.db
Snapshot saved at /tmp/etcd-snapshot.db

But when I wanted to do it from my machine, the cert and key paths are not visible:

root@ubuntu-Inspiron-5584:~/.kube# ETCDCTL_API=3 etcdctl --endpoints=192.168.56.2:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /tmp/etcd-snapshot.db
Error: open /etc/kubernetes/pki/etcd/server.crt: no such file or directory

To fix this, I copied my public key to kmaster, copied all the required files to my machine, and created the snapshot:

root@ubuntu-Inspiron-5584:~/.kube# scp root@192.168.56.2:/etc/kubernetes/pki/etcd/server.crt .
root@ubuntu-Inspiron-5584:~/.kube# scp root@192.168.56.2:/etc/kubernetes/pki/etcd/server.key .
root@ubuntu-Inspiron-5584:~/.kube# scp root@192.168.56.2:/etc/kubernetes/pki/etcd/ca.crt .
root@ubuntu-Inspiron-5584:~/.kube# ETCDCTL_API=3 etcdctl --endpoints=192.168.56.2:2379 --cacert=ca.crt --cert=server.crt --key=server.key snapshot save /tmp/etcd-snapshot.db
Snapshot saved at /tmp/etcd-snapshot.db

Is there any better way to do this?
@andbuitra 4 years ago
Hello! Quick question: what happens if a pod gets rescheduled and the PV was local? For example, in your case: if web-1 (on node1) gets rescheduled on node2 but the PV was *locally* on node1, what happens then? I know that when using local PVs the pod gets scheduled to the same node when talking about Deployments. Does this happen as well with StatefulSets? In that case the pod would only be rescheduled when the node becomes available again, right? Thank you! Really good content. I have subscribed now.
@justmeandopensource 4 years ago
Hi Andres, thanks for watching. I actually used NFS for dynamic volume provisioning. In statefulsets, when a pod gets rescheduled, it will still attach to the same PV. It won't work very well if you use local volumes that are tied to a particular node.
@naughtyboygamesnbg7771 4 years ago
Can I access every pod that gets created with its own Service? Or is it similar to ReplicaSets, where the pods are only there in case any pod crashes? I have some issues understanding StatefulSets. To me it looks like a ReplicaSet but with a separate volume and volume claim for each pod. Is that right? Thanks for the answer.
@justmeandopensource 4 years ago
Hi, thanks for watching this video. A StatefulSet is different in certain ways from a ReplicaSet. You can find more information at kubernetes.io/docs/concepts/workloads/controllers/statefulset/. You can create an individual service for each pod in a StatefulSet.

When your application is deployed as a StatefulSet, each pod gets a unique identity, and when you attach persistent volumes to your pods, they too get a unique identity. So when a pod crashes, it gets re-created with the same name and attached to the same persistent volume. Scaling up or down follows a strict order, and pods and services get a unique, ordered naming convention. You don't get these features in a ReplicaSet.

StatefulSets are suited for specific applications. Some examples are MySQL clusters, Postgres clusters, MongoDB clusters, or any application that is deployed as a cluster where each pod needs to talk to the other pods reliably by name. Hope this answers your question. Thanks.
@romanvolovyk968 3 years ago
Should have done this earlier
@ravipacc 4 years ago
Thanks Venkat for the wonderful videos. I have a few questions:
1. When do we use a statefulset with a PVC versus a deployment with a PVC? Do you have some use cases?
2. When should we use ReadWriteOnce or ReadWriteMany in a PVC for statefulsets?
3. I need to set up a database (PostgreSQL) in production. Should I use statefulsets? What is the best approach?
@justmeandopensource 4 years ago
Hi, thanks for watching. Whether to use a statefulset or a deployment is a choice you have to make based on the application you are deploying in the cluster. Please have a read of the documentation about statefulsets below, which will hopefully clear all your doubts. kubernetes.io/docs/concepts/workloads/controllers/statefulset/

If you are deploying an application in a clustered fashion that relies on each member being uniquely identifiable, then you can go for a statefulset. MySQL clusters, PostgreSQL clusters and MongoDB clusters are best deployed as statefulsets. As for ReadWriteOnce and ReadWriteMany: as the names imply, with ReadWriteOnce only one mount can write to the volume, whereas with ReadWriteMany all the pods that mount the volume can write to it. Again, this depends on the application you want to deploy.
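A PVC declares its access mode like this (a sketch; the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce   # mountable read-write by a single node
  # - ReadOnlyMany    # mountable read-only by many nodes
  # - ReadWriteMany   # mountable read-write by many nodes (NFS supports this)
  resources:
    requests:
      storage: 1Gi
```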
@kartikmoolya6994 4 years ago
I've got a slightly lengthy question here. We created 4 PVs mounted to 4 different pods. Now, if I do the same for, let's say, a Jenkins stateful deployment, there would be an inconsistency, as each pod has different, independent storage (the data in the pods' PVs isn't the same). So if my Jenkins job gets created on pod1, with data stored in pv1, then the same job would not be accessible when traffic comes to pod2, because the data isn't available in pv2. Can you help me with this please, Venkat?
@justmeandopensource 4 years ago
Hi Kartik, I've got a slightly lengthy answer here. In this video, I used helm to deploy Jenkins in a k8s cluster, and helm deployed Jenkins as a deployment, not a statefulset. Jenkins is a special case: you don't want to deploy it as a statefulset. Jenkins doesn't have high availability built in, so if you deploy it as a statefulset with 2 or more replicas, each will have its own persistent volume and they will work as separate Jenkins servers.

Even when you deploy Jenkins as a deployment, make sure to set the number of replicas to 1. If you set 2 replicas pointing to the same persistent volume, it won't work as expected: even though the two pods share the same storage volume, changes won't take effect unless you restart Jenkins. For example, say you have two Jenkins pods, podA and podB, using the same persistent volume. You connect to Jenkins through podA and create a job; it writes to the persistent volume. Now go to the Jenkins dashboard through podB. Despite the pod using the same volume, you won't see that job. There is an option under Manage Jenkins to reload the configuration from disk: if something has changed on disk without the knowledge of the Jenkins frontend, you have to reload it.

So in Jenkins' case, you can't really deploy it as a statefulset. If you want high availability, you can create two separate deployments pointing to the same persistent volume, so if one goes down you can start using the other as the backup. Thanks.
@kartikmoolya6994 4 years ago
Awesome.. got it, thanks Venkat. You've been super helpful with your instant replies and help
@justmeandopensource 4 years ago
@@kartikmoolya6994 No worries. You are always welcome. Cheers.
@ashwathmendan732 4 years ago
Superb!!! In a production environment I assume the same persistent volume would be used for all nginx pods; if we use different persistent volumes, the document root/website contents will differ. So can I use the same PV for the WWW/document root of all nginx pods?
@justmeandopensource 4 years ago
Yes, that's right. Thanks for watching this video.
@AbhimanyuChoudhary15 4 years ago
@@justmeandopensource Hi Venkat, 1) For the same example in the video, how would we specify that all the pods should use the same/single PV? 2) Suppose we had only 1 PV: by default, do all 4 replicas use the same PV, or does only 1 pod use it while the other 3 pods are stuck in an error state looking for an available volume?
@justmeandopensource 4 years ago
@@AbhimanyuChoudhary15 Statefulsets are for a specific purpose. The basic idea is to have consistent individual pod replicas, each unique with its own data. For example, in a MySQL cluster, each node needs to have its own data directory; they don't share the same data directory. If you want to use the same persistent volume on multiple pods, then you will have to use deployments.
@AbhimanyuChoudhary15 4 years ago
@@justmeandopensource got it Venkat! Thank you for the information.
@justmeandopensource 4 years ago
@@AbhimanyuChoudhary15 You are welcome. Cheers.
@jamilshaikh07 2 years ago
Hi Venkat, great videos. Btw, quick question: I have a 3-node cluster, 1 master and 2 workers. My query is, how do I make a statefulset pod running on worker1 with 1 replica automatically move to the other node, worker2, when worker1 goes down/fails?
@justmeandopensource 2 years ago
Well, the 1 replica of the statefulset has to go to worker2 when worker1 is down. You only have 2 worker nodes, and if one goes down, all the pods will be rescheduled onto the other node; there is no other option. If you have scheduling enabled for the master node (which is not good practice), the pods can go to the master node(s) as well.
@jamilshaikh07 2 years ago
@@justmeandopensource My guess was also the same. However, 'kubectl get nodes' shows worker1 in the 'NotReady' state, the statefulset pod waits for worker1 to come back, and the pods never get rescheduled to worker2
@justmeandopensource 2 years ago
@@jamilshaikh07 The NotReady state is different from a node being down. NotReady usually means the kubelet on the node is having problems. The pod might already be scheduled on that node, but because the kubelet is misbehaving, it has issues getting the container runtime to start the container. You must first investigate why the node is in the NotReady state. You can always cordon and drain the node while you investigate; when you drain, the pods will be rescheduled. Or you can just delete the pod.
@vengamnaidu9273 4 years ago
For a statefulset we create two services, i.e. a normal service and a headless service. How do they differ in functionality and how do they work?
@justmeandopensource 4 years ago
The headless service is mandatory, as I explained in my previous comment, where I also explained why it is needed. The normal service is required to expose your application: only by exposing your deployment or statefulset as a service will you be able to access the application. Hope this makes sense. Thanks.
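The two services can be sketched side by side (the "mysql" names, label, and port are assumptions for a database-style example):

```yaml
# Headless service: per-pod identity/DNS only, no virtual IP
apiVersion: v1
kind: Service
metadata:
  name: mysql-headless
spec:
  clusterIP: None
  selector:
    app: mysql
  ports:
    - port: 3306
---
# Normal service: a stable cluster IP that load-balances across the pods
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
```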
@raizik 4 years ago
@@justmeandopensource Hello, can you please explain how to create a NodePort service that would do the load balancing between the pods? So far I've tried this method itnext.io/exposing-statefulsets-in-kubernetes-698730fb92a1 but it only gives me a NodePort service per pod.
@justmeandopensource 4 years ago
@@raizik Exposing a statefulset as a NodePort service should load balance between all the pods in the statefulset; I don't understand why it would create a service per pod. In your service definition, you normally select the pods using labels. Did you manually create the service yaml, or did you use kubectl expose statefulset?
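A hand-written NodePort service that selects all the statefulset's pods by label might look like this (a sketch; the name, label, and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport        # illustrative name
spec:
  type: NodePort
  selector:
    app: nginx              # must match the StatefulSet's pod template labels
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080       # assumed port in the 30000-32767 NodePort range
```

Because the selector matches every pod in the set, traffic arriving at the node port is distributed across all of them.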
@pengdu7751 4 years ago
Great video! Also, how do you get that grey suggestion text when typing Linux commands? Very cool!
@justmeandopensource 4 years ago
Hi Peng, thanks for watching. That is the zsh-autosuggestions plugin on top of oh-my-zsh on the zsh shell. It suggests commands from the history as you start typing. I can't live without it.
@pengdu7751 4 years ago
@@justmeandopensource Awesome! Best thing I've heard today! Also subscribed to the channel. Thanks!
@justmeandopensource 4 years ago
Lovely. Thanks for your interest.
@vengamnaidu9273 4 years ago
Hi Venkat, what is the use of serviceName in a statefulset and the headless service?
@justmeandopensource 4 years ago
Hi Vengam, The way a statefulset works is by use of a headless service. A service will get created, which is required for the statefulset, but this is not a normal service that you can use to access your application; that is why it is called a headless service. The primary purpose of this service is to provide DNS capabilities to the pods in the statefulset. Thanks.
@vengamnaidu9273 4 years ago
Will the service take care of syncing data to all the PVCs?
@justmeandopensource 4 years ago
@@vengamnaidu9273 No, it won't. The service is only there so that each pod in the statefulset can identify and talk to the others in a consistent manner.
@p55pp55p 5 years ago
Nice video, but when I tried to adopt your solution it always ended up with a "Pod has unbound immediate PersistentVolumeClaims" error. I also checked the PVC and there is explicitly "storageclass.storage.k8s.io "manual" not found". Did you create a storage class named "manual" beforehand? If so, can you share the content? Thanks. Peter
@p55pp55p 5 years ago
I found what the issue was in my case. I didn't give a number at the end of the PV name, hence the PVC was unable to bind the PV. When I gave a number suffix to the PV name, everything started to work properly.
@justmeandopensource 5 years ago
Hi Peter, Thanks for watching this video. I haven't created any storage class named "manual". All you saw in the video was as-is, from start to finish, on a clean cluster; if you had the same setup, it should work. I was doing static provisioning in this video: I created the persistent volumes manually so that when the statefulset is deployed with claims, they make use of those volumes.

If it didn't work, you can check the video below, where I showed how to use dynamic NFS provisioning. If you follow that video and then continue to this statefulset video, you don't have to create the persistent volumes yourself; they will get created automatically. I have explained creating a storage class in that video as well. kzbin.info/www/bejne/d5LZn4SwjKmHe80 Thanks, Venkat
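A statically provisioned NFS PV of the kind described above might look roughly like this (a sketch; the name, server IP, and export path are assumptions — the storageClassName must match what the claim template requests):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-pv1               # illustrative name
spec:
  storageClassName: manual       # claims requesting "manual" can bind to this PV
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 172.42.42.100        # assumed NFS server address
    path: /srv/nfs/kubedata/pv1  # assumed export path
```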
@p55pp55p 5 years ago
@@justmeandopensource Thanks Venkat for your reply. My issue was related to naming. All sorted out. Thanks. Peter
@justmeandopensource 5 years ago
Cool, glad you resolved the issue. Good stuff. Cheers
@mahendrapalla1373 3 years ago
From a visitor: which terminal are you using, and how come the commands show up before you type them? (I know, maybe you used those commands before.) But which terminal is it? Thanks, be safe
@justmeandopensource 3 years ago
Hi, thanks for watching. I did a video explaining my terminal setup; here is the link. kzbin.info/www/bejne/hoa6n3aYp56WhJo
@mahendrapalla1373 3 years ago
@@justmeandopensource Thanks for the quick reply, will watch. Btw, I have finished watching the dynamic and static PV videos. Is there any video on how to use Vault, and on how to deploy statefulsets (databases) at production scale?
@justmeandopensource 3 years ago
@@mahendrapalla1373 Haven't done anything on Vault yet.
@mahendrapalla1373 3 years ago
@@justmeandopensource Please do a video on a database deployment with statefulsets and a headless service, with an application attached. I will be grateful. Thank you 🙂
@cheikamedmaoulida7367 5 years ago
Hi Venkat, hoping you are doing well, and thanks for this demo. I have a little problem: when I want to create a file inside a container, it says I don't have permission. I connect as root, so I don't know exactly why it is not possible.
@justmeandopensource 5 years ago
Hi Cheik, thanks for watching this video. Can you explain in more detail what you are trying to do? How are you connecting to the container, and what file are you trying to create, and where? If you could share the actual commands, it would be helpful. Thanks.
@cheikamedmaoulida7367 5 years ago
@@justmeandopensource OK, I just followed you and did what you did. I connected to the pod with "kubectl exec -it nginx -- /bin/sh" and went to /var/www. When I wanted to create a file with "touch test", exactly as you did, I got "touch: cannot touch 'test': Permission denied". I do all this locally.
@justmeandopensource 5 years ago
@@cheikamedmaoulida7367 Hi, it's been a while since you posted this comment. Sorry I couldn't get to it; it was flagged as spam for some reason and I just noticed it. Do you still have this issue? How did you set up your k8s cluster? Maybe I can try to reproduce and understand it. Thanks.
@JonnieAlpha 4 years ago
@@justmeandopensource I had exactly the same issue. If I connect to the container's /var/www folder and touch a file there, I get "touch: cannot touch 'hello': Permission denied". The solution is to make sure that you really run "chmod -R 777 srv/nfs/" on the NFS server.
@justmeandopensource 4 years ago
@@JonnieAlpha Thanks for watching. Yeah, I should have mentioned that in this video, but I did mention in my "Dynamic NFS provisioning" video that you need to chmod 777 /srv/nfs/kubedata.
@rayofblue 2 years ago
The PV was created fine, but when creating nginx I got this error: bad option for several filesystems nfs cifs you might need a /sbin/mount
@rayofblue 2 years ago
Running apt install nfs-kernel-server on all worker nodes resolved this. The worker nodes already had nfs-common installed, so it's strange.
@justmeandopensource 2 years ago
Hmm. What version of Ubuntu are you running? Maybe there is now an additional requirement to get the NFS packages.
@rayofblue 2 years ago
@@justmeandopensource Ubuntu 20.04, and I am not sure if nfs-kernel-server is related. Maybe I didn't set permissions on /srv/nfs/kubedata
@vengamnaidu9273 4 years ago
How do I maintain etcd HA?
@justmeandopensource 4 years ago
Hi Vengam, There are various ways to provision a Kubernetes cluster with etcd. You can use the kubeadm method of provisioning, which sets up an etcd component on the master node but is not HA. You can set up your own etcd HA cluster and then use it as the datastore for your Kubernetes cluster. Or you can use the Kubespray method of automated cluster provisioning, where you can control how many etcd nodes you want in your cluster, and so on. Or you can follow the "Kubernetes The Hard Way" tutorial. kzbin.info/www/bejne/hKe0imiqqt10grs github.com/kelseyhightower/kubernetes-the-hard-way Thanks,
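If you go the external etcd route mentioned above, kubeadm can be pointed at the cluster via its ClusterConfiguration (a sketch; the endpoint IPs and certificate paths are illustrative):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    endpoints:                      # use an odd number of members for quorum
      - https://10.0.0.11:2379
      - https://10.0.0.12:2379
      - https://10.0.0.13:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```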
@ankitvarshney5989 4 years ago
Hi Venkat, I have a question: can the pods of an STS be bound to any PV? I mean, when I run kubectl get pv,pvc, then pv-nfs-pv3 is available. Please reply
@LuisMiguelZapata 4 years ago
Hi Venkat! I really enjoy your videos, thanks for sharing. I would like to share an issue that happened to me when I was working with STS PVs and PVCs. When I tried to delete the PVC or the PV without deleting the pod first, the PVC stayed in the Terminating state and was never deleted by Kubernetes, so I had to edit the PVC and comment out/delete 2 lines on it:

finalizers:
- kubernetes.io/pv-protection

I don't know if this is the best way, but I didn't find anything about it in the documentation. Another way to do it is applying a patch:

kubectl patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

I'd like to know if you have faced this problem.
@justmeandopensource 4 years ago
Hi Luis, thanks for these details. I haven't used storage extensively enough to have encountered these issues, but good to know this. Cheers.