[ Kube 7 ] Kubernetes Pods Replicasets & Deployments

25,844 views

Just me and Opensource

Days ago

In this video I will show you how to use Pods, ReplicaSets and Deployments.
Github: github.com/jus...
For any questions/issues/feedback, please leave me a comment and I will get back to you as quickly as I can. If you liked this video, please share it with your friends and don't forget to subscribe to my channel.
Hope you found this video informative and useful. Thanks for watching.
If you wish to support me:
www.paypal.com...
#kubernetes #k8s #justmekubernetes #replicasets #deployments #pods

Comments
@mathewkargarzadeh3158 · 5 years ago
Really liked the deployment part, although the ReplicaSet was explained pretty well too. Getting a copy of the kube-proxy YAML and using it as a sample to create your own YAML was my takeaway! Thanks Venkat! You are making me addicted to your videos. All appreciated. Mat.
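The tip above, dumping an existing object's manifest as a starting template, can be sketched like this (a sketch, not from the video; it assumes a kubeadm-style cluster where kube-proxy runs as a DaemonSet in kube-system):

```shell
# Dump the kube-proxy DaemonSet manifest to use as a YAML template
kubectl -n kube-system get daemonset kube-proxy -o yaml > sample.yaml

# Edit the copy before reusing it: change metadata.name and strip the
# runtime-only fields (status, uid, resourceVersion, creationTimestamp)
```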
@justmeandopensource · 5 years ago
That's great to hear. Thanks Mathew.
@harshaghanta1 · 1 year ago
First of all, I really like your content and the way you simplify and explain things. A few observations on this video: at 17:14 you mentioned that the metadata labels get applied to every pod created by this application, but I am not seeing that happen with the created pods. Only the labels specified in the template section are applied to the newly created pods.
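For reference, the two label locations in question look like this in a ReplicaSet manifest (an illustrative sketch; names are made up). The labels under the top-level metadata belong to the ReplicaSet object itself, and only the labels under spec.template.metadata are stamped onto the pods it creates:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
  labels:
    tier: frontend        # label on the ReplicaSet object only
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx          # must match the template labels below
  template:
    metadata:
      labels:
        app: nginx        # applied to every pod this ReplicaSet creates
    spec:
      containers:
      - name: nginx
        image: nginx
```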
@seyedmahdi4 · 2 years ago
nice
@justmeandopensource · 2 years ago
Thanks for watching.
@jalandharbehera2456 · 3 years ago
Very good sir... salute to you 🙏🙏
@travelersnotebook3503 · 3 years ago
Very well explained... simple and to the point.
@justmeandopensource · 3 years ago
Thanks for watching.
@stilianstoilov3728 · 4 years ago
24:35 Hi, the easiest way to create a YAML file is to use the --dry-run option. This only generates the YAML (no resources are created!), and from that point you can easily modify it: kubectl -n test create deployment hr-app --image=nginx:1.7.8 --dry-run=client -o yaml > deploy.yaml
@justmeandopensource · 4 years ago
Hi Stilian, thanks for watching. Yes exactly. I have mentioned this tip in various videos. Thanks for sharing it here. Cheers.
@heygokul · 5 years ago
Thanks again for teaching me more about Kubernetes. Can you please explain a little about apiVersion? What are the different versions, etc.?
@justmeandopensource · 5 years ago
Hi Gokul, thanks for watching this video. You interact with your Kubernetes cluster through APIs (Application Programming Interfaces). Every resource in Kubernetes has its own API path. Every resource is developed separately, and you will see different versions of the API for each resource: alpha during initial development, beta when tested sufficiently, and then stable. You can read more about it at the official Kubernetes link below. kubernetes.io/docs/reference/using-api/api-overview/ Thanks.
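A quick way to see which API groups and versions a given cluster actually serves (the output varies by cluster version, so treat this as a sketch):

```shell
# All apiVersions the cluster supports (e.g. v1, apps/v1, batch/v1)
kubectl api-versions

# Resources together with their API group, kind and short names
kubectl api-resources
```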
@romantsyupryk3009 · 4 years ago
Thanks so much for this tutorial.
@justmeandopensource · 4 years ago
Hi Roman, thanks for watching.
@romantsyupryk3009 · 4 years ago
@justmeandopensource Very good Kubernetes lessons. The information is very well explained and presented. Thank you very much for sharing your knowledge with us.
@justmeandopensource · 4 years ago
@romantsyupryk3009 You are welcome. Cheers.
@sarfarazshaikh · 5 years ago
Awesome Tutorial
@justmeandopensource · 5 years ago
Hi Sarfaraz, thanks for watching this video.
@Normal833 · 5 years ago
outstanding
@justmeandopensource · 5 years ago
Thanks Feiren for trying this video.
@saurabhpandey8250 · 5 years ago
Thanks a lot for these videos.
@justmeandopensource · 5 years ago
Thanks for watching Saurabh. Cheers.
@omsiva4616 · 5 years ago
Hi Venkat, I need your help. I have created a Docker image for a LAMP stack; it includes nginx/Magento/an internal plugin/PHP. I want to use this Docker image in Kubernetes with multiple persistent volumes: 1. /var/www/html/live 2. /var/lib/mysql 3. /etc/*. The LAMP stack is running in a single container/pod. I'm not sure how to expose all the services in this case, and also not sure how to create multiple volumes in a single pod. I have gone through the topics about exposing services and persistent volumes, but they all relate to individual pods; here I need to expose everything from a single container. Please take your time and help me create the syntax for this.
@justmeandopensource · 5 years ago
Hi, thanks for watching. You can have as many volumes as you like mounted in a single pod. You could create all the needed persistent volumes manually yourself, or have them auto-provisioned using dynamic volume provisioning. Then in the pod definition (or deployment) manifest you can use the claim templates/volumeMounts options to include any number of volumes that you want to mount.
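A minimal sketch of what the reply describes: one pod mounting several pre-created PersistentVolumeClaims (the image and claim names here are hypothetical and the PVCs would need to exist first):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lamp
spec:
  containers:
  - name: lamp
    image: my-lamp-image          # hypothetical image name
    volumeMounts:
    - name: webroot
      mountPath: /var/www/html/live
    - name: dbdata
      mountPath: /var/lib/mysql
  volumes:
  - name: webroot
    persistentVolumeClaim:
      claimName: webroot-pvc      # pre-created PVC (hypothetical name)
  - name: dbdata
    persistentVolumeClaim:
      claimName: dbdata-pvc       # pre-created PVC (hypothetical name)
```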
@sridharjanaswamy · 4 years ago
A detailed README with required prerequisites in your github would help configure our GCP/AWS instances with appropriate packages.
@justmeandopensource · 4 years ago
Hi Sridhar, thanks for watching. I will update it when I get some time. Cheers.
@Mohammed-co3ux · 2 years ago
Hi Venkat, thanks for the great content. I need your help. I'm running a deployment with a ReplicaSet that creates 6 pods (replicas) when deployed. In general all 6 replicas get deployed at once and tend to use the database at the same time, which eventually brings down the DB server. Is there any way to delay pod creation between the first replica and the second, the second and the third, etc., so the first replica can connect to the DB once it spins up, then the second replica, and so on?
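This question got no reply in the thread. One possible approach (an assumption, not from the video) is to switch the workload to a StatefulSet: its default OrderedReady pod management starts pods one at a time, waiting for each to pass its readiness probe before creating the next. With a readiness probe that only passes once the app has its DB connection, the startup load is staggered:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  serviceName: app                    # headless service, assumed to exist
  replicas: 6
  podManagementPolicy: OrderedReady   # default: create pods sequentially
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: my-app:latest          # hypothetical image
        readinessProbe:               # next pod waits for this to pass
          httpGet:
            path: /healthz            # assumed endpoint checking the DB link
            port: 8080
          initialDelaySeconds: 10
```

The trade-off is that a StatefulSet gives pods stable identities and ordered rollouts, which is slower than a Deployment's parallel rollout.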
@Java_basic · 4 years ago
Great content. I have a doubt: for a Kubernetes service of type LoadBalancer, what type of load balancer does it use (NLB, ALB or CLB)?
@justmeandopensource · 4 years ago
Hi Ranjit, thanks for watching. I believe it will use a classic load balancer by default. aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-services-cluster/
@harshaghanta1 · 1 year ago
Hi, isn't it possible to define the pod template in one file and the ReplicaSet in a different file? Something like: the pod definition file would have only the container-related info, and my ReplicaSet file would have only the matching criteria, like the code below. It's forcing me to define the template section.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  replicas: 2
  selector:
    matchLabels:
      appType: webserver
@rishabdussoye8111 · 3 years ago
How do I know which pod is actually running the container in the ReplicaSet? I'm not able to tell.
@Probattu · 5 years ago
Thanks for the great videos. I have one doubt: how many containers can we run inside a pod? Thank you in advance.
@justmeandopensource · 5 years ago
Hi Battu, thanks for watching. In general you will run one container in a pod. You can run more than one container as well, but it's better to stick with one. For example, you can run an nginx container and a mysql container in the same pod, but when it comes to scaling, you won't be able to scale the individual containers.
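The two-container example from the reply would look roughly like this (a sketch; as the reply notes, both containers scale together as one pod, never individually):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-and-db
spec:
  containers:
  - name: nginx
    image: nginx
  - name: mysql
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD   # the mysql image refuses to start without
      value: example              # a password setting; value is illustrative
```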
@Probattu · 5 years ago
Thanks a lot, bro.
@justmeandopensource · 5 years ago
@Probattu You are welcome. Cheers.
@nagendrareddybandi1710 · 4 years ago
Hi sir, nice video, thanks for that. When we deploy the deployment file, the pods are scheduled on the workers based on the replicas number. If over a period of time one of the workers crashes, what happens? Are all its pods moved to another worker? Either way the application is still accessible to users, that's fine.
@justmeandopensource · 4 years ago
Yes, it will always try to maintain the number of replicas you defined provided your cluster has enough resources.
@nagendrareddybandi1710 · 4 years ago
@justmeandopensource Thanks for your reply, sir. Are all the pods moved to another worker?
@user-rp9iis1en6h · 4 years ago
Could you please let us know where we can find all the available parameters for writing a YAML manifest for various tasks, such as deployment creation, cluster creation and service creation?
@justmeandopensource · 4 years ago
Hi Zaman, thanks for watching. You can use the kubectl explain command for detailed documentation of every single option/field that you can use in a manifest. For example, if you are creating a manifest for a pod, you can check "kubectl explain pod". Cheers.
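For example (output shape depends on your kubectl version):

```shell
# Top-level documentation for the Pod resource
kubectl explain pod

# Drill into a nested field
kubectl explain pod.spec.containers.resources

# Dump the entire field tree in one go
kubectl explain pod --recursive
```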
@Sharma_Mohit04 · 4 years ago
You can use the command "kubectl explain pod --recursive".
@mnitin11 · 3 years ago
Really well explained. When I tried creating pods, the pod status shows CrashLoopBackOff; any way to troubleshoot? I did go through the logs but still can't solve it.
@justmeandopensource · 3 years ago
Hi Mansi, thanks for watching. It may not have got to the point where it starts logging. CrashLoopBackOff basically means the container starts and exits immediately. It depends on the image you are using and what it actually does. Look at the output of "kubectl get events" and it might give you a clue. Or "kubectl describe" the pod/deployment, and in the bottom section you will find some details.
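A typical triage sequence along the lines of this reply (the pod name is a placeholder):

```shell
# Recent cluster events, often showing the failure reason
kubectl get events --sort-by=.metadata.creationTimestamp

# Exit code and last state appear near the bottom of the describe output
kubectl describe pod mypod

# Logs from the previous (crashed) container instance, if it logged anything
kubectl logs mypod --previous
```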
@kdetony · 5 years ago
Congrats!!! #pods #k8s
@justmeandopensource · 5 years ago
Hi Anthony, thanks for watching this video.
@user-ti5uc2zu8m · 5 years ago
nice, thx a lot!
@justmeandopensource · 5 years ago
Thanks for watching this video.
@vini007mca1 · 5 years ago
Hi Venkat, I tried to expose the sample nginx deployment (from your vagrant setup). Though the deployment and its service (NodePort) are running, I am unable to see it in the browser, nor can I curl it from my host machine (macOS). Note: I have configured the /etc/hosts file on my host machine and I can ping worker1 and worker2 as well. Not sure where I should be looking to resolve the issue.
@justmeandopensource · 5 years ago
Hi Vinay, thanks for watching. That's all that is needed: you expose the deployment as a service of type NodePort, and then from your host machine you can access it by connecting to any node on that port. I haven't tried it on macOS, but the process flow should be the same. Make sure your nginx deployment pods are actually running, then log in to one of the worker nodes using ssh and see if it is listening on that nodePort using the netstat command.
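The checks suggested in this reply, spelled out (the service name, hostname and port number are placeholders for whatever your cluster assigned):

```shell
# Find the assigned nodePort (the second number in the PORT(S) column)
kubectl get svc nginx

# From the host, hit any worker node on that port
curl http://worker1:31234        # placeholder hostname and nodePort

# On a worker node, confirm something is listening on the nodePort
ssh worker1 'netstat -tlnp | grep 31234'
```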
@raghueeti5237 · 4 years ago
Could you please make a video on jsonpath parsing of various resources and conditions, like getting the pods that are running, or getting the nodes and their external IPs in kubectl? Thank you!
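A few examples of the kind of queries being asked about, using patterns kubectl's JSONPath output supports (adjust resource names to your cluster):

```shell
# Pods that are currently in the Running phase
kubectl get pods --field-selector=status.phase=Running

# Names of all nodes
kubectl get nodes -o jsonpath='{.items[*].metadata.name}'

# External IPs of all nodes (empty on clusters without ExternalIP addresses)
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
```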
@richardtreu2967 · 5 years ago
Hi Venkat, do you know if there is a way to get a DNS-resolvable name for containers in Deployments? For StatefulSets and Pods it's no problem, but for Deployments DNS resolution seems to be non-existent.
@justmeandopensource · 5 years ago
Hi Richard, I think it's possible to have DNS-resolvable names for your pods. Let me bring up my cluster and do some testing. Will keep you posted. Thanks.
@richardtreu2967 · 5 years ago
@justmeandopensource Thanks. I just found this, which seems to work: stackoverflow.com/questions/54488280/how-do-i-get-individual-pod-hostnames-in-a-deployment-registered-and-looked-up-i
@justmeandopensource · 5 years ago
Yeah, that's pretty much where I got to. DNS entries are created only for services. If you have a headless service (as with a StatefulSet), pods will get DNS names as well. I also explored adding the hostname option to the pod spec, but it didn't create a DNS record.
@richardtreu2967 · 5 years ago
@justmeandopensource The question now is whether this workaround is stable (using the IP with dashes and the domain suffix)... Thanks for looking at this problem.
@justmeandopensource · 5 years ago
If it's a single-replica deployment, then you can go with setting the hostname and subdomain name in the pod specification; you get a DNS name that won't change. It won't work if you have more than one replica, I guess: how would all the containers get the same name in DNS? So the IP-with-dashes-and-domain kind of DNS name is the way to go, but when your pod crashes and a new one is started, it will have a different name. Another question: why do you want a DNS-resolvable name for your pods in a Deployment? It makes sense in a StatefulSet, where each pod needs to talk to the others in a consistent way, but for a Deployment, what's the use case? You normally access the app through a service, and for debugging a particular pod/replica you just kubectl exec into it. Thanks.
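For completeness, the headless service this thread keeps referring to is just a Service with clusterIP set to None; pods matched by its selector get per-pod DNS records (a sketch; the names and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-headless
spec:
  clusterIP: None    # headless: DNS returns the pod IPs directly
  selector:
    app: app
  ports:
  - port: 80
```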
@Nimitali · 4 years ago
I recently started practicing on a k8s cluster using vagrant (1 master and 2 workers using VirtualBox on local Windows), but I'm unable to create a pod using kind: ReplicaSet. I get the error: unable to recognize "rs.yaml": no matches for kind "ReplicaSet" in version "extensions/v1beta1". I tried different apiVersions but it didn't work. Could you please share a solution to this issue? Also, a ReplicationController works fine, but there is some issue with ReplicaSet. I'd really appreciate it if you could help me out here.
@justmeandopensource · 4 years ago
Hi Keenjal, thanks for watching. Since Kubernetes v1.16, some of the apiVersions have been removed; one of them is extensions/v1beta1. Please watch my video below for more details. In short, just update your YAML file to use apps/v1 instead of extensions/v1beta1. kzbin.info/www/bejne/a2OoomNjedinqac Cheers.
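The fix amounts to changing the apiVersion; note that under apps/v1 the spec.selector field is also mandatory and must match the template labels (an illustrative manifest):

```yaml
# Before (removed in Kubernetes v1.16): apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 2
  selector:             # required in apps/v1
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```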
@Nimitali · 4 years ago
@justmeandopensource Working perfectly now, thanks.
@justmeandopensource · 4 years ago
@Nimitali No worries. Cheers.
@sivaguruvinayagam7779 · 5 years ago
Hi Venkat, have you already made videos about liveness and readiness probes, please?
@justmeandopensource · 5 years ago
I might have explained it in some of the videos, but I haven't done a video specifically on that. You can refer to the link below, which details both these checks. Thanks.
@nitinmuteja · 4 years ago
Why do we have labels on the pods as well, under the ReplicaSet? We already know that the labels for a ReplicaSet apply to all its pods.
@justmeandopensource · 4 years ago
Hi Nitin, thanks for watching. We don't normally deploy ReplicaSets directly; we deploy Deployments, StatefulSets or DaemonSets. ReplicaSets are created by the controllers when we deploy Deployments. Labels are attached at all levels; that is how resources are linked. Pods are labelled so that ReplicaSets can find and control them, and ReplicaSets are labelled so that the top-level Deployments can find and control them.
@johnclarkon369 · 4 years ago
Hello, long time no see. Hope you are safe and healthy. Recently I did a networking CNI test using macvlan as the k8s CNI plugin. The final goal is to expose pod IPs to the outside world just like a host server (in other words, the pod IP subnet is the same as the k8s master/worker host IP subnet), with no flannel/calico; you don't even have to create a k8s service to expose your port, just use macvlan as the master CNI. Here is how I did it. I tested on k8s 1.16 and 1.18, and both work well.

Topology:
master 172.17.0.1/16 gw .254
worker 172.17.0.2/16 gw .254
Final goal: pod IP subnet 172.17.0.0/16, just like the k8s host IP addresses. From the outside world's perspective there is no difference.

First, delete the flannel pods. You can use "kubectl get pods -n kube-system" to check that the kube-flannel pods are terminated:
$ kubectl delete -f raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Then on all nodes:
$ rm -rf /etc/cni/net.d/*
$ ifconfig cni0 down
$ ip link delete cni0
$ ifconfig flannel.1 down
$ ip link delete flannel.1
$ rm -rf /var/lib/cni/
$ rm -f /etc/cni/net.d/*
$ systemctl daemon-reload && systemctl restart kubelet

Check the networking plugins; you should see macvlan. If not, go to github.com/containernetworking/plugins/releases, download the CNI binaries, unzip cni-plugins-linux-amd64-v0.8.6.tgz and copy the files to /opt/cni/bin:
$ ls /opt/cni/bin
bandwidth dhcp flannel host-local loopback portmap sbr tuning bridge firewall host-device ipvlan macvlan ptp static vlan

Then create the macvlan configuration on each k8s node (master and worker) at /etc/cni/net.d/10-maclannet.conf. Here is an example; you can change the IP settings as you want, but remember that this config goes on every k8s node. The "master" value defines your host machine's NIC; for most use cases this is eth0, but my VM testing env uses ens3, so modify this for your env (check which NIC to use with "ip a" or "ifconfig"):
{
  "cniVersion": "0.3.1",
  "name": "macvlannet",
  "type": "macvlan",
  "master": "ens3",
  "mode": "bridge",
  "isGateway": true,
  "ipMasq": false,
  "ipam": {
    "type": "host-local",
    "subnet": "172.17.0.0/16",
    "rangeStart": "172.17.0.100",
    "rangeEnd": "172.17.1.200",
    "gateway": "172.17.0.1",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}

Create a pod for testing. cat
@kanavpeer7926 · 5 years ago
Hello, first of all thanks for sharing these sessions on YouTube; they are very good for newcomers. I would like to ask you one question. I have provisioned a 4-node cluster on AWS using Kubespray: 2 nodes in us-east-1a and 2 in us-east-1b. I have one issue: when I launch a pod, it gets a node in us-east-1a but its volume is automatically created in us-east-1b. Do you have any use case, or have you faced this type of issue anywhere?
@justmeandopensource · 5 years ago
Hi Kanav, thanks for watching this video. What type of volume are you creating? Can you share the manifest YAML please? Thanks, Venkat
@kanavpeer7926 · 5 years ago
@justmeandopensource
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2019-04-20T01:37:28Z
  generateName: kafka-
  labels:
    app: kafka
    controller-revision-hash: kafka-dc564bc4c
    statefulset.kubernetes.io/pod-name: kafka-0
  name: kafka-0
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: kafka
    uid: bbdc6e93-4581-11e9-9772-0a7103f65636
  resourceVersion: "10604685"
  selfLink: /api/v1/namespaces/default/pods/kafka-0
  uid: db8d904e-630c-11e9-997c-0a7103f65636
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-role.kubernetes.io.kafka
            operator: In
            values:
            - kafka
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - kafka
        topologyKey: kubernetes.io/hostname
  containers:
  - command:
    - sh
    - -c
    - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-${HOSTNAME##*-}.kafka-svc.default.svc.cluster.local:9093 KAFKA_BROKER_ID=${HOSTNAME##*-} /etc/confluent/docker/run
    env:
    - name: KAFKA_HEAP_OPTS
      value: -Xms2g -Xmx2g -XX:MetaspaceSize=96m -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80
    - name: KAFKA_OPTS
      value: -Dlogging.level=INFO
    - name: KAFKA_ADVERTISED_LISTENERS
      valueFrom:
        configMapKeyRef:
          key: advertised.listeners
          name: kafka-cm
    - name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
      valueFrom:
        configMapKeyRef:
          key: KAFKA_AUTO_CREATE_TOPICS_ENABLE
          name: kafka-cm
    - name: KAFKA_OFFSETS_RETENTION_MINUTES
      valueFrom:
        configMapKeyRef:
          key: KAFKA_OFFSETS_RETENTION_MINUTES
          name: kafka-cm
    - name: KAFKA_ZOOKEEPER_CONNECT
      valueFrom:
        configMapKeyRef:
          key: connect
          name: kafka-cm
    image: confluentinc/cp-kafka:4.0.2-2
    imagePullPolicy: Always
    name: kafka
    ports:
    - containerPort: 9093
      name: server
      protocol: TCP
    resources:
      requests:
        cpu: 300m
        memory: 2Gi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/lib/kafka
      name: kafkadata
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-vfjqm
      readOnly: true
  dnsPolicy: ClusterFirst
  hostname: kafka-0
  nodeName: ip-10-160-23-11.ec2.internal
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  subdomain: kafka-svc
  terminationGracePeriodSeconds: 30
  volumes:
  - name: kafkadata
    persistentVolumeClaim:
      claimName: kafkadata-kafka-0
  - name: default-token-vfjqm
    secret:
      defaultMode: 420
      secretName: default-token-vfjqm
This PVC claim works fine on AWS; it's just not balancing the last pod with the correct EBS volume.
@mihiiracharya2638 · 5 years ago
Where do you install Vagrant? I don't understand. Is it like the Amazon Web Services console?
@justmeandopensource · 5 years ago
Hi Mihir, thanks for watching. I have installed Vagrant on my local Linux machine and I use it to provision my Kubernetes cluster. You can check out my other video where I explain how to use Vagrant to provision the cluster. kzbin.info/www/bejne/rYHHenWbjK99qck
@mihiiracharya2638 · 5 years ago
Thanks for the link. I will update you after watching.
@justmeandopensource · 5 years ago
@mihiiracharya2638 Cool.
@chandinisaleem692 · 3 years ago
Hello @justmeandopensource. I am currently working on a Kube project and my pod is running successfully. How do I see the output of my pod? (It has Python code that sends notifications to Slack.) I found that a CronJob can show the output, but is there a way to check my pod's output as a one-off, without having to schedule it? Hoping for your reply.
@justmeandopensource · 3 years ago
Hi Chandini, I don't think I fully understood your question. Well, if you want to look at the output of any pod (provided the container logs its output to stdout), you just have to use the kubectl logs command to view the pod output.
@zulhilmizainudin · 5 years ago
You mentioned that one of the benefits of using a Deployment compared to a ReplicaSet is that it can help with rolling updates and point 25% of the traffic to the new/old nginx container version. Could you make a video for that and show us a demo? Thanks!
@justmeandopensource · 5 years ago
Hi Zulhilmi, thanks for watching. In the Deployment resource you can configure a rolling update strategy, where you specify how your deployment is updated on new changes: whether to roll out the update to all the pods at once or to update a certain percentage of the workloads at a time. I will see if I can do a video on that. I already have videos recorded for the next two months, but I will add this to the list.
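For reference, the 25% figure in the question matches the Deployment defaults for a rolling update. Written out explicitly it looks like this (an illustrative manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%         # extra pods allowed above the desired count
      maxUnavailable: 25%   # pods that may be down during the rollout
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
```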
@zulhilmizainudin · 5 years ago
@justmeandopensource Thanks!
@kdetony · 5 years ago
Hi bro, excuse my English; I greet you from Peru. Chapters 1, 2, ... 6, where do I find them?
@justmeandopensource · 5 years ago
Hi Anthony, I am not so good at organising the videos in KZbin and making them easy to find; I am learning these skills. You can see all my videos in the playlist below. kzbin.info/www/bejne/j6vEiqSujJWqfdU Thanks.
@kdetony · 5 years ago
@justmeandopensource Regards!
@justmeandopensource · 5 years ago
@kdetony Cheers.
@HarshaVardhan-jf9sd · 5 years ago
How is a ReplicaSet different from mentioning replicas in a deployment.yaml file?
@justmeandopensource · 5 years ago
Hi Harsha, thanks for watching this video. The following excerpt from the official Kubernetes docs might help you understand the difference. kubernetes.io/docs/concepts/workloads/controllers/replicaset/ ``A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don't require updates at all. This actually means that you may never need to manipulate ReplicaSet objects: use a Deployment instead, and define your application in the spec section.``
@HarshaVardhan-jf9sd · 5 years ago
@justmeandopensource Thanks
@justmeandopensource · 5 years ago
@HarshaVardhan-jf9sd You are welcome. Cheers.
@hassije8615 · 5 years ago
Hi, thank you for these videos. Could you give me the vagrant password please? Thanks.
@justmeandopensource · 5 years ago
Hi Hassane, thanks for watching. The password is in the bootstrap.sh script. It is "kubeadmin". Thanks.
@nageshkampati4514 · 5 years ago
Please tell me the password of kmaster in your vagrant setup.
@justmeandopensource · 5 years ago
Hi Nagesh, thanks for watching this video. The root password for all the nodes is "kubeadmin". Thanks.
@nageshkampati4514 · 5 years ago
@justmeandopensource Thank you so much.
@justmeandopensource · 5 years ago
@nageshkampati4514 You are welcome.
@gaddamravikumar2189 · 3 years ago
Please change the background colour.
@justmeandopensource · 3 years ago
At which point?
@gaddamravikumar2189 · 3 years ago
@justmeandopensource While you're running commands.
@justmeandopensource · 3 years ago
Noted, thanks.