An absolute must-watch for anyone attempting to set up monitoring on their k8s cluster. I've been hunting this information down for several weeks now, and although there are numerous sites on the web talking about the topic and showing off some cool-looking screenshots, none comes close to the perfect job you've done here. Keep the good work going, mate. Awesome stuff.
@justmeandopensource 5 years ago
Thanks for watching this video and taking time to comment. You made my day. Cheers.
@waterkingdom9839 5 years ago
Hello Venkat, can you also add a video using the Persistent Volume approach and not just dynamic provisioning? It would make things easier for viewers not using the NFS-based approach.
@justmeandopensource 5 years ago
@@waterkingdom9839 Hi, since I am using bare metal, I can't use any other storage provisioners; NFS is the easiest one, and I wanted to show persistence done the proper way. May I know what you mean by the persistent volume approach? If I don't use dynamic provisioning, I can make use of hostPath volumes, but those live on individual worker nodes, so we have to define a nodeSelector to schedule the pod to the same host, which adds more complexity. So I thought of going with the standard approach. Thanks.
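For readers wondering what that hostPath alternative looks like, here is a minimal sketch; the node name, image, and paths are all hypothetical:

```yaml
# Hypothetical example: pinning a Grafana pod to the one worker node that
# holds its hostPath data. Node name, image, and paths are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      nodeSelector:
        kubernetes.io/hostname: kworker1   # pod must land where the data lives
      containers:
      - name: grafana
        image: grafana/grafana
        volumeMounts:
        - name: data
          mountPath: /var/lib/grafana
      volumes:
      - name: data
        hostPath:
          path: /data/grafana
          type: DirectoryOrCreate
```

If that node goes down, the pod cannot be rescheduled elsewhere with its data, which is the extra complexity mentioned above.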
@abrahamolamipo6449 5 years ago
Hi Venkat, just want to say big thanks to you for your videos...you've been of great help to me and many out there. Keep up the good work.
@justmeandopensource 5 years ago
Hi Abraham, thanks for watching this video and taking time to comment and appreciate. Cheers
@techelevatesolutions 3 years ago
Great instruction and impressively efficient use of kubectl commands. It's worth mentioning for viewers that the steps shown may no longer work as indicated: the use of Tiller with Helm is now a deprecated approach for installing Prometheus, as is the command "helm init", and the "stable" repo is also reportedly being phased out of the Helm community's projects.
@justmeandopensource 3 years ago
Hi Alonso, thanks for watching and explaining the current state of play of this process. The problem is that any video I do in this space gets outdated quickly and I have to do follow-up videos. This is on my list and I will try to get to it. Cheers.
@sunilkumarmatangi 3 years ago
Hi Venkat, you are doing a wonderful job; it has been a great help in learning Kubernetes the easy way. Keep rocking, all the best. I have shared many of your videos with my friends and they are super happy.
@justmeandopensource 3 years ago
Wow! That is much appreciated. Thanks for your kind act. Cheers.
@rishiabhishektanuku 4 years ago
Awesome, bro. I was trying to learn the monitoring tools for Kubernetes and have seen a lot of tutorials, but they all use predefined configuration, which I did not understand. Your explanation is really great from beginning to end. Thank you.
@justmeandopensource 4 years ago
Hi Abhishek, thanks for watching. Glad that you found it useful. Cheers.
@rahulshekharpandey 4 years ago
Very awesome video; hats off to you for explaining everything step by step. Can't wait to watch your next video. Thank you.
@justmeandopensource 4 years ago
Hi Rahul, thanks for watching. Hope you are aware of this whole playlist where I have over 100 videos related to Kubernetes.
@walidshouman 4 years ago
Thanks for the great tutorials. Some implementation notes:
- In the repo's yamls/nfs-provisioner/deployment.yaml, both ```spec.templates.spec.containers[0].env[2].value``` and ```spec.templates.spec.volumes[0].nfs.path``` must be changed to the mounted NFS directory.
- Dynamic provisioning can be enabled by logging into the master and editing ```/etc/kubernetes/manifests/kube-apiserver.yaml``` to ensure ```DefaultStorageClass``` is present in the enable-admission-plugins list in ```spec.containers[0].command```, i.e. ```--enable-admission-plugins=NodeRestriction,DefaultStorageClass```.
- There are 5 Prometheus services; ```service/pm-prometheus-server``` is the one we want to set a NodePort for.
- The NFS export will need the ```no_root_squash``` option for Grafana to work, even though this is bad practice.
- The NFS export will need ```insecure``` if the nodes use a NAT adapter; see blog.bigon.be/2013/02/08/mount-nfs-export-for-machine-behind-a-nat/
- The NFS options I've tried with are ```rw,sync,no_subtree_check,insecure,no_root_squash```.
Thanks again ^_^
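For anyone applying the first note above, the two places in the provisioner's deployment.yaml that must agree with the NFS export look roughly like this; the server IP and path are examples, and the field names follow the common nfs-client-provisioner manifest:

```yaml
spec:
  template:
    spec:
      containers:
      - name: nfs-client-provisioner
        env:
        - name: NFS_SERVER
          value: 172.16.16.1          # example: your NFS server
        - name: NFS_PATH
          value: /srv/nfs/kubedata    # example: the exported directory
      volumes:
      - name: nfs-client-root
        nfs:
          server: 172.16.16.1         # must match NFS_SERVER above
          path: /srv/nfs/kubedata     # must match NFS_PATH above
```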
@justmeandopensource 4 years ago
Hi Walid, many thanks for watching and taking time to share your comments. Cheers.
@guymasumbuko6119 3 years ago
Once again, and as usual, a great video from Venkat!
@justmeandopensource 3 years ago
Many thanks for watching Guy. Cheers.
@machireddyshyamsunder987 4 years ago
Excellent, Venkat. I love your training videos and I am learning a lot.
@justmeandopensource 4 years ago
Many thanks for following this channel.
@zongzaili9701 2 years ago
Your video is very helpful for beginners, especially the second half on designing dashboards. Thanks.
@justmeandopensource 2 years ago
Thanks for watching.
@devmrtcbk 3 years ago
You are amazing. I think I will write a thank-you on all of your videos :)
@justmeandopensource 3 years ago
Hi Murat, appreciate your effort in thanking me.
@ankitrawat721 5 years ago
Hello Venkat, I have been watching and following your videos; all are very nicely presented and explained.
@justmeandopensource 5 years ago
Thanks Ankit for following this series. Cheers.
@richardmetzler7119 5 years ago
At 25:40, I think it is better to use the DNS name of your Prometheus service (my-svc.my-namespace.svc, so prometheus.prometheus.svc if I'm not mistaken) and use server access.
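As a sketch of this suggestion, a Grafana data source provisioning file could use the in-cluster DNS name; the release and namespace names below assume Prometheus was installed as release "prometheus" in the "prometheus" namespace:

```yaml
apiVersion: 1
datasources:
- name: Prometheus
  type: prometheus
  access: proxy   # "Server" access mode in the Grafana UI
  url: http://prometheus-server.prometheus.svc.cluster.local
```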
@justmeandopensource 5 years ago
Hi Richard, thanks for watching. Yes, that's how I would actually do it, but I only realized it after recording the video.
@deanwoods6295 5 years ago
Thanks for putting this video out. It's very clear and easy to follow.
@justmeandopensource 5 years ago
Thanks Dean.
@lavanyaanbu1234 5 years ago
It would be good if you could post a video on basic troubleshooting in K8s to start with.
@justmeandopensource 5 years ago
Viewers usually comment if they have any issues and I have been helping them. As you suggested, I think it would make sense to post troubleshooting videos too. Will keep this in mind. Thanks
@Siva-ur4md 5 years ago
Hello Venkat, thanks for the video. May I request a video on Prometheus queries (functions like rate, increase, sum)? It would help us understand Prometheus and Grafana better. Thanks.
@justmeandopensource 5 years ago
Hi Siva, I would love to do those topics. Let me see if I can; at the moment I am focusing on the Kubernetes and AWS series. Cheers.
@lavanyaanbu1234 5 years ago
Very useful video to begin with monitoring.
@justmeandopensource 5 years ago
Hi Lavanya, thanks for following this Kubernetes series. Hope you found it useful. Thanks
@SanjeevKumar-nq8td 2 years ago
Any plans for an update session on this?
@rupeshpaneerselvam2958 5 years ago
Thanks so much. I have deployed it on AWS and it's working!
@justmeandopensource 5 years ago
Hi Rupesh, thanks for watching this video and confirming that it works in an AWS environment. Good to hear, as I haven't tried it there. Thanks.
@vedicbhakt 4 years ago
Thanks Venkat. Please let me know if you get the chance to look at k8s-Zabbix integration. Thanks.
@justmeandopensource 4 years ago
Sure. Will do. Cheers.
@visheshkumarsingh9818 4 years ago
Can you make a tutorial on monitoring external services that are running on our Kubernetes cluster, for example MongoDB, MySQL, etc., and then monitoring them with our Prometheus operator?
@justmeandopensource 4 years ago
Hi Vishesh, thanks for watching. External services running in the Kubernetes cluster? Are they running outside or within the Kubernetes cluster?
@visheshkumarsingh9818 4 years ago
@@justmeandopensource Thanks for the response. Recently I researched a lot about how we can monitor services that are not included by default in the Prometheus operator (like node-exporter, etc.), such as database services (or Blackbox). The approach I found was to create a ServiceMonitor for that particular thing, but that approach didn't work out for me either; I am missing something. Also, it's within the cluster, in a different namespace.
@justmeandopensource 4 years ago
@@visheshkumarsingh9818 I see. There are exporters available for MySQL and MongoDB that help collect database-specific metrics and expose them to Prometheus. I have explored them outside of Kubernetes but haven't had a chance to try them within Kubernetes.
@visheshkumarsingh9818 4 years ago
@@justmeandopensource Yes, I know about the different exporters available, and I installed them via Helm, but what do we need to specify in the Prometheus operator for them? I installed it but am not able to see the metrics.
@justmeandopensource 4 years ago
@@visheshkumarsingh9818 If you installed the exporters, then you must have an endpoint (usually ). You will have to add that endpoint to the Prometheus configuration so Prometheus can scrape the metrics from it. Depending on how you deployed Prometheus, it might just be a matter of updating the configmap and restarting the pod(s).
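For illustration, a static scrape job for such an exporter endpoint might look like this in prometheus.yml; the job name, service DNS name, and port are assumptions (9104 is the usual mysqld_exporter port):

```yaml
scrape_configs:
- job_name: mysql-exporter
  static_configs:
  - targets:
    - mysql-exporter.db.svc.cluster.local:9104   # assumed service name/namespace
```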
@huidey3159 4 years ago
Awesome video, straightforward and very useful, as always. Thanks for sharing, Venkat. Question: after enabling Grafana persistence, the pod fails to start with `Init:CrashLoopBackOff`, and the log shows ```Error from server (BadRequest): container "grafana" in pod "grafana-9f7c7f7ff-8vz9n" is waiting to start: PodInitializing```. Do you have any suggestions? If I change to `service.persistence.enabled=false` it works fine, but then there is no persistence... thanks in advance.
@justmeandopensource 4 years ago
Hi Huide, thanks for watching. So without persistence it works fine? You need to sort out your persistent volume provisioning first if you need persistence.
@ramakanthsri183 3 years ago
High five, boss :) Great video.
@justmeandopensource 3 years ago
Thanks for watching. Cheers.
@Channel_test12 5 years ago
Thank you so much for sharing this! Could you please also do a session on Prometheus Alertmanager and its integration with Slack? I am using Helm to install stable/prometheus-operator and don't understand how to update the rules for Alertmanager, and also, if possible, how to trigger notifications for only a few alerts. Thanks!
@justmeandopensource 5 years ago
Hi Pooja, thanks for watching this video. I will try and play with these concepts and if I get anywhere, I will definitely do a video on it.
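For anyone exploring this in the meantime, a minimal Alertmanager route to Slack looks roughly like the following; the webhook URL and channel are placeholders:

```yaml
route:
  receiver: slack-notifications
receivers:
- name: slack-notifications
  slack_configs:
  - api_url: https://hooks.slack.com/services/XXX/YYY/ZZZ   # placeholder webhook
    channel: '#alerts'
    send_resolved: true
```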
@sandeepmishra2 4 years ago
Thank you so much for sharing. Nicely explained.
@justmeandopensource 4 years ago
Hi Sandeep, thanks for watching. Cheers.
@Yesdin007 1 year ago
For people who are stuck creating the NFS share: install nfs-kernel-server on the machine you want to use as the NFS server. Then create the path /srv/nfs/kubedata and add this line to /etc/exports: "/srv/nfs/kubedata *(rw,sync,no_subtree_check)". This allows all machines to connect to /srv/nfs/kubedata; if you want to authorise only certain hosts, replace * with their IP addresses in /etc/exports. Once done, reload the service with sudo systemctl reload nfs-kernel-server, and your pod will then go into Running state.
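The steps above can be sketched as commands. The export line is written to a temp file here so the sketch is safe to run anywhere; the root-only steps for the real NFS server are shown as comments:

```shell
# Build the /etc/exports entry described above (path and options from the comment).
EXPORT_DIR=/srv/nfs/kubedata
OPTS="rw,sync,no_subtree_check"
echo "${EXPORT_DIR} *(${OPTS})" > /tmp/exports.sample
cat /tmp/exports.sample
# On the actual NFS server, as root, you would then:
#   apt install nfs-kernel-server            # Debian/Ubuntu
#   mkdir -p /srv/nfs/kubedata
#   append the line above to /etc/exports    # replace * with client IPs to restrict access
#   systemctl reload nfs-kernel-server
```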
@nagendrareddybandi1710 4 years ago
Hi Sir, thanks for this video. It's very nice and excellent.
@justmeandopensource 4 years ago
Hi Nagendra, thanks for watching.
@lavanyaanbu1234 5 years ago
Hi, I have a query. I have installed Jenkins, Grafana, Prometheus and Spinnaker, each with a dynamically provisioned Persistent Volume, using Helm charts. I am trying to define a disaster recovery plan for this. How do I back up all the resources inside the cluster?
@justmeandopensource 5 years ago
Hi Lavanya, backup and recovery is another topic on my list. I haven't explored the options yet, but the link below looks promising. github.com/heptio/velero Thanks, Venkat
@swarajgupta3087 4 years ago
Hello Venkat, I want to set up Prometheus and Grafana on machines that don't have internet connectivity. I cannot use Helm as these are bare-metal machines, but they do have a Kubernetes cluster available. How could I set up Prometheus/Grafana in that case? Thanks for everything!
@justmeandopensource 4 years ago
Hi Swaraj, thanks for watching.
> I can not use Helm as these are bare metal machines
Helm can be used on any machine.
> Have kubernetes cluster on them
How are you running Kubernetes clusters on machines that don't have internet access? They need to download the Docker images, right?
@pradeepbhuyan 4 years ago
It's a very nice tutorial on using Helm, Prometheus and Grafana. Do you have any PDF documentation or a Git link so I can practice? Please provide a link to practice this lab. Thanks again.
@justmeandopensource 4 years ago
Hi Pradeep, thanks for watching. I don't have any documentation for this video, but generally you can find most of the stuff in my GitHub repo: github.com/justmeandopensource/kubernetes
@chytrak4060 4 years ago
Very good explanation.
@justmeandopensource 4 years ago
Hi Chytra, thanks for watching.
@magrag1987 3 years ago
@Just me and Opensource It was a wonderful video, thank you. Can you make a video on getting metrics from a database that is not in the cluster, covering how the exporter works and such? Thank you.
@vedicbhakt 4 years ago
Hi Venkat, I really appreciate the effort in your videos, especially for bare-metal clusters. I have Zabbix already running on my on-premise server and a Kubernetes cluster which is also running on-premise. Now I want to integrate my existing Zabbix with my K8s cluster. Please let me know about its feasibility. Thanks in advance.
@justmeandopensource 4 years ago
Hi, thanks for watching. I have no experience using Zabbix monitoring, so I am afraid I will have to spend some time exploring the options. Cheers.
@claudiogarcia7557 3 years ago
Hello Venkat, excellent tutorial videos as usual. Venkat, can you make a video teaching Prometheus with Cortex and S3? Thanks a lot, mister.
@justmeandopensource 3 years ago
Thanks for watching. I will see if I can do that. Cheers.
@shubhamagarwal5566 4 years ago
Hi Venkat, all the tutorials are brilliantly explained. I was wondering how to set up the Alertmanager, as right now it just shows "no alert groups found". Thanks.
@justmeandopensource 4 years ago
Hi Shubham, thanks for watching. I haven't gone in depth into configuring alerts. You will have to set up rules in Prometheus and then point it at Alertmanager, and configure Alertmanager as to how it alerts. If I get a chance, I will explore this. Cheers.
@lazybongguy 5 years ago
Hey Venkat, great video. Could you also make a video on the Prometheus operator? Also show how to add scrape targets in both cases, Prometheus (static and service discovery) and Prometheus operator (ServiceMonitors), and if possible how to modify/add alerting rules in Prometheus.
@justmeandopensource 5 years ago
Hi Ashish, thanks for watching this video. I will try my best to do it. Thanks.
@lemont9061 4 years ago
Please do a video with AppD.
@justmeandopensource 4 years ago
Hi Sangeeta, thanks for watching. I will see if I can look into that. I primarily focus on open source products though.
@lemont9061 4 years ago
Thanks for the reply. Can you suggest a GemFire (caching) learning video?
@zaheerhussain5311 4 years ago
Hi, any video on the Prometheus operator with dynamic persistent volumes using NFS? Regards, Zaheer
@justmeandopensource 4 years ago
Hi Zaheer, thanks for watching. I will add this to my list. Cheers.
@zaheerhussain5311 4 years ago
@@justmeandopensource thanks
@justmeandopensource 4 years ago
@@zaheerhussain5311 You are welcome
@apitest6274 4 years ago
Hi, thanks for your great tutorial. I'm facing a problem: at 14:07, after I run "helm install ...", my pod/prometheus-alertmanager and pod/prometheus-server are stuck in Pending and can't start. Do you have any idea?
@apitest6274 4 years ago
Ah, I found that I hadn't installed the NFS server, so that's why.
@justmeandopensource 4 years ago
@@apitest6274 Thanks for watching and glad that you managed to resolve the issue. Cheers.
@pcsridharbe 1 year ago
Hi Venkat, there is no init in Helm version 3. Can you guide us on how to install tiller with Helm version 3?
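A note for this question: Helm 3 removed tiller entirely, so nothing needs installing in the cluster. The chart also moved from the deprecated stable repo to the prometheus-community repo, so a sketch of the tiller-less flow looks like the following; the repo and chart names are the current upstream ones, but verify them against the chart's own documentation before relying on this:

```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
kubectl create namespace prometheus                  # Helm 3 no longer creates it for you
helm show values prometheus-community/prometheus > /tmp/prometheus.values
helm install prometheus prometheus-community/prometheus \
  --namespace prometheus --values /tmp/prometheus.values
```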
@kishoremummaleti1791 3 years ago
Is it possible to manually install Metricbeat into the cluster for monitoring in Kibana?
@justmeandopensource 3 years ago
Not sure what you mean. Can you explain with a bit more context?
@joshuawilliams9518 5 years ago
Nice work. I want to ask a question: what do I need to change in the Prometheus values if I want to use Ingress?
@justmeandopensource 5 years ago
Hi Joshua, thanks for watching this video. In this video, I exposed the Prometheus service as a NodePort service. You can also use ingress, which is a bit more involved. First get an ingress controller deployed in your cluster, with haproxy proxying requests to the worker nodes. I am not sure if you have watched my Nginx ingress video; if not, please watch and follow all the steps in it: kzbin.info/www/bejne/mZnaoJmvfNdrZsU
Now you have an ingress controller in your cluster. In the prometheus.values file, leave the service type as ClusterIP; don't change it to NodePort like I did. Then, under the Prometheus server section, enable the ingress:
ingress:
  ## If true, Prometheus server Ingress will be created
  enabled: true
A few lines down, set the DNS name you want to use to access your application. Below I have used prometheus.example.com:
## Prometheus server Ingress hostnames with optional path
## Must be provided if Ingress is enabled
hosts:
  - prometheus.example.com
Now install Prometheus with this values file as usual. This creates the ingress resource for you automatically:
$ kubectl -n prometheus get ingress prometheus-server
Then add an entry for prometheus.example.com to the /etc/hosts file on your local workstation. Now when you visit prometheus.example.com, it will hit your haproxy, which forwards the request to one of your worker nodes, and the ingress controller running on that node forwards the request to the Prometheus service. Hope this makes sense. Thanks
@MudassirAlics 5 years ago
Hello, do the steps shown in this video apply to Azure Kubernetes Service as well? How similar or different is it compared to what you have shown? Thanks
@justmeandopensource 5 years ago
Hi Mudassir, thanks for watching this video. Yes, you can follow the same steps, but since you are using a managed Kubernetes solution with Azure, you don't have to set up dynamic NFS provisioning for volumes. You can use AzureDisk as persistent volumes; you will have to create a storage class to use AzureDisk. I then exposed Grafana as a NodePort service. You can do that too, but you will need to allow the NodePort in the firewall for the worker nodes. Or, since you are in the cloud, you can simply use a LoadBalancer type service. Otherwise the steps are all the same. Thanks.
@nah0221 4 years ago
Brilliant, thanks Venkat!
@justmeandopensource 4 years ago
Hi Nur, thanks for watching. Cheers.
@avinashnarisetty7923 5 years ago
Hello Venkat, I have followed your video but I got stuck at the NFS dynamic provisioner. I couldn't get it working; could you please help me?
@justmeandopensource 5 years ago
Hi Avinash, thanks for watching. What error do you get exactly? I have done separate videos on NFS provisioning: kzbin.info/www/bejne/d5LZn4SwjKmHe80 kzbin.info/www/bejne/qqCUZaqjg9KFeas You will have to first make sure that your worker nodes can mount the NFS share successfully. Cheers.
@ramallways6321 1 year ago
Hi bro, I've tried the ingress rule with path-based routing rather than hostname-based. When checking in Grafana, I only see metrics for the hostname-based ingress of the controller that exposes /metrics to Prometheus; I can't see metrics for the path-based ingress. Can you give me an idea about this? I need to see path-based routing metrics as well.
@manikandans8808 5 years ago
Hi Venkat. It works superbly, but I can't get it to take the values file. I tried many times but it's not picking up the parameters from it, and for Grafana it does not claim the persistent volume. The pods get deployed with the default configuration.
@justmeandopensource 5 years ago
Hi Mani, thanks for watching this video. So your issue is that you can't install Prometheus using Helm with a custom values file? What error do you see? Or is it completely ignoring the changes in your values file and deploying with the default configuration? The values file you download for Prometheus is a huge one with many configuration options for lots of components; if you had incorrectly updated a different component in the file, you wouldn't see your change when deployed. So please pay attention when editing the values file. Thanks, Venkat
@manikandans8808 5 years ago
@@justmeandopensource it's ignoring the values file. For grafana I tried many times but it's not working.
@justmeandopensource 5 years ago
@@manikandans8808 I would suggest making one change at a time and checking whether it has worked. The changes I made are to the service (type from LoadBalancer to NodePort, plus the nodePort value) and the persistent volume size. Make sure to edit the right section. Play with it and I'm sure you will eventually get it working. Do "helm delete prometheus --purge" between tests.
@Tshadowburn 5 years ago
Hi Venkat :) Sorry to bother you again. I'm trying to set up Prometheus too, but since I'm using Helm 3 I don't need tiller now. The thing is, my prometheus-1575459249-server and prometheus-1575459249-alertmanager pods don't start; when I describe them I see: "pod has unbound immediate PersistentVolumeClaims". Thank you if you have any info on that.
@justmeandopensource 5 years ago
Hi, don't be sorry, please ask questions; it helps me as well. In your case all the pods are pending because they can't get persistent volumes. Did you install dynamic NFS provisioning in your cluster? I have done a video on that: kzbin.info/www/bejne/d5LZn4SwjKmHe80 Once you have installed dynamic NFS provisioning, your persistent volume claims will be able to get persistent volumes automatically from NFS. In the Prometheus and Grafana values files, you will have to uncomment and specify the storage class name under the persistence section. If you don't have or don't want dynamic volume provisioning, you can disable persistence in the Prometheus and Grafana values files by setting the option to false. Thanks.
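For reference, the relevant persistence knobs in the stable/prometheus values file look roughly like this; "nfs-client" is an assumed storage class name from the dynamic NFS provisioner setup:

```yaml
server:
  persistentVolume:
    enabled: true            # set to false to run without persistence
    storageClass: nfs-client # assumed name of your dynamic provisioner's class
    size: 8Gi
```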
@Tshadowburn 5 years ago
@@justmeandopensource Thank you, I will try that and let you know if I manage to do it.
@justmeandopensource 5 years ago
@@Tshadowburn Cool.
@tekconstructors 3 years ago
so "figlet" made it as a banner app. Why did "banner" not make it? Just curious.
@justmeandopensource 3 years ago
I wasn't aware of "banner" to be honest.
@leagueoflegendswildriftnep2236 3 years ago
I'm having a hard time setting up email notifications. I tried to go inside the pod, but I do not have permission to edit /usr/share/grafana.
@_siva_polisetty 5 years ago
Hi Venkat, I have a few doubts about service accounts, Helm and tiller. 1. How do we know we need to create a service account with a specific name, for example nfs-client-provisioner in this case, or tiller in the Helm case? 2. I watched your Helm video; there you mentioned that when you run "helm install jenkins" it checks the cluster name from the .kube/config file and deploys there. But what if I have a multi-cluster config file, how do I pick a specific cluster to deploy to? Could you please help me with this.
@justmeandopensource 5 years ago
Hi Siva, I came to know that we need to create service accounts by reading the documentation; the Helm documentation talks about the tiller service account. If you have multiple clusters configured in your kube config file, you will have to choose the context (cluster and namespace) before using Helm from the command line. You can use kubectl to check which cluster you are connected to and then use Helm. Or you can have multiple kube config files, one for each cluster, and export the KUBECONFIG environment variable. Thanks
@sohailahmedeasygoing 5 years ago
Explained very well. Thank you very much.
@justmeandopensource 5 years ago
Hi Sohail, thanks for watching. Cheers.
@oraculotube 2 years ago
How can we send all the k8s metrics to an external Ubuntu/Prometheus/Grafana instance?
@nareshpandian1321 5 years ago
Hi Venkat. I think Rancher is better than Prometheus and Grafana, because Rancher can do all the activities in the cluster. I have seen your Rancher videos too; they are fantastic, and it also monitors all the events. Which one is better? I guess it's Rancher (dashboard). Please correct me if I'm wrong.
@justmeandopensource 5 years ago
Hi Naresh, thanks for watching this video. Actually Rancher and Prometheus/Grafana are meant for two different purposes, so you can't compare one with the other. Rancher is for managing one or many clusters in a single interface, and in Rancher you can create any resource in your cluster, like a Deployment or StatefulSet. Prometheus is a metrics engine that scrapes various metrics from the different resources in your cluster. Grafana is for visualizing those metrics and pulls the data stored in Prometheus; you see CPU, memory, IO, network utilization and various application-specific metrics. You only have limited monitoring capability in Rancher. Hope you understood the fundamental difference. I would use both these tools. Thanks.
@rajeshbastia8502 5 years ago
Hi Venkat, I'm struggling to install it with Helm v3 and the newly updated Prometheus values file. Please provide the command to install it in the namespace, and also let me know where exactly to modify the updated Prometheus values file.
@justmeandopensource 5 years ago
Hi Rajesh, thanks for watching. I know things have changed slightly since I recorded this video. Here are the commands for Helm 3.
First check out the values file. I used "helm inspect" for Helm 2; with Helm 3 you use "helm show":
$ helm show values stable/prometheus > /tmp/prometheus.values
Update the values file as per your need; you can disable persistence or change the size of the persistent volume. Finally, install it. With Helm 2 I used --name to specify the release name; with Helm 3, --name is deprecated. More importantly, the namespace has to be created first, as Helm 3 doesn't create it for you:
$ helm install prometheus stable/prometheus --namespace prometheus --values /tmp/prometheus.values
Do the same for Grafana. Cheers.
@rajeshbastia8502 5 years ago
@@justmeandopensource Thanks for the reply and the commands, Venkat. Another thing I wanted to know: the Prometheus YAML file changed with the new update. Please let me know which section of the file needs to be modified.
@justmeandopensource 5 years ago
@@rajeshbastia8502 I don't think a lot has changed. In this video, I only updated the service type to NodePort and set the nodePort value to 32322; persistence was already enabled. Just modify the service type from ClusterIP to NodePort (line number 273), uncomment line number 272 and set the nodePort. That's it. Cheers.
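Those edits correspond to this fragment of the stable/prometheus values file; the exact line numbers drift between chart versions, so search for the service block under the server section rather than relying on them:

```yaml
server:
  service:
    type: NodePort     # was ClusterIP
    nodePort: 32322
```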
@sherifakmal1108 3 years ago
Hi Venkat, while installing the Helm prometheus-operator, the kube-state-metrics pod is not running. The error shows readiness and liveness probes failing with connection refused, so the pod is unhealthy and not created. Please help me.
@Oswee 5 years ago
Would be really great to see an HAProxy setup and good practices. I tried Traefik; it works great and the setup went smoothly, but it is just an L3 proxy. Want to try HAProxy.
@yiphui9684 5 years ago
ingress-nginx-controller is great
@gouterelo 5 years ago
@@yiphui9684 With MetalLB it's a lifesaver, using SSL/TLS in the controller!
@HeyMani92 4 years ago
Hi Venkat, I am not able to create NFS volumes on Ubuntu 16.04. Could you please help me with this?
@justmeandopensource 4 years ago
Hi, thanks for watching. Where are you stuck? Could you give more details?
@HeyMani92 4 years ago
@@justmeandopensource In the background, persistent volumes are not created because the NFS configuration is not set up properly. I have already gone through your dynamic NFS provisioning video, but I still get the issue when I mount this path on another server. This is the command: mount -t nfs :/srv/nfs/kubedata /mnt
@Canada_couple_vlogs 2 years ago
Hi, thanks for the informative video. Can we monitor websites attached to Kubernetes in Grafana?
@hereforyouwhat 2 years ago
Hi, what is the difference between Prometheus+Grafana and the Kubernetes Dashboard if we want to monitor a k8s cluster? In which scenarios should Prometheus+Grafana be used over the Kubernetes Dashboard?
@alexanderhill2915 5 years ago
Hello Venkat, everything seems to have gone well; all my Prometheus resources are up and I can access the Prometheus dashboard. But it doesn't seem to get any metrics. Any ideas?
@justmeandopensource 5 years ago
Hi Alexander, is it only a few metrics that are not showing, or do you not see any metrics at all? Thanks.
@alexanderhill2915 5 years ago
@@justmeandopensource Sorry for the late reply, didn't see that you replied. I don't see any metrics at all.
@justmeandopensource 5 years ago
@@alexanderhill2915 Thanks for the comment. It was recorded a while ago, so I need to run through this again in my environment and see if this video is still relevant or if it needs any tweaks. I will test it out today and get back to you. Thanks.
@justmeandopensource 5 years ago
Hi Alexander, I just followed this video step by step and got everything exactly as shown in the video:
* Deployed a Kubernetes cluster with 1 master and 2 worker nodes
* Deployed Helm & Tiller
* Deployed the dynamic NFS provisioner
* Installed Prometheus using the Helm chart
* Installed Grafana using the Helm chart
* Added the Grafana dashboard (ID: 8588)
I can see CPU, memory and network utilization in Grafana; the only metric I couldn't see is disk I/O. Could you please double-check each step to find out exactly where the problem is? Can you check whether you see the metrics in Prometheus, and whether they are only missing in Grafana? Thanks
@alexanderhill2915 5 years ago
@@justmeandopensource OK, let me start again from scratch in case I missed a step.
@OhDearBabajan 4 years ago
Great video! Thank you for putting it together. Just one question: would you say one NFS server for persistent storage is sufficient to handle all the writes? Does it cache fast enough for the entire Prometheus cluster? I'm sure it's a good start nonetheless. Also, now that Helm 3 doesn't have tiller, does this demo still work?
@justmeandopensource 4 years ago
Hi Dimitri, thanks for watching. This was just for demonstration purposes; for a production use case you would use something more robust and concrete.
@OhDearBabajan 4 years ago
@@justmeandopensource Got it! So from a Kubernetes standpoint, would that entail multiple NFS servers?
@justmeandopensource 4 years ago
@@OhDearBabajan You can use any number of NFS servers as your backend storage.
@Pallikkoodam94 5 years ago
Hi Venkat, thank you for sharing this video. It would be great if you could explain the details of how the Prometheus namespace gets metrics from the other namespaces. Thank you.
@justmeandopensource 5 years ago
Hi Ajeesh, thanks for watching this video. Most of my videos are getting-started videos that touch on the basics; as I am covering a breadth of different technologies, I can't go too deep into any one topic. I will see if I can do a video on this in detail. Cheers.
@1sbollap 5 years ago
If I change the namespace from default to something else, I see an error: "Error creating: pods "nfs-client-provisioner-67679d4fff-" is forbidden: error looking up service account default/nfs-client-provisioner: serviceaccount "nfs-client-provisioner" not found" and the pod does not start. So I am wondering, does this solution only work with the default namespace? If I want to use it, does the nfs-client pod need to be in the same namespace as all my other pods?
@justmeandopensource5 жыл бұрын
That shouldn't be a problem. You can deploy nfs-client-provisioner in any namespace and any deployment in any namespace can make use of it. When creating persistent volume claim, you refer to it by storage class. Having said that, I haven't tried cross namespace persistent volumes. Thanks, Venkat
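As a sketch of what that looks like in practice, a PVC in any namespace can reference the provisioner's StorageClass by name, since StorageClasses are cluster-scoped (the class name managed-nfs-storage and the namespace myapp below are assumptions for illustration):

```shell
# Hypothetical example: a PVC in a non-default namespace that uses the
# cluster-scoped storage class created by the NFS provisioner
kubectl create namespace myapp
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-pvc
  namespace: myapp
spec:
  storageClassName: managed-nfs-storage   # assumed class name from the provisioner setup
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
```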
@sebastienlevallois3935 жыл бұрын
That s just great man, thank you so much
@justmeandopensource5 жыл бұрын
Hi Julien, thanks for watching and taking time to comment/appreciate. Cheers.
@imranrazakhan25694 жыл бұрын
In the Grafana datasource, why did you use the Prometheus NodePort? As both are on the cluster, you could use the Prometheus ClusterIP and not expose Prometheus at all.
@justmeandopensource4 жыл бұрын
Hi Imran, thanks for watching. Yes you are right. We could have used clusterIP which is sufficient. People also directly query Prometheus so thought of exposing that too.
@imamrisnandar45725 жыл бұрын
Hi Venkat, thanks for an awesome video. I'm trying to install it, but the alertmanager pod status is CrashLoopBackOff. What do I have to check? And if alertmanager is not installed, what happens?
@imamrisnandar45725 жыл бұрын
Hi Venkat, sorry to add one more question: for the Grafana Prometheus data source, can we add the URL via an ingress connection? Both of mine use ingress, but it's not connecting. Many thanks.
@justmeandopensource5 жыл бұрын
@@imamrisnandar4572 I just tried following this video in my local environment and it worked exactly as shown. The Alertmanager pod was running fine. Alertmanager is a separate microservice which sends notifications for alerts that you define in Prometheus. In Grafana, you can add an ingress URL for your Prometheus data source. But first check whether you can access Prometheus using the ingress from your browser. I used NodePort in this video. Cheers.
@imamrisnandar45725 жыл бұрын
Hi Venkat, thanks for your reply. I'm using PKS, and the cluster can't connect directly to the internet, so the images had to be pushed to our registry (Harbor) and the image values edited to point to my registry (using the show values command in Helm 3). That's my case. But the alertmanager pod status is still CrashLoopBackOff.
@justmeandopensource5 жыл бұрын
@@imamrisnandar4572 Hmm. Did you find anything in the logs for alertmanager pod?
@imamrisnandar45725 жыл бұрын
@@justmeandopensource My CLI: kubectl -n prometheus logs prometheus-alertmanager-58d77b6cfb-dxkm9 -c prometheus-alertmanager. Error: level=error ts=2019-12-03T15:52:29.954Z caller=main.go:353 msg="failed to determine external URL" err="\"/\": invalid \"\" scheme, only 'http' and 'https' are supported" The rest of the log lines are info level only.
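That "failed to determine external URL" message usually means Alertmanager was started with an external URL that is empty or has no http/https scheme. In the stable/prometheus chart this is driven by the alertmanager.baseURL value; a hedged sketch of overriding it (the value name is an assumption from that chart, and the URL is a placeholder):

```shell
# Assumed fix: give Alertmanager a full http:// external URL instead of "/"
helm upgrade prometheus stable/prometheus \
  --namespace prometheus \
  --set alertmanager.baseURL=http://localhost:9093
```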
@vykuntarao71794 жыл бұрын
Which terminal are you using?
@justmeandopensource4 жыл бұрын
Hi, thanks for watching. I used gnome terminal with zsh and with some plugins. I have explained my terminal setup in the below video. kzbin.info/www/bejne/qaCkqIinZ8iEfrM Cheers.
@sudheshpn5 жыл бұрын
Hi Venkat. I see the prometheus-alertmanager pod is unable to mount the volume at the mount path inside the pod. PVs and PVCs are in Bound state, and the NFS provisioner is running too. I tried creating a sample pod with the same persistentVolumeClaim name as the one created in my namespace, but I see the same error. Is it some kind of bug? Warning FailedMount 4m47s kubelet, k8s-slave MountVolume.SetUp failed for volume "pvc-8e04833e-9746-11e9-9001-42010a800004" : mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/8e3eab60-9746-11e9-9001-42010a800004/volumes/kubernetes.io~nfs/pvc-8e04833e-9746-11e9-9001-42010a800004 --scope -- mount -t nfs 10.128.0.5:/monitoring/monitoring-prometheus-alertmanager-pvc-8e04833e-9746-11e9-9001-42010a800004 /var/lib/kubelet/pods/8e3eab60-9746-11e9-9001-42010a800004/volumes/kubernetes.io~nfs/pvc-8e04833e-9746-11e9-9001-42010a800004 Output: Running scope as unit run-r3b91a34e6f6b4e9ca46ba3bd0e51abb2.scope
@sudheshpn5 жыл бұрын
From both the master and slave nodes I am able to successfully mount the NFS mount point.
@justmeandopensource5 жыл бұрын
What version of Kubernetes cluster you are running?
@sudheshpn5 жыл бұрын
@@justmeandopensource v1.13.4 running on GCP. Do I need to pass --cloud-provider in my kubelet configuration file?
@justmeandopensource5 жыл бұрын
All my videos were done on bare-metal (on-prem). I haven't tested it on any cloud platforms. If you are using a cloud provider, it's easier to use their persistent volumes (gcePersistentDisk) instead of the nfs-provisioner. It could also be something like pod networking. I am afraid I have little experience playing with instances in Google Cloud. Could you post the full output (kubectl describe) of the sample pod with the PVC? If it's large, you can paste it at pastebin.com Thanks,
@sudheshpn5 жыл бұрын
@@justmeandopensource Issue is resolved by setting nfs-client-root mountPath to /persistentvolumes which is the default setting in deployment.yaml.
@sudheshpn5 жыл бұрын
I get the below error while deploying Prometheus. I created a ClusterRole and ClusterRoleBinding for my monitoring service account. helm install stable/prometheus --name prometheus --values values.yaml --namespace monitoring --tiller-namespace monitoring Error: release prometheus failed: namespaces "monitoring" is forbidden: User "system:serviceaccount:monitoring:tiller" cannot get resource "namespaces" in API group "" in the namespace "monitoring"
@justmeandopensource5 жыл бұрын
Hi Sudhesh, thanks for watching this video. By default when you initialize helm, it will create the service account tiller and will deploy the tiller component in the kube-system namespace. I see you have deployed tiller in a separate namespace called "monitoring". First step in installing tiller component in your cluster is to create a service account and give it cluster admin role so that the tiller component can deploy resources using helm. May I know how you created this tiller service account and how you deployed the clusterrole and clusterrolebinding? Thanks.
@sudheshpn5 жыл бұрын
@@justmeandopensource I created a ServiceAccount (tiller) in my monitoring namespace. I attached a ClusterRoleBinding (to the cluster-admin ClusterRole) for the tiller service account in my monitoring namespace. I initialized tiller using --tiller-namespace monitoring. Is this the right way to do it in production?
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: monitoring
spec: {}
status: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: tiller
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: prometheus-monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: monitoring
@justmeandopensource5 жыл бұрын
Cool. Glad that you got it resolved. Good work.
@justmeandopensource5 жыл бұрын
Nothing wrong in creating tiller service account in a separate namespace. As long as the service account has the clusterrole of cluster admin and corresponding clusterrolebinding, it should work. Thanks.
@ratnakarreddy16275 жыл бұрын
Hello Venkat, does a PVC get created if the StorageClass was created in a different namespace?
@justmeandopensource5 жыл бұрын
I think that shouldn't be a problem, although I haven't tested. To give you a solid answer, I am going to test that now. Creating a storage class in default namespace and try to create a pvc in a different namespace. Will get back to you shortly. Thanks
@justmeandopensource5 жыл бұрын
Hi Ratnakar, I just verified, and it doesn't matter where you create your PVC. StorageClasses are cluster-scoped, not namespaced. I just followed my dynamic NFS provisioner video (kzbin.info/www/bejne/d5LZn4SwjKmHe80) and then created a PVC in a different namespace. The persistent volume got created automatically without any problem. Hope this clarifies your doubts. Thanks.
@ratnakarreddy16275 жыл бұрын
Hello Venkat, I have made some mistakes while creating PVC, due to that I was not able to create a PVC in Prometheus namespace.
@justmeandopensource5 жыл бұрын
@@ratnakarreddy1627 Cool.
@ratnakarreddy16275 жыл бұрын
Hello Venkat, could you please tell me in which file we need to make changes if we would like to send alerts (e.g. by email)?
@justmeandopensource5 жыл бұрын
Hi Ratnakar, I haven't explored the alerting feature in Prometheus. It's done by configuring AlertManager and configuring Prometheus to send events to AlertManager. You can check the below link for more details. prometheus.io/docs/alerting/overview/ And thanks for suggesting this topic. I will add it to my list and have a play with it, and possibly post a video later. I have videos scheduled for the next 4 weeks, so it will be after a month before I record it. Thanks.
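For a rough idea, in the stable/prometheus chart the Alertmanager config lives under the alertmanagerFiles.alertmanager.yml key in the values file; a hedged sketch of a minimal email receiver (the key name is an assumption from that chart, and all addresses/hosts are placeholders):

```shell
# Hypothetical values-file fragment for email alerting (written via heredoc)
cat <<EOF > alertmanager-values.yaml
alertmanagerFiles:
  alertmanager.yml:
    route:
      receiver: email-alerts
    receivers:
    - name: email-alerts
      email_configs:
      - to: you@example.com                # placeholder recipient
        from: alerts@example.com           # placeholder sender
        smarthost: smtp.example.com:587    # placeholder SMTP relay
        auth_username: alerts@example.com
        auth_password: changeme
EOF
# Then apply it with: helm upgrade prometheus stable/prometheus -f alertmanager-values.yaml
```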
@anithak15854 жыл бұрын
Hi Venkat, the session is clear and good. I have a few doubts on exposing NodePorts for Prometheus. Should I modify the NodePort and IP in the service YAML, or keep the default configuration?
@justmeandopensource4 жыл бұрын
HI Anitha, thanks for watching. Can you please explain your question in a bit more detail?
@rajeevghosh20003 жыл бұрын
Thanks for the great video Venkat. Can I ask a quick question? Does Prometheus pull the metrics directly from the containers/pods? Does it not need cAdvisor? Also, if I understood correctly, Prometheus pulls the metrics using HTTP. Does that mean every container should listen for HTTP requests from Prometheus?
@kasimshaik5 жыл бұрын
Hi Venkat, could you create a video on pod security policy? I have been studying PSP (pod security policy) and need to clarify a few queries on it.
@kasimshaik5 жыл бұрын
Do you have expertise in that area (PSP)?
@justmeandopensource5 жыл бұрын
Hi Kasim, Thanks for watching this video. I haven't looked at pod security policy yet, but I should be able to test it on my test cluster. Let me know what your query is. Thanks
@kasimshaik5 жыл бұрын
@@justmeandopensource May I have your personal e-mail address. So that, I can send e-mail with complete details what have had tried with PSP. Here is my ID kasim123@gamil.com
@piby18024 жыл бұрын
Hi Venkat! Your videos are of great help. Thank you for putting so much effort in for us :) I am particularly struggling with the storage aspect of Kubernetes these days. I am running my cluster on VirtualBox using Vagrant. I tried to use VirtualBox synced folders for storing data, but since they don't support fsync, many applications like MongoDB don't run properly on them. I finally resorted to attaching VDI disks in my Vagrantfiles and running Ceph on k8s using Rook. I am currently using Rook and Ceph for block and (S3-like) object storage. It would be great if we could get a video on Rook and Ceph and a comparison of Ceph, NFS and EdgeFS ^^ Thx!
@justmeandopensource4 жыл бұрын
Hi piby, thanks for watching. I had been using NFS solution for dynamic volume provisioning. And since many viewers asked for Gluster FS, I started a different series to cover basic Gluster FS concepts. I will soon be recording videos for k8s with gluster as storage backend. kzbin.info/aero/PL34sAs7_26wOwCvth-EjdgGsHpTwbulWq Few users also asked about ceph/rook which I am yet to explore. Will definitely do videos on that as well. Cheers.
@1sbollap5 жыл бұрын
Can you please tell me your GitHub URL so I can download the k8s resources?
@justmeandopensource5 жыл бұрын
Hi, thanks for watching this video. The GitHub URL for my Kubernetes repo is in the description: https://github.com/justmeandopensource/kubernetes Thanks, Venkat
@MyYuichan4 жыл бұрын
Hi Venkat, can you explain how we can access Grafana not from the node host, but from my own machine?
@justmeandopensource4 жыл бұрын
Hi Shidiq, thanks for watching. The usual way would be to expose the grafana service as NodePort or LoadBalancer or use ingress to access it. If you want to access it from your machine, you can use kubectl port-forward command.
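A quick sketch of the port-forward approach mentioned above (the namespace and service name assume a default helm release called grafana; adjust to match `kubectl get svc`):

```shell
# Forward local port 3000 to the Grafana service inside the cluster,
# then browse to http://localhost:3000 from your machine
kubectl -n grafana port-forward svc/grafana 3000:80
```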
@ramdesi14 жыл бұрын
Hi Venkat, Thanks for the detailed explanation. I have a question here. We have many AKS clusters in our environment. Should I install Prometheus & Grafana on each cluster and maintain multiple Grafana consoles? Also, I wanted to know about the security-related concerns.
@justmeandopensource4 жыл бұрын
Hi Ramanan, thanks for watching. You can take either of the two approaches. Each has its own benefits. Either you can install the monitoring stack prometheus/grafana on each cluster or you can have a separate cluster or a central monitoring infrastructure and collect metrics from all your clusters so that you have a single place to go to.
@srujanareddy86233 жыл бұрын
Hi Venkat, I'm a fresher, new to DevOps. I have one doubt: what is the difference between (Prometheus, Grafana) and ELK? Both are used for monitoring purposes. What is the difference between monitoring and logging? Can you please help me with this?
@homefootage15 жыл бұрын
Hi Venkat, thanks for the video, it was very helpful as usual 👍🏻 The Prometheus installation worked perfectly, but I'm facing an issue installing Grafana. I'm getting the error below after deployment: Pod Status: Init:CrashLoopBackOff Is it something related to the PersistentVolume? Do you have any idea, sir? Thanks
@justmeandopensource5 жыл бұрын
It can't be persistent volume related. If it can't get a volume, the Grafana pod will be in Pending state. May I know what version of Kubernetes and Helm you are using? And what sort of dynamic storage provisioning are you using? I will do a test run of this video in my environment tomorrow and let you know. Meanwhile, can you check if it works without persistent volumes? Thanks.
@homefootage15 жыл бұрын
@@justmeandopensource It is running fine without persistence. I'm using Helm 2.14.13 and Kubernetes 1.15.3. The PV is working fine with Prometheus; the issue is specifically between Grafana and the PV.
@homefootage15 жыл бұрын
@@justmeandopensource I have followed your video to setup NFS and provisioned the pv, it is running fine with Prometheus
@justmeandopensource5 жыл бұрын
@@homefootage1 I just tested this video completely now and everything is working perfectly fine. Please check the below pastebin link for my testing. pastebin.com/LMWhMLNA
@justmeandopensource5 жыл бұрын
Did you edit the grafana.values and enabled persistent volume? It was set to false by default.
@pallavladekar44665 жыл бұрын
Hey, hi. I'm getting the below error on Grafana's dashboard: "Failed to create dashboard model p.a is not a constructor". I've been searching for a solution but without success.
@justmeandopensource5 жыл бұрын
Hi Pallav, Thanks for watching this video. I just ran through this complete video myself following each step. its working perfectly fine for me, although Grafana UI has changed slightly now compared to when I recorded this video. May I know where exactly you get the problem? Could you give me more details on your issue please. I will see if I am able to reproduce it. Thanks, Venkat
@pallavladekar44665 жыл бұрын
@@justmeandopensource Hi Venkat, thanks for such a quick response. There was an issue with the Grafana helm chart version. I was using chart version 2.3.1 and app version 6.0.2, but I tried with the same version you are using and it's working (though only without a persistent volume). With a persistent volume, I am getting the error "chown: changing ownership of '/var/lib/grafana': Operation not permitted" in Grafana's pod logs.
@justmeandopensource5 жыл бұрын
Hi Pallav, I think it depends on how you configured your dynamic persistent volume. Did you use dynamic nfs provisioner similar to what I did in the video? If so, did you follow all the steps like exporting the nfs share with correct options and setting the share ownership to nfsnobody:nfsnobody? Thanks, Venkat
@pallavladekar44665 жыл бұрын
Thank you very much Venkat. I had missed some NFS options in the /etc/exports file. It's working now. Great video.
@justmeandopensource5 жыл бұрын
Cool.
@cenubit5 жыл бұрын
How do I send Kubernetes metrics to a remote standalone Prometheus?
@justmeandopensource5 жыл бұрын
Hi Girts, thanks for watching this video. I haven't tried connecting k8s cluster to external prometheus server yet. But it looks interesting. I will explore the options and if I get anywhere with it, I will record a video. Cheers.
@amulraj05 жыл бұрын
Did anyone have an issue adding the Prometheus data source into Grafana? I see this error in the developer console: "has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource". Otherwise, can someone provide me with a working version combination of Grafana and Prometheus?
@justmeandopensource5 жыл бұрын
Hi Amul, thanks for watching. I will try this video in my environment and let you know if anything has changed. Cheers.
@justmeandopensource5 жыл бұрын
Hi Amul, I just tested this video step by step and all working exactly as shown in this video.
@amulraj05 жыл бұрын
@@justmeandopensource Hi Venkat, thanks for checking for me. I tried the whole thing again and am strangely getting the same error while adding the Prometheus data source into Grafana. The error in the Grafana UI is "Cannot read property 'status' of undefined" and the Chrome developer console shows "GET 172.42.42.102:32322/api/v1/query?query=1%2B1&time=1573167664.312 net::ERR_ABORTED 404 (Not Found). Access to XMLHttpRequest at '172.42.42.102:32322/api/v1/query?query=1%2B1&time=1573167664.312' from origin '172.42.42.102:32323' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource." But never mind, I will try with the web developers locally here.. :-)
@vandanasharma84615 жыл бұрын
Hi, thanks for this video... Do you have any reference for alerting from Prometheus or Grafana?
@justmeandopensource5 жыл бұрын
Hi Vandana, thanks for watching. You can use AlertManager which is a separate component. prometheus.io/docs/alerting/overview/
@kasimshaik5 жыл бұрын
Hi Venkat, do we really need PVs for Prometheus? Can we opt for the hostPath option instead of a PV? I just wanted to clarify this query; I have not tried it.
@justmeandopensource5 жыл бұрын
Hi Kasim, Thanks for watching this video. Yes, you need some form of persistent storage for the Prometheus pod. Prometheus stores all the metrics collected from various services. If the Prometheus pod crashes and restarts, and you don't have a persistent volume enabled, you will lose all the previously collected metrics. You can use hostPath, and it will be fine as long as the Prometheus pod runs on that worker node. But what if the pod crashes and gets started on another worker node? A hostPath is tied to a particular host. If you really want to use hostPath, you need to make sure that the Prometheus pod always gets started on the same node. You can do this by defining a nodeSelector. Thanks.
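A hedged sketch of the hostPath + nodeSelector approach described above (the node name, image and paths are placeholders, not from the video):

```shell
# Hypothetical pod pinned to one node so its hostPath data survives restarts
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: prometheus-hostpath-demo
spec:
  nodeSelector:
    kubernetes.io/hostname: kworker1      # placeholder node name
  containers:
  - name: prometheus
    image: prom/prometheus
    volumeMounts:
    - name: data
      mountPath: /prometheus
  volumes:
  - name: data
    hostPath:
      path: /data/prometheus              # placeholder directory on that node
      type: DirectoryOrCreate
EOF
```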
@kasimshaik5 жыл бұрын
@@justmeandopensource Hi Venkat, we have an NFS mount path mounted across all worker nodes. We are using the nfs-client option for sharing configmap files.
@justmeandopensource5 жыл бұрын
In that case, it should be fine to use hostPath. If the directory defined in hostPath can be mounted on all worker nodes, then there is absolutely no problem using it. Cheers.
@StefanoCiccolini4 жыл бұрын
Hello, congratulations on the explanation. I wanted to ask why, while installing Prometheus with the values file, I get this error: Error: error unmarshaling JSON: json: cannot unmarshal string into Go value of type map[string]interface I'm doing it all with PowerShell on Windows 10. Thank you
@justmeandopensource4 жыл бұрын
Hi Stefano, thanks for watching. I believe something went wrong with your values file when you updated it: wrongly formatted values, or possibly an indentation error. Try pulling the values file and installing without any changes, then make your changes one by one to find out which line in the values file is causing the issue.
@1sbollap5 жыл бұрын
After I ran helm install stable/prometheus --name prometheus --values prometheus.values --namespace prometheus I saw this error: "pod has unbound immediate PersistentVolumeClaims (repeated 3 times)". Any ideas why? Answer: it worked after I deployed the nfs-provisioner to the default namespace.
@justmeandopensource5 жыл бұрын
This means that the cluster couldn't provision persistent volume for the pvc defined in the helm chart. Are you using any form of dynamic volume provisioning?
@justmeandopensource5 жыл бұрын
That makes sense.
@vudinhdai26385 жыл бұрын
When I ran the command helm install stable/prometheus... I got an error: chart incompatible with tiller v2.14.0-rc.2. Please help me fix this problem :(((
@justmeandopensource5 жыл бұрын
Hi, thanks for watching this video. Please try using latest stable version of helm and not pre-release, alpha or beta release. Thanks
@vudinhdai26385 жыл бұрын
I followed your previous video about getting started with Helm, and I installed from the binary releases. How can I choose a stable version of Helm? I can't see any version numbers, just a tar file to use.
@justmeandopensource5 жыл бұрын
@@vudinhdai2638 Hi, go to official releases page using the below link. github.com/helm/helm/releases In that page, I can see 2.14.0-rc2, 2.14.0-rc1 which are pre-releases. Ignore those and download 2.13.1 which is the latest verified release. Thanks, Venkat
@vudinhdai26385 жыл бұрын
oh! thank you so much!
@justmeandopensource5 жыл бұрын
@@vudinhdai2638 You are welcome.
@pankajmahto23704 жыл бұрын
Hi Venkat, please can you help with the below error? I am getting it while running the below command, using Helm 3 (v3.2.1+gfe51cd1). helm3 install stable/prometheus prometheus --values /tmp/prometheus.values --namespace prometheus Error: failed to download "prometheus" (hint: running `helm repo update` may help) I tried the hint as well but am still unable to install. Thanks in advance :)
@pankajmahto23704 жыл бұрын
Hi friends, please can someone help with this error? I'm stuck on this step. I tried searching Google for help but am still unable to find the cause of the error. I am new to Kubernetes and unable to resolve this alone. Please help.
@justmeandopensource4 жыл бұрын
@@pankajmahto2370 Thanks for watching. I think I spotted the problem. The way you specified the chart and the release name is the wrong way round. What you have is, $ helm3 install stable/prometheus prometheus --values /tmp/prometheus.values --namespace prometheus Try this instead, $ helm3 install prometheus stable/prometheus --values /tmp/prometheus.values --namespace prometheus I hope you have the stable repo in your Helm installation. If not, run the below commands first. $ helm repo add stable https://kubernetes-charts.storage.googleapis.com $ helm repo update
@shoryasingh65663 жыл бұрын
@@justmeandopensource Hi Venkat, I am also using helm 3.6.0 even though I try following the above comments it still shows error stating Error: failed to download "stable/prometheus" (hint: running `helm repo update` may help)
@vishalchauhan93424 жыл бұрын
Pod scheduling is failing with following error :- pod has unbound immediate PersistentVolumeClaims
@systemadministrator81925 жыл бұрын
Hello Venkat, as usual good job. I get N/A for some metrics. Can you recommend how to fix it?
@justmeandopensource5 жыл бұрын
Hi. Thanks for watching this video. Where exactly you get that? I mean at which point in this video? And could you please tell me which metrics you don't get values for? Thanks
@full_hause_59935 жыл бұрын
@@justmeandopensource I used Grafana dashboard 8588; however, Deployment memory, deployment CPU and used cores all show N/A. Thanks
@justmeandopensource5 жыл бұрын
I will try dashboard 8588 later tomorrow and see if I get the same results. It could be that the dashboard's metrics query might be wrong. Have you tried looking at it after some time? Is it never getting updated? Thanks
@full_hause_59935 жыл бұрын
@@justmeandopensource Yes, we tried(
@josephbatish94765 жыл бұрын
Good job mate
@justmeandopensource5 жыл бұрын
Thanks for watching this video Joseph.
@josephbatish94765 жыл бұрын
@@justmeandopensource please do more videos about helm and kubernetes
@justmeandopensource5 жыл бұрын
@@josephbatish9476 I have done a getting started video on helm in Kubernetes. Hope you already watched it. If not here is the link. kzbin.info/www/bejne/foXNZICDj6ppsMk Thanks.
@rahulmalgujar11105 жыл бұрын
Thanks for the video. I am trying to run helm init but am getting this error: "Error: unknown flag: --service-account". Why so?
@justmeandopensource5 жыл бұрын
Hi Rahul, Thanks for watching. You are using Helm v3. In this video, I used helm v2.14. For Helm v3, there is no tiller component to be deployed in your cluster. All you need is helm binary. Cheers.
@rahulmalgujar11105 жыл бұрын
@@justmeandopensource Thanks for your reply. I also read that in one of the documents, but when I try to install anything using helm it says "failed to download". I don't know why it is saying that.
@justmeandopensource5 жыл бұрын
@@rahulmalgujar1110 With Helm v3, there won't be any default repositories. You will have to add a repository so that you can search and pull charts. Check if you have any repos enabled by running the below command. $ helm repo list I believe you don't have any repos. So add and update the repo with the below two commands. $ helm repo add stable https://kubernetes-charts.storage.googleapis.com $ helm repo update
@justmeandopensource5 жыл бұрын
If you are following this video for helm installation steps, bear in mind that the same command I showed in this video won't work with Helm v3. For example, I passed --namespace prometheus to helm install command which will automatically create a namespace. But with Helm v3, you will have to create the namespace manually before installing. Also --name option is deprecated. For example, in Helm v2 helm install --name prometheus stable/prometheus whereas in Helmv v3 helm install prometheus stable/prometheus
@rahulmalgujar11105 жыл бұрын
@@justmeandopensource Thanks for reply, It worked for me.
@knightrider64785 жыл бұрын
Hi Venkat again :) How can I implement Prometheus and Grafana on my standalone k8s cluster based on 2 VPSs, where I have also configured HAProxy and the Nginx ingress? I have tried to enable the ingress resource from the Prometheus values file, but without success. I also didn't change the service to NodePort as you did in the video, because I have the HAProxy load balancer and Nginx ingress, so I want to access it from the internet like this: prometheus.my-domain.com. I'm not so fluent with the .values configuration for Helm charts. Can you clear my path with some advice? Nice video series! Also, I would like to suggest you make some videos on how to deploy k8s and do all the good stuff using VPSs. Thank you and best regards.
@justmeandopensource5 жыл бұрын
Hi Knight, I personally don't enable ingress from the values file. I will leave that as false/disabled in values.yaml and configure ingress myself using Nginx ingress. Thanks.
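A minimal sketch of configuring that ingress yourself, assuming the Nginx ingress controller is installed, the API version matches the k8s 1.15-era clusters discussed here, and the service name matches a default prometheus release (the hostname is the one from the question above; the service name is an assumption):

```shell
# Hypothetical ingress routing prometheus.my-domain.com to the Prometheus service
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: prometheus
  namespace: prometheus
spec:
  rules:
  - host: prometheus.my-domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus-server   # assumed service name from the release
          servicePort: 80
EOF
```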
@realthought22624 жыл бұрын
Hey, hope you are doing well. I was stuck on a problem and my Grafana container kept crashing. I tried the logs and everything else but failed, then I started reading the comments. There was something going on with the NFS server: (no_root_squash) worked. I modified the /etc/exports file, deleted the helm chart and reinstalled it, and kaboom, I can see the amazing Grafana dashboard. Thanks everybody!!
@justmeandopensource4 жыл бұрын
Hi, thanks for sharing your findings.
@pandu2264 жыл бұрын
Hi friend, I'm following your video and I'm getting the below error installing Prometheus: #helm install prometheus stable/prometheus --values /tmp/prometheus.values --namespace prometheus Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.securityContext): unknown field "runAsGroup" in io.k8s.api.core.v1.PodSecurityContext
@pandu2264 жыл бұрын
i'm using helm version 3
@justmeandopensource4 жыл бұрын
@@pandu226 Hi, thanks for watching. Are you using the latest chart version of Prometheus? Can you please check other chart versions of Prometheus to see if you get the same problem? I am trying to find out if the issue is cluster-wide or specific to the Prometheus deployment.
@adonaik8s5 жыл бұрын
nice
@justmeandopensource5 жыл бұрын
Hi Adonai, thanks for watching. Cheers.
@vishalchauhan93424 жыл бұрын
kubectl logs -f pod/prometheus-1595420753-server-68b899667b-b4275 error: a container name must be specified for pod prometheus-1595420753-server-68b899667b-b4275, choose one of: [prometheus-server-configmap-reload prometheus-server]
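That message means the pod has more than one container, so kubectl needs the -c flag to pick one; for example:

```shell
# Pick one of the container names listed in the error message
kubectl -n prometheus logs -f prometheus-1595420753-server-68b899667b-b4275 -c prometheus-server
```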