Holy sh*t, been reading/watching walkthroughs to properly set up nodes on my k3s cluster. This is the first piece of media where, by the end, I've actually got a working product and learned more about cluster DevOps. Even tho this is a 3-year-old video, it's better than most guides coming out today, thanks a million.
@mehmetesen93854 жыл бұрын
I just discovered this channel, it's golden... Thank you for your great work.
@kevinshea54183 жыл бұрын
Awesome videos, greatly enjoy the redis series as I'm a Sr. System Admin who has been tasked with setting up a redis cluster for new containerized deployments. All your vids are great, thanks for all the hard work.
@carlriis31024 жыл бұрын
12:12 - 12:44 is a really important point. After many hours of reading and trying, I finally have a setup that I understand and have customized to my own requirements, using my own scripts. If I ever need to change anything or scale my cluster I know what to change. You can't just copy all the code from this video and expect it to work with your needs. Amazing video btw it was extremely helpful!
@TheMrJoshua4 жыл бұрын
This is so cool. Im enjoying your kubernetes series so far. Thank you
@iilillilili432 жыл бұрын
Thank you for great lecture, Anton Chigurh!
@RahulSharmaSingularity6 ай бұрын
Absolute gem of a video ! Kudos my man !
@mattiasfjellvang2 жыл бұрын
Thanks for a great series! It would be a great addition to show how to set up Redis using a Helm chart (like Bitnami) or an Operator, to see how this setup could be simplified.
@DigisDen2 жыл бұрын
Amazing, a change to the storage class and password and everything just worked. Excellent videos, all of them, you'll definitely have a million subscribers one day.
@Goyalvipin12 жыл бұрын
Very nice demo
@mostlyAtNight2 жыл бұрын
How a re-joining pod decides if it should be master/replica was exactly what I was looking for - great video, excellent presentation too. Thank you.
@georgelza2 жыл бұрын
Would be keen to see this updated for AWS, where the 3 Redis instances and 3 sentinels are distributed over 3 AZs, each AZ configured with a dedicated/local storage class. In your statefulset YAML, line 70 references the storage class; the issue with AWS and AZs is that you will have a storage class defined per AZ. From what I've seen I will also need to define an affinity rule to pin one pod per AZ (my node groups got a tag location which is az1, az2 or az3...). This works perfectly on a single-DC cluster.
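For illustration, one way to spread the pods one-per-AZ is a topology spread constraint — a sketch, not from the video; it assumes the standard topology.kubernetes.io/zone node label and an "app: redis" pod label, and the per-AZ storage question is usually handled with a StorageClass using volumeBindingMode: WaitForFirstConsumer so the volume is provisioned in whichever zone the pod lands:

```yaml
# excerpt from the statefulset's pod template (sketch, labels are assumptions)
spec:
  template:
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone   # well-known AZ label
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: redis
```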
@user-fg6ng7ej6w2 жыл бұрын
great to see such detailed explanations accompanied by code in github. thanks
@ThatOdooGuy3 жыл бұрын
Pure Gold! Thanks for all of your great work.
@dansikes41744 жыл бұрын
Great video! It is very helpful. I appreciate the work you are putting into this channel! It's great stuff. Keep up the amazing work.
@SaadullahKhan-y9o3 ай бұрын
Thanks for your efforts
@agushary94053 жыл бұрын
What a great video, I'm new to Kubernetes and your video is very helpful to me. Thank you
@minhthinhhuynhle91032 жыл бұрын
Without you, deploying Redis to K8s for a staging/development environment would be a total nightmare :))) The killer part is the init container !!! That shows how advanced this tutorial is. As always, thank you for your contribution :> I'mma go one step further in my personal homelab: sharding the Redis HA cluster :>
@chrisscole2 жыл бұрын
Epic thanks! Have been chewing over how to solve this architecture problem this week. This was a better solution than a cluster with shards.
@vamshipunna77353 жыл бұрын
Great video. Very helpful! Why maintain volumes for the sentinels as well? And if all three redis pods die at the same time, what happens then?
@kevinyu99343 жыл бұрын
This is super helpful!!! Thanks for the efforts!
@davida.75863 жыл бұрын
Thank you for such a great tutorial! Wish you success!
@Christopherney2 жыл бұрын
Amazing Tutorial ! Thanks a lot
@Grishma0105903 жыл бұрын
Really appreciate your DevOps series videos. Trying to deploy the redis cluster in a bare-metal k8s cluster, using the local-storage storage class, and I specified the same storage class in the statefulset deployment. But I'm getting the error "node(s) didn't find available persistent volumes to bind", and describe on the PVC says it's waiting for redis-0 to be scheduled. So I'm stuck in a dead loop. It would be great if you could suggest how to proceed to a successful deployment.
@nahuelaguirre5069 ай бұрын
Thanks for sharing! 🙌
@GoogleUser-id4sj2 жыл бұрын
Awesome video. Thanks!
@hasanulisyrafabdazis21453 жыл бұрын
Hi Marcel, can you help me? When deleting pod redis-0 for a failover test I got this error: *** FATAL CONFIG FILE ERROR (Redis 6.2.3) *** Reading the configuration file, at line 1686 >>> 'slaveof 6379'. Thank you
@hasanulisyrafabdazis21453 жыл бұрын
What I figure is that $MASTER is not resolved by MASTER="$(redis-cli -h sentinel -p 5000 sentinel get-master-addr-by-name mymaster | grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')" in redis-statefulset.yaml. Thank you
@noptanakhon66504 жыл бұрын
Why does the sentinel statefulset need the volumeMount named "data"? What do the sentinels use those volumes for?
@rodrigol.malacarne85043 жыл бұрын
Incredible content ... Thank you Marcel !!!
@himansusekharmishra51142 жыл бұрын
Thanks for this awesome video. But how can we access all the redis pods from the client side using a single service IP.
@asd8552802 жыл бұрын
Thank you so much for these perfect instructions. I was wondering: if we want to make this redis cluster externally accessible, how can we do it? Currently redis is only accessible inside the k8s cluster.
@balsubu12 жыл бұрын
Very nice. Though I'm new to k8s: if a pod dies (say redis-0), wouldn't k8s restart it? And what would its IP be then - still redis-0? As one of the replicas would now be the master, redis-0 essentially comes back as a slave. I think you have explained it well around the 19-minute mark.. thanks
@_geekstudios_3 жыл бұрын
Hey Marcel, when I delete the master pod it creates a new one, but it goes into CrashLoopBackOff and the logs say: FATAL CONFIG FILE ERROR at line 1686 >>> 'slaveof 6379' Bad directive or wrong number of arguments. Could you please guide me to solve this issue?
@MarcelDempers3 жыл бұрын
You'll need to ensure your sentinels were healthy prior to deleting that master pod. When it gets deleted, the sentinels will elect a new master, and your pod should return as a regular replica and indicate that it's connecting to the new master with its new IP. It gets this IP from its init container, which finds it with redis-cli: `redis-cli -h sentinel -p 5000 sentinel get-master-addr-by-name` - which sounds like your init container did not return the IP. You'll need to troubleshoot the init container to see why no master address was found, if your sentinels were working in the first place. Example: "1:S 12 Aug 2021 00:58:51.107 * Connecting to MASTER 10.244.0.10:6379". There is an updated regex for the init container suggested in a GitHub issue which I have not tested yet; in case you are facing a similar issue, check out the suggestion: github.com/marcel-dempers/docker-development-youtube-series/issues/87
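For reference, this is the kind of query and output being described (master name mymaster as used elsewhere in the thread; the service name and port match the thread's setup and may differ in yours):

```bash
# Ask a sentinel for the current master's address
redis-cli -h sentinel -p 5000 sentinel get-master-addr-by-name mymaster
# Example output:
# 1) "10.244.0.10"
# 2) "6379"
```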
@_geekstudios_3 жыл бұрын
@@MarcelDempers Thank you Marcel, i will check it out and let you know.
@_geekstudios_3 жыл бұрын
@@MarcelDempers I have made certain changes here and now it is working fine. Now how should I connect to this redis from my application within the cluster - is it through the redis svc, or through the individual hostnames?
@MarcelDempers3 жыл бұрын
@@_geekstudios_ You need to ensure you use a sentinel-aware library in your app. The application queries each sentinel for the master address, then connects to the current master. If the master dies, the application should repeat the query to get the new master address and re-establish the connection, with retry capability.
@_geekstudios_3 жыл бұрын
@@MarcelDempers ok thanks
@helders3 жыл бұрын
A question: if I follow the setup you provided in the examples, what kind of tweaks/changes should I make (apart from the password part ofc), if any, to use it in a production environment?
@adityasharma92983 жыл бұрын
Thanks for your detailed video. I want to know: will the client apps connect to redis first, or to the sentinel?
@amitrathee3374 жыл бұрын
Hi Marcel, can you please check and update the sentinel init args? The pod status is showing Init:CrashLoopBackOff. Thank you!!
@dasilavanya74293 жыл бұрын
I have the same CrashLoopBackOff issue. Did you get a solution for this?
@meisj2 жыл бұрын
Hi! I need help. We used Redis for our Grails project and we applied changes to the Tomcat configuration. Upon testing, the session remains without relogin and is also redirected to the other live pod; however, the session is terminated when the other replica pod comes up. Would you know why?
@陳彥辰-f9r2 жыл бұрын
How to do it with two different clusters? For example, we have cluster A and cluster B. The master Redis is in cluster A at the beginning, and cluster A has two sentinels; the slave Redis is in cluster B at the beginning, and cluster B also has two sentinels. This architecture can continue to provide the service when one of the clusters is shut down.
@MarcelDempers2 жыл бұрын
Redis does not care where the sentinels are running as long as they can talk to each other. In the same cluster it's simple because Kubernetes provides a host IP and port for each. Across multiple clusters you need to give each sentinel a public IP and port. You can do that with an ingress controller that supports tcp/udp (non-HTTP), like NGINX, to expose each sentinel on a different domain, i.e. sentinel-one.blah.com, sentinel-two.blah.com etc.
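For illustration, with the stock ingress-nginx controller the usual way to expose raw TCP is the tcp-services ConfigMap, which maps an external port to a namespace/service:port. This is a sketch: the per-sentinel services and port numbers are assumptions, the controller must be started with --tcp-services-configmap, and the chosen ports must also be opened on the controller's Service:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port -> namespace/service:port (illustrative values;
  # assumes one Service per sentinel pod exists in the redis namespace)
  "26379": "redis/sentinel-0:5000"
  "26380": "redis/sentinel-1:5000"
  "26381": "redis/sentinel-2:5000"
```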
@陳彥辰-f9r2 жыл бұрын
So we need to change the Redis host IPs to public IPs, including the master, slave, and sentinels. But how do you get the MASTER_FDQN in storage/redis/kubernetes/redis/redis-statefulset.yml? And how do you know nodes=redis-0.redis,redis-1.redis,redis-2.redis in storage/redis/kubernetes/sentinel/sentinel-statefulset.yml?
@MarcelDempers2 жыл бұрын
Yes, you need to manage all public endpoints by having fixed IPs or domain names for the master, slaves and sentinels. The master FDQN is simply an env variable that defaults to the first instance as the master, and is populated by querying the sentinels in case the master is different. You may need to debug and tune the init containers further to support multiple clusters.
@陳彥辰-f9r2 жыл бұрын
@@MarcelDempers How do the sentinels talk to each other when we deploy the Redis cluster across two clusters? How do we check that the sentinels know where the slave Redis is? In cluster A I have the following set up: the master Redis (statefulset), one service and one ingress for the master Redis, and two sentinels (statefulset) where the two sentinel pods have their own service and ingress. In cluster B the setup is: the slave Redis (statefulset), one service and one ingress for the slave Redis, and two sentinels (statefulset) where the two sentinel pods have their own service and ingress. But when I shut down the master Redis in cluster A, the slave Redis does not become the master, so I think the problem is that the sentinels do not know where the slave Redis is. This is the reason why I ask this question.
@MarcelDempers2 жыл бұрын
@@陳彥辰-f9r If you're running in multiple clusters you are going to need to use something like an ingress that supports basic TCP (non-HTTP) and assign domain names to each instance, so that every instance and sentinel is individually addressable. If you do this, make sure you use a strong password and TLS, and ensure you run appropriate security for internet communication.
@default_youtube_profile3 жыл бұрын
Thank you, I am going to try this out. However, it would have been really nice to make use of ReplicaSets instead of just limiting it to redis-0, redis-1 and redis-2.
@MarcelDempers3 жыл бұрын
I would recommend you use statefulsets if you care about persisting your data. You can still scale a statefulset beyond redis-2
@陳彥辰-f9r2 жыл бұрын
If we want to access the Redis cluster from outside the cluster, do we need to set up an ingress for each sentinel, or is extra setup needed? For example, we use Python code to access the Redis cluster; sample code might be as below:
from redis.sentinel import Sentinel
sentinel = Sentinel([(sentinel_ingress_1, sentinel_ingress_port_1), (sentinel_ingress_2, sentinel_ingress_port_2), (sentinel_ingress_3, sentinel_ingress_port_3)], socket_timeout=0.1)
master = sentinel.master_for('master-name', socket_timeout=0.1)
@MarcelDempers2 жыл бұрын
I would follow that path: an ingress with TCP support that routes based on hostname/domain, with a domain for each instance, e.g. sentinel-1.blah.com, sentinel-2.blah.com etc.
@gautham9904 ай бұрын
You are doing the Lord's work. I have a question though, in the Redis cluster if redis-0 is the master, are redis-1 and redis-2 read only replicas? If yes, how does my application know to switch to a new master if a failover happens?
@MarcelDempers3 ай бұрын
Thanks for the kind words. The general approach is to use sentinels and a Redis-sentinel-aware SDK in your program: use the SDK to detect the master by querying the master address from the sentinel, then do reads/writes against the master. I did something similar in a python series: github.com/marcel-dempers/docker-development-youtube-series/blob/master/python/introduction/part-5.database.redis/src/app.py#L70
@thejokumarmuppala33302 жыл бұрын
Amazing Tutorials
@georgelza2 жыл бұрын
Curious: you've shown the Redis 1-master-and-2-slaves + sentinel statefulset deployment, and you've shown the Rancher-sourced 3-masters-and-3-slaves deployment with no sentinel. Which would you say is better? Or is there a combination possible, say 3 masters, each with 2 slaves, plus 3 sentinels in front?
@MarcelDempers2 жыл бұрын
I'm not too familiar with the 3-master setups. Redis does have a clustering feature where sentinels are not required, but I'm unsure if it's fully GA. Sentinel is the most popular setup.
@georgelza2 жыл бұрын
@@MarcelDempers ... I like the sentinel method, and for small loads the 3 redis nodes (1 being a master and the other 2 being slaves) work perfectly. Just looking further ahead, to when we need to scale the masters, each with 2 replicas... still using say 6 nodes, as a master can also be a slave for another master.
@hadironi Жыл бұрын
Thank you so much for the video. Is it possible to have 3 Redis sentinel groups in my cluster? Because I have 3 groups of nodes and I want each to have its own Redis.
@samcooley31783 жыл бұрын
Really like your channel, man! I am a little confused about how to handle the masterauth and other passwords in redis.conf. Isn't it insecure to just have the password in the manifest there? How do we pass secrets into that particular configmap? Is it really that insecure? Quite confused here and would love some guidance!
@MarcelDempers3 жыл бұрын
Yes I believe masterauth and auth items can be removed from the configmap and instead be passed over the redis command line when starting it up. So you can configure them as a secret, and map the secret values to ENV variables on the statefulset and consume them in the start command of the container over the commandline
@fakhrikharrat Жыл бұрын
great tuto, thanks
@WouterPlanet3 жыл бұрын
Nice! Will there be new videos?
@kk1234562352 жыл бұрын
This is great. Yet I found an issue when I create the statefulsets on AWS: Defaulted container "redis" out of: redis, config (init)
@linovieira9739 Жыл бұрын
Great video! Congrats. This video was very helpful! Is it possible to use this setup and add kubernetes secret to store the redis password? Thank you, and keep up the amazing work you are doing!
@amitrathee3374 жыл бұрын
Hi Marcel, best video for Redis that I found on youtube. It's easily understandable even though I am new to devops. I have a question: what should I use as the redis host (DNS/URL) to connect the statefulset with a Node.js project? From what I explored, I should use something like a mongodb-style url ---> host: "redis-0.redis.redis.svc.cluster.local,redis-1.redis.redis.svc.cluster.local,redis-2.redis.redis.svc.cluster.local". But this one is showing an error. I also tried this one ---> host: "redis.redis.svc.cluster.local", still no success, and it shows ---> "errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1","port":6379. Thank you for your great work.
@MarcelDempers4 жыл бұрын
If you are running sentinels for high availability, it's best to use a Node.js redis client with sentinel support. The client library would connect to the sentinel service, which would provide a connection to an active master. If a failover occurs, your client library should return a new master connection for your code to use. With statefulsets, each pod gets a DNS name of the form <pod-name>.redis.redis.svc.cluster.local. You could connect to the sentinel here, but it may be best not to use the headless service, since it's used internally by sentinels and masters for service discovery and any sentinel/master can be down at any point. Therefore you would instead use a single service that load balances the sentinels (without clusterIP: None). In my sentinel YAML, the service is used for internal discovery between sentinels: github.com/marcel-dempers/docker-development-youtube-series/blob/master/storage/redis/kubernetes/sentinel/sentinel-statefulset.yaml#L76 You could create a copy of that one and not have it run in headless mode so redis clients (Node.js) can use it:
apiVersion: v1
kind: Service
metadata:
  name: sentinel-clients
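A possible completion of that fragment (a sketch; the selector label and port are assumptions and should match whatever the sentinel statefulset in the repo actually uses):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sentinel-clients
  namespace: redis
spec:
  # no "clusterIP: None" here, so clients get a normal load-balanced ClusterIP
  selector:
    app: sentinel        # assumption: match the sentinel statefulset's pod labels
  ports:
  - port: 5000
    targetPort: 5000
```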
@陳彥辰-f9r2 жыл бұрын
Is there a graceful way to update the redis config file with statefulsets?
@mayukh_3 жыл бұрын
Thanks..great video. I guess if I need to increase the number of redis pods from 3 to say 6, I need to also change the config in the sentinel right ??
@MarcelDempers3 жыл бұрын
correct! 💪🏽
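For example, the node list quoted earlier in the thread (nodes=redis-0.redis,redis-1.redis,redis-2.redis in the sentinel statefulset) would need the new pod hostnames appended — a sketch, with the exact variable name and format to be checked against the repo:

```bash
# sketch: extend the list the sentinel init script probes when looking for a master
# (format copied from the value quoted earlier in this thread)
nodes=redis-0.redis,redis-1.redis,redis-2.redis,redis-3.redis,redis-4.redis,redis-5.redis
```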
@mayukh_3 жыл бұрын
If I now have a web app which wants to connect to the redis cluster, what should the connection url be then?
@ramiferwana97453 жыл бұрын
Thank you for great video, can we replicate redis between two different Kubernetes clusters (DC and DR) sites?
@goyalankur2043 жыл бұрын
Hi, I am following your video, but I'm stuck after the statefulset apply step; the pod fails with CrashLoopBackOff status. Can you please help?
@putnam1203 жыл бұрын
Great video. I do however have a question about the configuration. Is there a better way to store the passwords other than in plain text in the config file? I ask as I need to store all the information in a git repo.
@MarcelDempers3 жыл бұрын
The Redis pass is generally stored in config. It's not a good practise to store the password in GIT. Generally folks use configuration tools to inject the password during deployment. I believe Redis can accept password via environment variables too which means your full config can go in GIT and you can set the password in a K8s secret
@putnam1203 жыл бұрын
@@MarcelDempers thanks for the information. I'll take a look into using the environment variables.
@PradeepKumar-vf2lf2 жыл бұрын
I have an application in Java which uses the Lettuce library to do redis operations. My application is not containerized, so I'm wondering if there's a way to create a proxy to both redis and the sentinels? Btw, have u used flant/redis-sentinel-proxy? It's also not working though. I would be glad if you could help me out with this. I have already spent around 72 hrs looking for a solution xD. No progress yet. :(
@bhishamudasi16987 ай бұрын
Did you find a solution for integrating it with Lettuce?
@vedatapuk22143 жыл бұрын
Hello, I have a simple question regarding Redis cluster on Kubernetes, so maybe you can help me out. I have a Redis cluster implemented and set up on my Azure Kubernetes Service (AKS) cluster, but the problem is that I want to expose my redis cluster IP to other resources in my Azure resource group, so other VMs can read/write from my redis cluster. At the moment my redis master pod is exposed as a ClusterIP service; I don't want to set it up as a LoadBalancer IP, because I don't want to expose it outside of my private network. Do you have any suggestion on how I can expose my redis cluster on AKS to other resources in Azure?
@MarcelDempers3 жыл бұрын
AKS can run with Azure CNI, which bridges all pods onto the Azure VNET. It allows you to peer that VNET with other networks so you can achieve what you're after. There are some points to know about Azure CNI, such as defining subnets for node pools so IPs do not clash. You can then use a service of type LoadBalancer with an annotation to make it an internal LB with a private IP that can be called by VMs on the peered network. Hope that helps
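A sketch of the internal load balancer service described above (the annotation is the standard Azure one; the service name and selector label are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-internal
  namespace: redis
  annotations:
    # tells the Azure cloud provider to create an internal LB with a private IP
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: redis           # assumption: match the redis statefulset's pod labels
  ports:
  - port: 6379
    targetPort: 6379
```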
@rahuls98672 жыл бұрын
Fantastic !!
@ragnadrok72 жыл бұрын
I would like to clarify something: how does an application dynamically switch its connection if the master is down and another instance becomes the master?
@MarcelDempers2 жыл бұрын
You need to use sentinels with a sentinel aware client library. Here is an example in Go kzbin.info/www/bejne/bJ2tdKyBrNSEl7M
@casimirrex4 жыл бұрын
Hi Marcel, I have gone through your video, which is really useful. But I have a doubt: can we use Sentinels for other database clustering as well, for instance MongoDB or Postgres? Is it possible?
@MarcelDempers4 жыл бұрын
Thanks for the kind words 💪🏽 Definitely not, Sentinel is a Redis-only feature. For Mongo and Postgres, please refer to their documentation on high availability and replication.
@casimirrex4 жыл бұрын
@@MarcelDempers Thanks for your immediate response.
@necrodogs4 жыл бұрын
Awesome! I have just now deployed a triple-site redis cluster on OpenShift utilizing your code with some slight modifications. I'm going to modify the code to handle arbitrary namespace names and use secrets for the passwords. Do you take pull requests?
@MarcelDempers4 жыл бұрын
Nice work! 💪🏽 A secret is better than a configmap. Usually don't really add enhancements unless its just a documentation change. This is to ensure compatibility between the video content and the source code. Viewers may get confused if the video refers to one thing, and the source to another. Sometimes refactoring is necessary to allow better folder structure for future videos.
@necrodogs4 жыл бұрын
@@MarcelDempers That makes sense. I live and breathe git and often forget the linear, TV-like nature of youtube videos. Anyways, thank you for the great tutorials. Subscribed :)
@VikasYadav-ry4xx2 жыл бұрын
I have a doubt here: as we are using sentinel with Lettuce (a redis library for Spring Boot), the sentinel provides the internal IP of the master as the master address, due to which the application is not able to connect to the master. How can we solve this problem?
@MarcelDempers2 жыл бұрын
your library needs to have sentinel support (built in failover ) to provide you a new client in case of master failover
@romeoarnado3127 Жыл бұрын
my man...do you have steps to expose redis on a istio-kubernetes setup
@MitoRPGnaWeb2 жыл бұрын
Connecting externally with an ingress?
@Borislovefei3 жыл бұрын
nice video, thank you for uploading. What are the virtues of running redis solely on its own cluster as opposed to incorporating it among other components in a single cluster?
@MarcelDempers3 жыл бұрын
Separating it might make the cluster upgrade process a little more manageable. I can also see companies who run it as a SaaS offering using separate K8s clusters to bootstrap it for customers, so physical separation might be a higher-tier offering. Incorporating it into the same cluster as other services gives you Redis over a private connection instead of a public one, so it has some security benefits, and it also shares compute, so some cost benefits.
@AskewTarantula3 жыл бұрын
This is great. Do you have any videos that explain how to make a service so applications or a user can connect to the Redis replica set from outside the cluster? I was able to do this with a NodePort service but it's connecting to one of the replicas and I want it to connect to the master.
@AskewTarantula3 жыл бұрын
@Hiren Panchani I did not. It turned out it was not a requirement for the project. I did get cluster services working for external access, and now using that instead of nodeport. However it's still a one-to-one connection.
@davida.75863 жыл бұрын
And the main question is! :) If someone is new to Kubernetes and trying to follow these steps, how do they get the entrypoints to access Redis? Meaning address:port etc., no matter which programming language. Thanks in advance!
@VikasYadav-ry4xx2 жыл бұрын
Hi, you got answer for this?
@georgelza2 жыл бұрын
... with this setup, what do I connect to - the redis service on 6379 or the sentinel on 5000? Well, I had some success; it seems it needs to be the sentinel service... it did however not work, as with a cluster the slots need to be configured/defined.
@MarcelDempers2 жыл бұрын
You'll need a sentinel aware library, like described in this video kzbin.info/www/bejne/ioOUeIh6n555fKc
@georgelza2 жыл бұрын
@@MarcelDempers Got it working. I'd like to take this further: change to 6 masters, each replicating to 2 slaves... and then the 3 sentinels. Looking at another redis video of yours (the one where you used the Rancher doc), you have 6 masters, but you also have the value "--cluster-replicas 1" in the cluster create, which can easily be changed to "--cluster-replicas 2" to have the 2 replicas, giving me a 6-way cluster - but now how do I take that and glue it to the 3 sentinels...?
@georgelza2 жыл бұрын
@@MarcelDempers enough digging and I did... thanks, was pretty easy once I found it.
@korbkrys58684 жыл бұрын
Hey Marcel, I applied the redis statefulset and configmap but the pod status is stuck on Pending. Do you have any suggestions?
@MarcelDempers4 жыл бұрын
There can be a number of reasons why pods go into a Pending status: the storage they expect is not available in your cluster, there are no nodes that meet the criteria of what the pods request (memory + CPU resource requests), etc. - the list can be long. Best is to `kubectl describe` the pods and the statefulsets, look through the events to see why it's pending, and take it from there.
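Typical commands for that kind of troubleshooting (assuming the redis namespace used in this series):

```bash
kubectl -n redis describe pod redis-0          # check the Events section at the bottom
kubectl -n redis get pvc                       # are the claims Bound or still Pending?
kubectl get storageclass                       # does the class named in the statefulset exist?
kubectl -n redis describe statefulset redis    # events for the statefulset itself
```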
@vishalviswanathan98103 жыл бұрын
How to connect to AWS redis from nodejs pod?
@skarabei23032 жыл бұрын
I made this on Azure aks but how to connect API to redis
@sofiaoliveira64343 жыл бұрын
I can't seem to get the sentinel right. When inspecting the logs of a sentinel pod, I can see the message '# Fatal error, can't open config file '/etc/redis/sentinel.conf': No such file or directory'. I tried to add 'touch /etc/redis/sentinel.conf' to the initContainer but it doesn't work. Any ideas?
@MarcelDempers3 жыл бұрын
You might need to check the logs on the sentinel init container as I suspect something may have gone wrong in the init command. The init container attempts to find a master, connects to it and initialises the config at /etc/redis/sentinel.conf If the redis instance cannot find the file, the init process must have failed
@sofiaoliveira64343 жыл бұрын
@@MarcelDempers I wasn't finding a master because I initially created the resources in a namespace called 'test', and I think the nodes' names depend on the namespace being called 'redis'. I now have everything in the 'redis' namespace and from the logs of the init container everything seems fine ('cat /etc/redis/sentinel.conf' confirms that the 'sentinel ...' lines are saved to the file), but I keep getting the fatal error on the sentinel pod.
@blackpearl19034 жыл бұрын
Hi Marcel, a great video for Redis that I came across. It's easily understandable even though I am new to devops. Thanks much 👍🏻 I have a quick question: is there a way to set the masterauth & requirepass passwords in my k8s cluster itself as secrets? If so, how can I reference them in the configmap? Can u pls assist on this.
@MarcelDempers4 жыл бұрын
Thanks for the kind words 💪🏽 Redis takes config options as arguments when you start it up, i.e. "redis-server --requirepass $PASSWORD --masterauth $PASSWORD". What you can do is add those two as a secret and use ENV variables to set them in the startup args. Hope that helps
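A sketch of that wiring (the secret name, env variable name and image tag are illustrative, not from the original manifests):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: redis-auth
  namespace: redis
stringData:
  password: "change-me"          # placeholder value
---
# excerpt from the statefulset's container spec
containers:
- name: redis
  image: redis:6.2-alpine
  env:
  - name: REDIS_PASSWORD
    valueFrom:
      secretKeyRef:
        name: redis-auth
        key: password
  command: ["sh", "-c"]
  args:
  # command-line options override the values in the mounted config file
  - redis-server /etc/redis/redis.conf --requirepass "$REDIS_PASSWORD" --masterauth "$REDIS_PASSWORD"
```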
@tuhinsengupta10232 жыл бұрын
If I want to make the redis service public, how can I access the service from outside the cluster? I created another service of type LoadBalancer in EKS and it did give me a public endpoint. But the problem is that it load balances between redis-0, redis-1 and redis-2, and sometimes a write fails because the external request gets sent to one of the replicas. How do I make sure a public service always points to the master?
@MarcelDempers2 жыл бұрын
You can use an ingress controller instead. NGINX has TCP support; you can route the appropriate domain to the correct pod.
@carlriis31024 жыл бұрын
But how do I use that cluster in my application? Which pod do I try to connect to? I'm guessing the master because it's the only one that can write to disk, but if the master changes, what happens then? I guess I'm asking how I create a service for this cluster I can reliably use elsewhere.
@carlriis31024 жыл бұрын
Is it possible to create 2 services: one for reading and one for writing?
@carlriis31024 жыл бұрын
After doing some research it seems that Redis cluster is what I need, as it can refer a request to a different node that fits the request. Sorry for spamming questions.
@MarcelDempers4 жыл бұрын
You will need to ensure your client application has sentinel support. Your application needs to connect to the sentinel to query the master address, and handle the switch scenario if the master changes. redis.io/topics/sentinel-clients
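For instance, with the Python client already shown elsewhere in this thread (redis-py's Sentinel class); the service name, port and master name here are assumptions matching the rest of the thread, not a prescribed setup:

```python
# Sketch of a sentinel-aware client: the library queries the sentinels for the
# current master and rediscovers it when it has to reconnect after a failover.
from redis.sentinel import Sentinel

sentinel = Sentinel([("sentinel", 5000)], socket_timeout=0.5)

master = sentinel.master_for("mymaster", socket_timeout=0.5)   # read/write client
master.set("hello", "world")

replica = sentinel.slave_for("mymaster", socket_timeout=0.5)   # read-only client
print(replica.get("hello"))
```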
@milentzvetkov94003 жыл бұрын
Is there a way to make this work on GKE? I managed to implement it but I have one problem: when the application starts to write, it is hitting the replicas in round-robin style... is there a way to make it read/write from the master only?
@MarcelDempers3 жыл бұрын
Your application can only talk to the write copy (the master). You cannot load balance Redis, which is why a clusterIP: None service is used. The application should also use a sentinel-supporting client library so it queries for the current master before writing; that will support automatic failover.
@milentzvetkov94003 жыл бұрын
@@MarcelDempers thank you, I will try to find one and test it.
@Ashe944 жыл бұрын
So this means if I want to scale up the number of redis replicas, I need to change the hostnames in the sentinel init config? Is there a better way to automate this part?
@MarcelDempers4 жыл бұрын
Correct, but only if you wish to run the sentinels separately as demonstrated, as they need a mechanism to discover a redis instance. An alternative is to run a separate service of type ClusterIP on top of the redis instances, used for discovery, which the sentinel init can call (with retry logic) so it does not have to call each instance in a loop. Alternatively you can run the Sentinel container as a sidecar to each Redis instance, so Sentinel can call its instance over localhost. This is similar to the Bitnami Redis approach: hub.kubeapps.com/charts/bitnami/redis
@dansikes41744 жыл бұрын
@@MarcelDempers could something like Consul be used to provide service discovery for the Sentinels to discover another redis instance? Would a setup like that even make sense?
@korbkrys58684 жыл бұрын
Please make one for postgresql
@haraldhacker3 жыл бұрын
Since only the redis master can accept writes, how do I connect to the master? I mean, my application does not know which is the current master, so would the best option be to just connect to a redis service which redirects to the master? Maybe I'm missing something, but it's not clear to me how I can talk to the current master w/o knowing which replica is the master at the moment. Thanks.
@MarcelDempers3 жыл бұрын
You'll need to use sentinels to discover the master. You also need to use a sentinel aware library in your app which will query the master and use auto-retry mechanisms to ensure it can write to the master. Alternatively you can use an application library that broadcasts first to see who is master and then continues until TCP connections fail and rebroadcast to get the new master
@korbkrys58683 жыл бұрын
How would this work in a kubernetes cluster if the redis pods are on different nodes possibly on different machines
@MarcelDempers3 жыл бұрын
You might want to learn about Kubernetes services kubernetes.io/docs/concepts/services-networking/service/
@korbkrys58683 жыл бұрын
@@MarcelDempers if you have a master node and two slave nodes how would the data stay consistent on the two slave nodes if they both have their own set of redis sentinels/clusters
@korbkrys58683 жыл бұрын
@@MarcelDempers if the load balancer sends the request to a different node as the original request I would assume it would have different data
@MarcelDempers3 жыл бұрын
@@korbkrys5868 Each pod should be individually addressable. Headless services help so each instance is discoverable and individually addressable via DNS (no loadbalancer used) to form cluster and allow sentinels to do leader election. kubernetes.io/docs/concepts/services-networking/service/#headless-services
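A minimal sketch of that headless service pattern (the name, labels and port are assumptions and should match the actual statefulset):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: redis
spec:
  clusterIP: None        # headless: no load balancing, just per-pod DNS records
  selector:
    app: redis           # assumption: match the redis statefulset's pod labels
  ports:
  - port: 6379
    targetPort: 6379
# each pod then gets a stable DNS name such as redis-0.redis.redis.svc.cluster.local
```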
@chornsokun4 жыл бұрын
I know this is for learning purposes, but I wonder what makes it production-ready?
@MarcelDempers4 жыл бұрын
Production ready is: ✅Rigorous application testing ✅Ongoing failover testing ✅Understanding documentation ✅Confidence
@FedericoMarchini-wj5wi10 ай бұрын
It did not work for me. Sentinel is not able to find the master and the redis pods are not able to find the sentinel. I've spent some hours trying to fix this. BTW, I have exactly the same config as the repo.
@meraku93902 жыл бұрын
can i use a custom ip?
@shrishcs2 жыл бұрын
Loved the content in the video. I have a doubt here: can anyone explain how we can connect to the redis master from outside Kubernetes?
@MarcelDempers2 жыл бұрын
I'd look into using an ingress controller to serve traffic publicly. That way traffic comes in through one route and is passed to the redis instances based on domain name. NGINX ingress can do it with its TCP proxy functionality.
@shrishcs2 жыл бұрын
@@MarcelDempers The sentinel returns the internal address of the redis pod and we have to connect to the master from outside the cluster. I am not sure how ingress controller would work here!
@MarcelDempers2 жыл бұрын
@@shrishcs You may need to configure the sentinels to use DNS names instead; I believe it's possible in redis 6.2+.
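For reference, a sketch of the Redis 6.2+ sentinel directives that enable hostname support (verify against your Redis version and the repo's sentinel init script; the master hostname shown matches the one used throughout this thread):

```conf
# sentinel.conf excerpt (Redis 6.2+)
sentinel resolve-hostnames yes
sentinel announce-hostnames yes
sentinel monitor mymaster redis-0.redis.redis.svc.cluster.local 6379 2
```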
@saju3084 жыл бұрын
The bash script which we placed in the initContainer updates every conf file with slaveof redis-0.redis.redis.svc.cluster.local
@amitrathee3374 жыл бұрын
I faced the same problem, but it can be solved with minor changes, mainly in the second if condition. Just copy and paste this into the init args & try it:
cp /tmp/redis/redis.conf /etc/redis/redis.conf
echo "finding master..."
if [ "$(redis-cli -h sentinel -p 5000 ping)" != "PONG" ]; then
  echo "master not found, defaulting to redis-0"
  if [ "$(hostname)" != "redis-0" ]; then
    echo "updating redis.conf..."
    echo "slaveof redis-0.redis.redis.svc.cluster.local 6379" >> /etc/redis/redis.conf
  else
    echo "this is redis-0, not updating config..."
  fi
else
  echo "sentinel found, finding master"
  MASTER="$(redis-cli -h sentinel -p 5000 sentinel get-master-addr-by-name mymaster | grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
  echo "master found : $MASTER, updating redis.conf"
  echo "slaveof $MASTER 6379" >> /etc/redis/redis.conf
fi
@saju3084 жыл бұрын
@@amitrathee337 Thank you so much :) I tried and it's working for me
@amitrathee3374 жыл бұрын
Hello @@saju308, sounds great that it is working on your side. Is sentinel also working fine on your side? If yes, please help bro. We can connect on LinkedIn: www.linkedin.com/in/amit-rathee-b97a371aa/ . I am waiting for your ping.
@skarabei23032 жыл бұрын
kubectl -n default get pv returns no resource found
@ranjeetranjan76843 жыл бұрын
I really appreciate your effort in making an awesome tutorial. I was going to apply the same process in my DigitalOcean Kubernetes cluster but did not get success. I'm getting an error: "0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims." Can you please help with it?
@MarcelDempers3 жыл бұрын
You'll need to check what storage Digital Ocean provides and then apply the correct "storageClassName" to the statefulset to ensure the PVC's get created and bound accordingly
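For reference, a sketch of where that goes in the statefulset ("do-block-storage" is DigitalOcean's CSI storage class; the claim name and size here are illustrative):

```yaml
# excerpt from the statefulset
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: do-block-storage
    resources:
      requests:
        storage: 1Gi
```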
@ranjeetranjan76843 жыл бұрын
@@MarcelDempers Thanks for your response. I changed storageClassName to "do-block-storage" but still get the same error. Your help will be really appreciated.
@sammcooley3 жыл бұрын
@@ranjeetranjan7684 I am stuck here as well. I think a separate volume manifest is necessary to provision the storage that the PVC can bind to. I have had success doing this in the past, but I have since migrated over to DigitalOcean and am similarly experiencing this unbound immediate PVC issue. Even when laying out some PVs for the redis pods, I encountered problems with the sentinels also requiring volumes, the reason for which I am unsure of, and I have been unable to find the right resources to make this clear. Let me know which steps you may have taken, or may take, to get it working with DigitalOcean block storage.
@haraldhacker3 жыл бұрын
Using the example in your git repo, the sentinel is not promoting another replica to master when I delete redis-0. When deleting redis-0, sentinel logs "+sdown master mymaster redis-0.redis.redis.svc.cluster.local 6379" and that's it. The new instance of redis-0 comes back as a slave. Any idea why sentinel is not promoting a new replica to master? Maybe something changed in the redis docker image that was updated in the last months? When switching back to the redis:6.0-alpine image and the config for that version, everything works perfectly. Maybe you can update your example to get back to a working example? Thank you a lot :)
@MarcelDempers3 жыл бұрын
The code was recently updated to deal with redis 6.2 which resolves DNS instead of IP. This will resolve fail-over issues and works for both cases when a master fails as well as a slave. See github.com/marcel-dempers/docker-development-youtube-series/issues/87 If you're experiencing issues you may want to add a "sleep" in the init container and add additional logging to troubleshoot what is going wrong as discovery happens in the init containers
@haraldhacker3 жыл бұрын
@@MarcelDempers Hmmm, I investigated a bit - on minikube 1.18 it works, on AKS 1.21.2 I'm facing the issue. Still not clear to me why, because the manifests are the same except for the storage class part. When the new redis replica is booting up it asks sentinel for the current master, but sentinel says that the master is the instance which was just deleted a few seconds ago. So sentinel somehow does not run the master switch. If I'm correct, when the master is deleted sentinel should elect a new master, but for some reason it does not. Deploying the same manifests on minikube is just fine. :-/
@AshrafulHuda-n3u Жыл бұрын
@@haraldhacker hi. did you find the fix?
@haraldhacker Жыл бұрын
@@AshrafulHuda-n3u Nope. I found out a year ago that this is an open bug; I also found the exact issue on the GitHub repo. Due to that known issue I did not spin up a redis cluster, just a single instance. Which is fine, because Redis' documentation says you should design your application in a way where it's not relying on Redis' availability.
@Aka007103 жыл бұрын
Hi, when I apply the statefulset the pods have a status of Pending and the error message is "pod has unbound immediate PersistentVolumeClaims". I am using Microsoft Azure and checked the StorageClass names; the default one is called "default", and I have amended this in the yaml file. Anyone have any ideas on what is going on?
@MarcelDempers3 жыл бұрын
You'll need to update your statefulset to use the correct storage class and persistent volume for your cloud provider or custom storage. Checkout my statefulset as well as my persistent volume video for further understanding
@Aka007103 жыл бұрын
@@MarcelDempers Thanks let me look into those videos and see
@barleywaterfederal Жыл бұрын
Storing the password in the CM sounds like a terrible idea. Is there a better approach eg with secrets?
@MarcelDempers Жыл бұрын
you can remove the config from the cm, and use environment variable instead to override the cm. Then use a secret to inject to the env variable.
@christianibiri4 жыл бұрын
This is cool!
@moussadiasoumahoro76214 жыл бұрын
good game, Please can we have the repository link?
@MarcelDempers4 жыл бұрын
Sure its all in the description 🤓
@William-Anez3 жыл бұрын
Just discovered this (gold piece) because I needed to configure Redis HA on AWS, so I took your files and just modified 2 things: the password xD and the storageClassName to "gp2" (as I'm using AWS). Everything was excellent, got the cluster running in no time. However I'm facing a little issue when I try to delete the MASTER node. When I ran: k delete pod redis-0 -n redis, redis-0 was terminated and immediately recreated, but the sentinel didn't switch master; the only line that appeared on sentinel-0 was:
1:X 12 Oct 2021 15:06:27.416 # +sdown master mymaster redis-0.redis.redis.svc.cluster.local 6379
And when the new redis-0 was initializing, it did it as the MASTER again:
1:S 12 Oct 2021 15:06:44.185 # Server initialized
1:S 12 Oct 2021 15:06:44.185 * Ready to accept connections
1:S 12 Oct 2021 15:06:44.185 * Connecting to MASTER redis-0.redis.redis.svc.cluster.local:6379
1:S 12 Oct 2021 15:06:44.185 * MASTER REPLICA sync started
1:S 12 Oct 2021 15:06:44.185 * Non blocking connect for SYNC fired the event.
1:S 12 Oct 2021 15:06:44.185 * Master replied to PING, replication can continue...
1:S 12 Oct 2021 15:06:44.185 * Partial resynchronization not possible (no cached master)
1:S 12 Oct 2021 15:06:44.186 * Master is currently unable to PSYNC but should be in the future: -NOMASTERLINK Can't SYNC while not connected with my master
1:S 12 Oct 2021 15:06:45.195 * Connecting to MASTER redis-0.redis.redis.svc.cluster.local:6379
1:S 12 Oct 2021 15:06:45.195 * MASTER REPLICA sync started
1:S 12 Oct 2021 15:06:45.195 * Non blocking connect for SYNC fired the event.
And then forever a loop with these lines:
1:S 12 Oct 2021 15:06:44.186 * Master is currently unable to PSYNC but should be in the future: -NOMASTERLINK Can't SYNC while not connected with my master
1:S 12 Oct 2021 15:06:45.195 * Connecting to MASTER redis-0.redis.redis.svc.cluster.local:6379
1:S 12 Oct 2021 15:06:45.195 * MASTER REPLICA sync started
1:S 12 Oct 2021 15:06:45.195 * Non blocking connect for SYNC fired the event.
1:S 12 Oct 2021 15:06:45.197 * Master replied to PING, replication can continue...
1:S 12 Oct 2021 15:06:45.197 * Partial resynchronization not possible (no cached master)
All the other redis pods are also looping:
1:S 12 Oct 2021 15:15:54.425 * Connecting to MASTER redis-0.redis.redis.svc.cluster.local:6379
1:S 12 Oct 2021 15:15:54.427 * MASTER REPLICA sync started
1:S 12 Oct 2021 15:15:54.427 * Non blocking connect for SYNC fired the event.
1:S 12 Oct 2021 15:15:54.428 * Master replied to PING, replication can continue...
1:S 12 Oct 2021 15:15:54.431 * Trying a partial resynchronization (request 8a24b878c3c46263ee2abd273795192f100b537c:84244).
1:S 12 Oct 2021 15:15:54.432 * Master is currently unable to PSYNC but should be in the future: -NOMASTERLINK Can't SYNC while not connected with my master
I'll appreciate any help
@William-Anez3 жыл бұрын
However, if I enter the redis-0 instance, connect using redis-cli and do a "shutdown nosave", sentinel just restarts it (nothing changes). If I kill the redis-server process then I get the expected behavior: the pod dies and sentinel is able to switch master as expected.
@sivasai41924 жыл бұрын
Even though the master is working fine, the slave is not able to connect to the master. Here are the logs of the slave:
Unable to connect to MASTER: Invalid argument
Connecting to MASTER redis-0.redis.redis.svc.cluster.local:6379
Did someone else face this issue? Thanks in advance. EDIT: I am new to kubernetes. The "redis" before .svc is the namespace in which our statefulset is running.
@wzije3 жыл бұрын
I have the same problem.. have you fixed this issue? EDIT: sorry, I missed your edit explanation. So the slaveof format must be like this: "pod.my-service.my-namespace.svc.cluster.local". Ref: matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-networking-guide-beginners.html Thanks. :D
@budgetkeeper2 жыл бұрын
In my case I have other namespace (not redis). Check it "pod_name.service_name.namespace.svc.cluster.local"
@vudc1404 жыл бұрын
nice. thank u
@NjunwaWamavoko7 ай бұрын
Congratulations on this work. However, the code seems to have changed and the bash snippet below does not seem to return the master any more:
MASTER=$(redis-cli --no-auth-warning --raw -h $i -a $REDIS_PASSWORD info replication | awk '{print $1}' | grep master_host: | cut -d ":" -f2)
@a0nmusic3 жыл бұрын
so much useful info thank you so much
@dasilavanya74293 жыл бұрын
Hi Marcel, can you please check and update the sentinel init args? The pod status is showing Init:CrashLoopBackOff. Thank you!!
@MarcelDempers3 жыл бұрын
Source code on GitHub still appears to be working. Feel free to raise a Github issue with error details and steps to reproduce if the latest source code has issues. If you have a fix, pull requests are also welcome 🙏🏽
@dasilavanya74293 жыл бұрын
@@MarcelDempers I just discovered that I had deployed these manifest files under the default namespace instead of the redis namespace. Once I deployed them under the redis namespace, the issue got resolved. Thanks a lot.
@ankushanpat2732 жыл бұрын
How to connect redis cluster service url with spring boot application ?