Thank you so much. Your tutorials were helpful indeed. Most importantly, your editing skills are top tier. I'm building my own channel, and you are my inspiration for KZbin. Thank you for everything.
@dillonhansen71 a year ago
Let's go!! 💪💪💪💪
@trevorlichfield2386 3 months ago
Beautiful explanation. The one thing I had questions about, and thought I'd share, is the topology key. I didn't really understand it at first (I'm a beginner and was missing some context), so to elaborate: a good way to explain it (or a better common name for it) would be the node group, i.e. the group of nodes the affinity rule will search for pods within. In the video example, grouping by hostname means searching for matching pods per node, because the hostname is unique for each node. You could also group by a different node label, like "region", so nodes are grouped by their region and your pods are placed on a per-region basis.
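As a concrete sketch of that grouping idea (not taken from the video; the `app: web` label and the anti-affinity rule itself are assumed for illustration):

```yaml
# Hypothetical pod spec fragment: anti-affinity grouped per node, because
# the kubernetes.io/hostname label is unique for each node.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web              # assumed label on the pods to avoid
        topologyKey: kubernetes.io/hostname
```

Swapping the topologyKey for the standard `topology.kubernetes.io/region` node label would instead treat every node in a region as one group, so the rule applies per region rather than per node.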
@lucianolopez1558 8 months ago
Thank you, clear and with an example. Keep it simple!!!
@devcrypted 16 hours ago
Great explanation. I'm just wondering if kind has an option to name nodes? I can't find anything in the documentation.
@danimercado 5 months ago
A genius!!! Greetings from Argentina
@fkangalov a year ago
Love it!
@tiagomedeiros7935 a year ago
Another excellent video.
A year ago
Great explanation. Now I am encouraged to use it more.
@MrKulk a year ago
Amazing! Thanks a ton.
@tekknokrat a year ago
Very valuable explanation. Interesting that I have much more flexibility using a pod affinity than a node affinity. But instead of using pod anti-affinity, in some cases I would stick with using a DaemonSet.
@Shubham__Saroj a year ago
Best explanation 👍🏻👌🏻👌🏻👌🏻
@emilne83 a year ago
Thank you for this great tutorial. To extend this concept: how would you go about scheduling pods with a physical-rack-based anti-affinity rule, assuming the nodes had labels applied for the rack they were located in?
@kallan2255 a month ago
@emilne83 I would recommend topology spread constraints for most circumstances like this, assuming your use case is spreading replicas across different failure domains (node, rack, aisle, zone, etc.). I'd recommend avoiding pod affinity/anti-affinity entirely, as it has a pretty poor implementation that absolutely wrecks cluster autoscalers.
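A sketch of what that could look like for the rack case (the custom node label `rack` and pod label `app: my-app` are both assumed for illustration):

```yaml
# Hypothetical pod-template fragment: spread replicas across racks.
topologySpreadConstraints:
  - maxSkew: 1                        # rack counts may differ by at most 1
    topologyKey: rack                 # assumed node label naming the rack
    whenUnsatisfiable: ScheduleAnyway # preferred, not a hard requirement
    labelSelector:
      matchLabels:
        app: my-app                   # assumed label on the replicas
```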
@harshdave9912 11 months ago
How do you schedule 2 pods on 2 separate nodes that have the same labels?
@manjunadh1 3 months ago
There are many ways... you can use maxSkew.
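For example, a spread constraint like the following (a sketch; the `app: my-app` label shared by both pods is assumed) keeps the per-node replica counts within 1 of each other, which for 2 pods means one per node:

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname # one group per node
    whenUnsatisfiable: DoNotSchedule    # hard requirement: never co-locate
    labelSelector:
      matchLabels:
        app: my-app                     # assumed label on both pods
```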
@kallan2255 a month ago
Topology spread constraints. Although I recommend never using hard requirements for scheduling unless you truly need them (distributed databases, etc.), and sticking to preferred scheduling policies in all other cases.
@kallan2255 a month ago
I don't really understand this one, to be honest. Nodes in a given cluster will always be in the same region; Kubernetes does not work with control planes and worker nodes across geographically separate network boundaries. You're probably going to confuse people with this example.