One feature that draws people to Kubernetes is its ability to scale automatically, and auto-scaling Kubernetes is an essential part of a cloud-native strategy. In addition, you may be dealing with use cases that have advanced Kubernetes scheduling requirements, such as pod affinity, pod anti-affinity, and volume topology awareness. In this video, I'll show you how to automatically scale the compute resources of an Amazon EKS cluster using Karpenter, with a focus on meeting scheduling requirements with Kubernetes features like inter-pod affinity and dynamic provisioning of volumes for your container workloads.
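As a rough sketch of the kind of pod anti-affinity rule covered in the demos — all names here (web-server, app: web) are illustrative placeholders, not taken from the video:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server               # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: never co-locate two pods with the label app=web
          # on the same node, forcing the replicas to spread out. When no
          # node satisfies the rule, Karpenter can provision a new one.
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: web
              topologyKey: kubernetes.io/hostname
      containers:
        - name: web
          image: nginx:1.25
```

Swapping podAntiAffinity for podAffinity (with an appropriate labelSelector) expresses the opposite intent: co-locating pods in the same topology domain.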
#kubernetes #podscheduling #autoscaling
karpenter.sh/
aws.amazon.com/blogs/containe...
aws.amazon.com/blogs/aws/intr...
github.com/aws-ia/terraform-a...
aws.github.io/aws-eks-best-pr...
Timestamps:
00:00 - Introduction
00:06 - Overview
00:47 - Pod affinity and pod anti-affinity in Kubernetes
01:22 - Topology of your Kubernetes cluster
01:41 - The pod affinity rule
01:57 - The pod anti-affinity rule
02:06 - Use cases for pod affinity and pod anti-affinity
03:11 - Demo for scaling Amazon EKS cluster with Karpenter and workloads with pod affinity rules
11:31 - Demo for scaling Amazon EKS cluster with Karpenter and workloads with pod anti-affinity rules
16:23 - Dynamic provisioning of volumes and volume topology awareness in Kubernetes
18:48 - Demo for dynamic provisioning of volumes and volume topology awareness in Kubernetes for a StatefulSet workload
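A minimal sketch of the dynamic provisioning and volume topology awareness pattern from the last demo — the StorageClass and StatefulSet names are hypothetical, though ebs.csi.aws.com is the actual provisioner name of the AWS EBS CSI driver:

```yaml
# WaitForFirstConsumer delays volume binding until a pod is scheduled,
# so the EBS volume is created in the same availability zone as the node
# that Karpenter provisions for the pod.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-topology-aware       # illustrative name
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: data-store               # illustrative name
spec:
  serviceName: data-store
  replicas: 2
  selector:
    matchLabels:
      app: data-store
  template:
    metadata:
      labels:
        app: data-store
    spec:
      containers:
        - name: store
          image: redis:7
          volumeMounts:
            - name: data
              mountPath: /data
  # Each replica gets its own dynamically provisioned PersistentVolumeClaim.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: ebs-topology-aware
        resources:
          requests:
            storage: 10Gi
```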
Connect:
GitHub: github.com/LukeMwila
Twitter: / luke9ine
Medium: / outlier.developer
LinkedIn: / lukonde-mwila-25103345
If you found this video helpful, please like the video and subscribe to the channel!