For multi-cluster solutions, I had a similar summary (a minimal sketch of the controller-layer pattern follows below):
1. Controller layer: Karmada, Clusternet, KubeFed (obsolete): a kind of controlled distribution scheduling.
2. Data layer: Clusterpedia: only performs data aggregation, giving a better experience for operations monitoring and data retrieval.
3. DevOps layer: Argo CD and Flux CD connect to multiple clusters and use CD releases as an alternative form of distribution, with a similar effect.
4. Infra layer: Cluster API, Kubean, and other multi-cluster lifecycle management tools, only for multi-cluster creation.
5. Logical layer: virtual multi-tenancy (vcluster, KubeZoo, etc.) makes users feel they have an independent cluster, but it is actually virtual; it saves resources in some test/development scenarios.
6. Network layer: Submariner, Istio multi-cluster, and other ingress/egress solutions.
This part is my relatively superficial understanding.
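To make the "controlled distribution scheduling" idea of the controller layer concrete, here is a hedged sketch: it builds a Karmada-style PropagationPolicy as a plain Go map and prints it as YAML. The group/version/kind and the cluster and Deployment names are illustrative assumptions, not taken from the talk; check the Karmada docs for the authoritative schema.

```go
package main

import (
	"fmt"

	"sigs.k8s.io/yaml" // Kubernetes-style YAML marshalling; assumed available as a module
)

func main() {
	// Illustrative only: a Karmada-style PropagationPolicy asking the control
	// plane to schedule an existing Deployment onto two member clusters.
	policy := map[string]interface{}{
		"apiVersion": "policy.karmada.io/v1alpha1",
		"kind":       "PropagationPolicy",
		"metadata":   map[string]interface{}{"name": "nginx-propagation"},
		"spec": map[string]interface{}{
			"resourceSelectors": []map[string]interface{}{
				{"apiVersion": "apps/v1", "kind": "Deployment", "name": "nginx"},
			},
			"placement": map[string]interface{}{
				"clusterAffinity": map[string]interface{}{
					"clusterNames": []string{"member1", "member2"},
				},
			},
		},
	}

	out, err := yaml.Marshal(policy)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```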
@MangirdasJudeikis (8 months ago)
Before K8s, everybody built their own "application farms" and created their own ways to run their apps. Meaning if you hired somebody who supported one "application farm", they would not be able to support your farm on day 1, as every one of them was different. Kubernetes aligned this. Now we see the same happening with platforms: everybody is building platforms on top of k8s, and all of them use different tools and look different. They achieve the same goals, but are different.
@knaledge6854 (8 months ago)
How might KCP (sandbox status) and BACK-stack co-exist and benefit one another?
@mangirdasjudeikis8799 (8 months ago)
They are not competing in any way. All BACK-stack components operate in a single-cluster context, in the sense that you run them in multiple clusters and/or have control clusters, so in a way it is a fleet-management stack. KCP, on the other hand, is intended to be a single, horizontally scaled API. There is no limitation why somebody should not be able to put all the BACK-stack components on top of KCP, creating a unified single API to do all the management.
@knaledge6854 (8 months ago)
@@mangirdasjudeikis8799 I appreciate this perspective! Do you feel that BACK-stack and KCP have a natural partnership in that way? Peanut butter and chocolate, jam and toast, etc. I ask because it does seem like there is an opportunity to blend each to make both stronger. How would you approach the very first technical step in combining both efforts? What would be the first material win/outcome that would see both changed/improved as a result?
@mangirdasjudeikis8799 (8 months ago)
@@knaledge6854 As mentioned, KCP is a framework for building platforms, so I suspect somebody needs to build an opinionated platform from it first :D
@barefeg (8 months ago)
Why can’t different versions of CRDs be installed in the same cluster? The resources are all versioned, similar to how k8s updates APIs over several releases.
@mangirdasjudeikis8799 (8 months ago)
It's more a question for the operator authors. You can, but the community does not do this. Usually with an operator upgrade you are forced to upgrade the CRDs. So while the intentions of the API were good, the community didn't build things as intended (the same operator supporting multiple versions), and we get to the point where the lower layer of the stack dictates the pattern.
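For context on the question above: Kubernetes does let a single CRD object declare several served versions, with exactly one marked as the storage version and a conversion strategy bridging them. A minimal sketch using the apiextensions v1 Go types follows; the group, kind, and version names are invented for illustration, and per-version OpenAPI schemas are omitted for brevity (a real CRD needs them).

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// One CRD object, two served versions: only one is the storage version,
	// and a conversion strategy (here "None") bridges between them.
	crd := apiextensionsv1.CustomResourceDefinition{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "apiextensions.k8s.io/v1",
			Kind:       "CustomResourceDefinition",
		},
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Scope: apiextensionsv1.NamespaceScoped,
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1alpha1", Served: true, Storage: false}, // older API, still served
				{Name: "v1", Served: true, Storage: true},        // current storage version
			},
			Conversion: &apiextensionsv1.CustomResourceConversion{
				Strategy: apiextensionsv1.NoneConverter,
			},
		},
	}

	out, err := yaml.Marshal(crd)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```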
@barefeg (8 months ago)
@@mangirdasjudeikis8799 By that logic, a simple solution would be a tool that merges both CRDs with their different versions, plus a router controller that delegates to the right controller version given the resource's version.
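A hedged sketch of that "router controller" idea: reconcilers are keyed by apiVersion, and each object is dispatched to the handler registered for its version. The group, versions, and handler names here are hypothetical, invented purely to illustrate the dispatch pattern, not taken from any real project.

```go
package main

import "fmt"

// reconcileFunc is a stand-in for a version-specific controller's reconcile loop.
type reconcileFunc func(name string) error

// versionRouter delegates each resource to the controller registered for its apiVersion.
type versionRouter struct {
	handlers map[string]reconcileFunc // keyed by apiVersion, e.g. "example.com/v1"
}

func (r *versionRouter) Reconcile(apiVersion, name string) error {
	h, ok := r.handlers[apiVersion]
	if !ok {
		return fmt.Errorf("no controller registered for %s", apiVersion)
	}
	return h(name)
}

func main() {
	router := &versionRouter{handlers: map[string]reconcileFunc{
		"example.com/v1alpha1": func(n string) error { fmt.Println("old controller handles", n); return nil },
		"example.com/v1":       func(n string) error { fmt.Println("new controller handles", n); return nil },
	}}

	// A resource stored at v1 goes to the new controller; v1alpha1 objects go to the old one.
	_ = router.Reconcile("example.com/v1", "widget-a")
	_ = router.Reconcile("example.com/v1alpha1", "widget-b")
}
```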
@MangirdasJudeikis (8 months ago)
@@barefeg That solves only one problem; there are many other problems :) We are trying to take a bit more of a holistic view.
@narunaskapocius898 (8 months ago)
Great talk! In some ways KCP reminds me of Teleport, but on steroids. It not only allows managing access but the platform itself: deployments, etc.
@MangirdasJudeikis (8 months ago)
We're getting there :D
@andrestorres7343 (8 months ago)
why would you need multiple clusters?
@TheCardil (7 months ago)
They said it in the beginning slides: CRDs are cluster-wide, and various teams may need a different setup for each.