Challenges Managing Multiple Clusters across Multiple Clouds
We often see Development teams working with multiple Kubernetes clusters to deploy workloads into. There may be many reasons why teams choose multiple clusters; the most common are tenancy and reliability. Organizations may want Kubernetes clusters segregated across different functional teams, or they may need a Kubernetes runtime on multiple clouds. A single Kubernetes cluster today does not have the capability to run or stretch across different cloud providers.
On the other hand, each Kubernetes cluster is an island in itself that needs to be managed individually. Each cluster exposes its own API, and each needs its own definition of cluster roles and access management. More importantly, because Kubernetes deals with containers, each cluster needs its own security profile defining how those containers may behave on the host Linux OS. For example: is a running container allowed to access the underlying host's resources, such as the host loopback interface, or to access files with root privileges?
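As a minimal sketch of the kind of per-cluster security decision described above (not Tanzu-specific), the snippet below uses the official Kubernetes Python client to create a pod whose security context denies host network access and root execution; the pod name, image, and namespace are illustrative placeholders.

```python
from kubernetes import client, config

config.load_kube_config()  # authenticate via the current kubeconfig context

# A pod spec that encodes one cluster's answer to the questions above:
# no host networking, no privileged mode, no running as root.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="restricted-demo"),
    spec=client.V1PodSpec(
        host_network=False,  # container cannot reach the host's loopback interface
        containers=[
            client.V1Container(
                name="app",
                image="nginx:1.25",
                security_context=client.V1SecurityContext(
                    privileged=False,                 # no direct host resource access
                    run_as_non_root=True,             # reject containers running as root
                    allow_privilege_escalation=False,
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Because each cluster is managed individually, a policy like this has to be defined and enforced in every cluster, which is exactly the operational burden discussed next.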
For Operations teams, managing multiple Kubernetes clusters means mapping every user into each cluster individually and defining security requirements for each cluster separately (a sketch of this repetition follows below). Development teams, on the other hand, expect a certain level of agility to connect to, provision, and access Kubernetes clusters everywhere.
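The hedged sketch below illustrates that per-cluster toil: because every cluster is its own API endpoint, granting one user read access everywhere takes one RBAC call per cluster. The user identity "dev@example.com" and binding name are placeholders, not from the source; the built-in "view" ClusterRole is standard Kubernetes.

```python
from kubernetes import client, config

# Discover every cluster context in the local kubeconfig.
contexts, _ = config.list_kube_config_contexts()

binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "ClusterRoleBinding",
    "metadata": {"name": "dev-view"},
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": "view",  # built-in read-only role
    },
    "subjects": [{
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "User",
        "name": "dev@example.com",  # placeholder identity
    }],
}

# Each cluster has its own API, so the same binding must be
# created separately against every one of them.
for ctx in contexts:
    api = client.RbacAuthorizationV1Api(
        api_client=config.new_client_from_config(context=ctx["name"])
    )
    api.create_cluster_role_binding(body=binding)
```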
VMware Tanzu Mission Control addresses these concerns for both Operations and Development teams. To learn more, click through to the next chapter.