Cluster API: Introduction

Cluster API is a Kubernetes project that helps deploy and maintain Kubernetes clusters using Kubernetes-style declarative APIs.

Cluster API can help deploy, configure, and manage Kubernetes clusters across a variety of environments. It offers infrastructure providers that deploy clusters to cloud providers such as Amazon and Google, as well as to on-prem environments like VMware vSphere. Cluster API also has a Docker provider to deploy a Kubernetes cluster on Docker.

Cluster API builds and maintains a Kubernetes cluster on a given infrastructure provider using Kubernetes Custom Resource Definitions (CRDs). The cluster being deployed is called the Target Cluster. Since Cluster API uses Kubernetes CRDs to implement the Target Cluster, it needs a Kubernetes cluster to begin with; this cluster is called the Management Cluster. This may sound counterintuitive, but it is fairly simple in practice: the Management Cluster can initially be an existing Kubernetes cluster or a KIND cluster. Once the Target Cluster is deployed, Cluster API lets you pivot the configuration from the Management Cluster to the Target Cluster.
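If you were starting from scratch outside this lab, a throwaway KIND cluster is a common way to bootstrap a Management Cluster. A minimal sketch (the cluster name is illustrative; the kind CLI and Docker must already be installed):

# Create a local KIND cluster to act as the Management Cluster
kind create cluster --name capi-management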

For this lab, the lab instance already has a kubeadm-based cluster pre-created. We will use this cluster as our Management Cluster together with the Docker infrastructure provider; a similar workflow can be used to deploy a cluster on AWS, GCP, Azure, or vSphere.

For more information on Cluster API, visit https://cluster-api.sigs.k8s.io/user/concepts.html

Prepare Your Lab

Environment Overview

Your lab environment is hosted on a cloud server instance. This instance is accessible by clicking the “My Lab” icon on the left of your Strigo web page. You can access the terminal session using the built-in interface provided by Strigo, or feel free to SSH in directly (instructions will appear in the terminal).

Start Kubernetes Management Cluster

k8s-start
sudo apt install jq --yes
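Before installing anything, it is worth confirming that the Management Cluster is up and that kubectl can reach it:

# Verify the management cluster is reachable and the node is Ready
kubectl cluster-info
kubectl get nodes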

Install Cluster API

Start by installing the core Cluster API components:

kubectl create -f https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.2.7/cluster-api-components.yaml
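A quick sanity check: the core components register a set of CRDs in the cluster.x-k8s.io API group, so listing them confirms the install landed (exact CRD names may vary by release):

# Expect CRDs such as clusters, machines, machinesets, machinedeployments
kubectl get crds | grep cluster.x-k8s.io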

Install the Bootstrap Provider for Cluster API

kubectl create -f https://github.com/kubernetes-sigs/cluster-api-bootstrap-provider-kubeadm/releases/download/v0.1.5/bootstrap-components.yaml
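The bootstrap provider adds its own CRDs under the bootstrap.cluster.x-k8s.io group (the kubeadmconfigs CRD, for example), which can be verified the same way:

kubectl get crds | grep bootstrap.cluster.x-k8s.io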

Install the Infrastructure Provider (using Docker here)

kubectl create -f https://github.com/kubernetes-sigs/cluster-api-provider-docker/releases/download/v0.2.1/provider-components.yaml
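Similarly, the Docker provider registers CRDs under the infrastructure.cluster.x-k8s.io group, and each component runs a controller pod in its own namespace. A quick check (the CRD names below follow the v1alpha2 provider conventions and may differ between releases):

# Expect CRDs such as dockerclusters and dockermachines
kubectl get crds | grep infrastructure.cluster.x-k8s.io

# All controller pods should reach the Running state
kubectl get pods --all-namespaces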

Create a Target Cluster

So far we have deployed the Cluster API and provider components into our Management Cluster. We can now create any number of Target Clusters.

kubectl apply -f https://raw.githubusercontent.com/Boskey/cluster-api-docker/master/docker-cluster.yaml
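For reference, a v1alpha2 cluster definition for the Docker provider looks roughly like the sketch below: a Cluster object that points at a provider-specific DockerCluster object through infrastructureRef. The exact contents of the applied docker-cluster.yaml may differ; the values here are illustrative.

apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: capi-quickstart
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]   # matches Calico's default pod CIDR
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: DockerCluster
    name: capi-quickstart
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: DockerCluster
metadata:
  name: capi-quickstart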

Create Control Plane for the Target Cluster

kubectl apply -f https://raw.githubusercontent.com/Boskey/cluster-api-docker/master/capi-docker-controlplane.yaml 
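Conceptually, this manifest defines a Machine that ties together a bootstrap config (how kubeadm initializes the node) and an infrastructure machine (where the node runs). A rough v1alpha2 sketch, with illustrative names and Kubernetes version; the referenced KubeadmConfig and DockerMachine objects are defined alongside it:

apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: capi-quickstart-controlplane-0
  labels:
    cluster.x-k8s.io/cluster-name: capi-quickstart
    cluster.x-k8s.io/control-plane: "true"
spec:
  version: v1.15.3
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: capi-quickstart-controlplane-0
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: DockerMachine
    name: capi-quickstart-controlplane-0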

Wait a few seconds, then confirm the control plane machine has been provisioned:

kubectl get machine

Fetch the kubeconfig of the newly created Target Cluster:

kubectl --namespace=default get secret/capi-quickstart-kubeconfig -o json \
  | jq -r .data.value \
  | base64 --decode \
  > ./capi-quickstart.kubeconfig
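If jq is not available, kubectl's jsonpath output achieves the same thing:

kubectl --namespace=default get secret capi-quickstart-kubeconfig \
  -o jsonpath='{.data.value}' | base64 --decode > ./capi-quickstart.kubeconfig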

This installs the Target Cluster with a single control plane node, and the kubeconfig for the Target Cluster is written to the file capi-quickstart.kubeconfig.

Run the command below to see the new Target Cluster; you should see a single-node cluster deployed. Note that the node will report NotReady until a CNI is installed in the next section.

kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes

Create Worker Nodes in Target Cluster

We can now start attaching worker nodes to the Target Cluster. Before adding worker nodes, we need to deploy a CNI solution; for this lab we use Calico.

Deploy a CNI for the Cluster

kubectl --kubeconfig=./capi-quickstart.kubeconfig \
  apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
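Once the Calico pods in the Target Cluster's kube-system namespace are Running, the control plane node should flip to Ready:

kubectl --kubeconfig=./capi-quickstart.kubeconfig get pods -n kube-system
kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes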

Deploy the worker nodes. Cluster API defines them with the MachineDeployment CRD (a sketch of such a manifest follows the command below).

kubectl apply -f https://raw.githubusercontent.com/Boskey/cluster-api-docker/master/clusterapi-docker-machinedeployment.yaml 
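A MachineDeployment manages a set of worker Machines much like a Deployment manages Pods: a replica count plus a Machine template that references a bootstrap config template and an infrastructure machine template. A rough v1alpha2 sketch with illustrative names (the applied file may differ):

apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  name: capi-quickstart-worker
  labels:
    cluster.x-k8s.io/cluster-name: capi-quickstart
spec:
  replicas: 1
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: capi-quickstart
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: capi-quickstart
    spec:
      version: v1.15.3
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
          kind: KubeadmConfigTemplate
          name: capi-quickstart-worker
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
        kind: DockerMachineTemplate
        name: capi-quickstart-worker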

Fetch the nodes; we should now see 2 nodes in our Target Cluster (this may take a few minutes to provision).

kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
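Because worker nodes are managed declaratively, growing the cluster is just a matter of raising the replica count on the MachineDeployment. For example (the MachineDeployment name is assumed to match the manifest applied above):

# Run against the Management Cluster, which owns the MachineDeployment
kubectl patch machinedeployment capi-quickstart-worker \
  --type merge -p '{"spec":{"replicas":2}}'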

The same process can be used to deploy clusters on AWS, GCP, or vSphere from the same Management Cluster.

For other providers, check out the providers list in the Cluster API documentation at https://cluster-api.sigs.k8s.io/

Thank you!
