Kubernetes/the-hard-way
This article shows how to set up Kubernetes The Hard Way, as originally developed by Kelsey Hightower. I will add my own additions and changes to the process, and the article will be expanded upon continually.
Install the client tools
Note: See here for how to install on other OSes.
In this section, we will install the command line utilities required to complete this tutorial:
- Install CFSSL
The cfssl and cfssljson command line utilities will be used to provision a PKI Infrastructure and generate TLS certificates.
- Download and install cfssl and cfssljson from the cfssl repository:
$ wget -q --show-progress --https-only --timestamping \
    https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \
    https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
$ sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
$ sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
- Verify cfssl version 1.2.0 or higher is installed:
$ cfssl version
Version: 1.2.0
Revision: dev
Runtime: go1.6
Note: The cfssljson command line utility does not provide a way to print its version.
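As a rough sanity check of my own (not part of the upstream tutorial), you can at least confirm the binary landed on the PATH and is executable:
$ command -v cfssljson
/usr/local/bin/cfssljson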
- Install kubectl
The kubectl command line utility is used to interact with the Kubernetes API Server.
- Download and install kubectl from the official release binaries:
$ K8S_VERSION=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/
- Verify kubectl version 1.12.0 or higher is installed:
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Provisioning compute resources
Networking
- Virtual Private Cloud (VPC) network
In this section, a dedicated Virtual Private Cloud (VPC) network will be set up to host the Kubernetes cluster.
- Create the kubernetes-the-hard-way custom VPC network:
$ gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
Created [https://www.googleapis.com/compute/v1/projects/<project-name>/global/networks/kubernetes-the-hard-way].
$ gcloud compute networks list --filter="name~'.*hard.*'"
NAME                     SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
kubernetes-the-hard-way  CUSTOM       REGIONAL
A subnet must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
- Create the kubernetes subnet in the kubernetes-the-hard-way VPC network:
$ gcloud compute networks subnets create kubernetes \
    --network kubernetes-the-hard-way \
    --range 10.240.0.0/24
Created [https://www.googleapis.com/compute/v1/projects/<project-name>/regions/us-west1/subnetworks/kubernetes].
$ gcloud compute networks subnets list --filter="network ~ kubernetes-the-hard-way"
NAME        REGION    NETWORK                  RANGE
kubernetes  us-west1  kubernetes-the-hard-way  10.240.0.0/24
Note: The 10.240.0.0/24 IP address range can host up to 254 compute instances.
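As a quick check of that figure: a /24 leaves 8 host bits, and subtracting the network and broadcast addresses gives 254 usable addresses. Shell arithmetic confirms it:
$ echo $(( 2**(32-24) - 2 ))
254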
- Firewall rules
- Create a firewall rule that allows internal communication across all protocols:
$ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
    --allow tcp,udp,icmp \
    --network kubernetes-the-hard-way \
    --source-ranges 10.240.0.0/24,10.200.0.0/16
- Create a firewall rule that allows external SSH, ICMP, and HTTPS:
$ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
    --allow tcp:22,tcp:6443,icmp \
    --network kubernetes-the-hard-way \
    --source-ranges 0.0.0.0/0
Note: An external load balancer will be used to expose the Kubernetes API Servers to remote clients.
- List the firewall rules in the kubernetes-the-hard-way VPC network:
$ gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
NAME                                    NETWORK                  DIRECTION  PRIORITY  ALLOW                 DENY  DISABLED
kubernetes-the-hard-way-allow-external  kubernetes-the-hard-way  INGRESS    1000      tcp:22,tcp:6443,icmp        False
kubernetes-the-hard-way-allow-internal  kubernetes-the-hard-way  INGRESS    1000      tcp,udp,icmp                False
- Kubernetes public IP address
- Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:
$ gcloud compute addresses create kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region)
- Verify that the kubernetes-the-hard-way static IP address was created in your default compute region:
$ gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
NAME                     ADDRESS/RANGE  TYPE      PURPOSE  NETWORK  REGION    SUBNET  STATUS
kubernetes-the-hard-way  XX.XX.XX.XX    EXTERNAL                    us-west1          RESERVED
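This address is needed again later (for example, when generating certificates and kubeconfig files), so it can be convenient to capture it in a shell variable. A small sketch, assuming your default compute region is configured:
$ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
    --format 'value(address)')
$ echo ${KUBERNETES_PUBLIC_ADDRESS}
XX.XX.XX.XX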
Compute instances
The compute instances will be provisioned using Ubuntu Server 18.04, which has good support for the containerd container runtime. Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
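If you want to confirm that the ubuntu-1804-lts image family is available before creating instances (a quick optional check of my own, not part of the original tutorial), you can resolve the latest image in the family:
$ gcloud compute images describe-from-family ubuntu-1804-lts \
    --project ubuntu-os-cloud --format 'value(name)'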
- Kubernetes Controllers
- Create three compute instances, which will host the Kubernetes control plane:
for i in 0 1 2; do
  gcloud compute instances create controller-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --private-network-ip 10.240.0.1${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,controller
done
- Kubernetes Workers
Each worker instance requires a pod subnet allocation from the Kubernetes cluster CIDR range. The pod subnet allocation will be used to configure container networking in a later section. The pod-cidr instance metadata will be used to expose pod subnet allocations to compute instances at runtime (a sketch of reading this value follows the note below).
Note: The Kubernetes cluster CIDR range is defined by the Controller Manager's --cluster-cidr flag. The cluster CIDR range will be set to 10.200.0.0/16, which supports 254 subnets.
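For illustration, once the worker instances below are running, each one can read its own allocation from the GCE instance metadata server. Run on the worker itself; the attribute name matches the --metadata pod-cidr flag used at creation time:
$ curl -s -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr
10.200.0.0/24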
- Create three compute instances, which will host the Kubernetes worker nodes:
for i in 0 1 2; do
  gcloud compute instances create worker-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --metadata pod-cidr=10.200.${i}.0/24 \
    --private-network-ip 10.240.0.2${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,worker
done
- Verification
- List the compute instances in your default compute zone:
$ gcloud compute instances list --filter="tags:kubernetes-the-hard-way"
NAME          ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
controller-0  us-west1-a  n1-standard-1               10.240.0.10  XX.XX.XX.XX  RUNNING
controller-1  us-west1-a  n1-standard-1               10.240.0.11  XX.XX.XX.XX  RUNNING
controller-2  us-west1-a  n1-standard-1               10.240.0.12  XX.XX.XX.XX  RUNNING
worker-0      us-west1-a  n1-standard-1               10.240.0.20  XX.XX.XX.XX  RUNNING
worker-1      us-west1-a  n1-standard-1               10.240.0.21  XX.XX.XX.XX  RUNNING
worker-2      us-west1-a  n1-standard-1               10.240.0.22  XX.XX.XX.XX  RUNNING
- SSH into the instances:
$ gcloud compute ssh controller-0
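To quickly check SSH access to every instance in one pass (a small convenience loop of my own, not part of the original tutorial):
$ for instance in controller-0 controller-1 controller-2 worker-0 worker-1 worker-2; do
    gcloud compute ssh ${instance} --command "hostname"
  done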
See also
External links
- Kubernetes the Hard Way — on GitHub
- CFSSL — CloudFlare's PKI/TLS toolkit on GitHub