Kubernetes/the-hard-way
This article shows how to set up Kubernetes The Hard Way, as originally developed by Kelsey Hightower. I will add my own additions and changes to the process (and this will be continually expanded upon).
I will show you how to set up Kubernetes from scratch using Google Cloud Platform (GCP) VMs running Ubuntu 18.04.
I will use the latest version of Kubernetes (as of August 2019):
$ curl -sSL https://dl.k8s.io/release/stable.txt
v1.15.2
Contents
- 1 Install the client tools
- 2 Provisioning compute resources
- 3 Provisioning a CA and Generating TLS Certificates
- 4 Generating Kubernetes Configuration Files for Authentication
- 5 Generating the Data Encryption Config and Key
- 6 Bootstrapping the etcd Cluster
- 7 Bootstrapping the Kubernetes Control Plane
- 8 The Kubernetes Frontend Load Balancer
- 9 Bootstrapping the Kubernetes Worker Nodes
- 10 Configuring kubectl for Remote Access
- 11 Provisioning Pod Network Routes
- 12 Deploying the DNS Cluster Add-on
- 13 See also
- 14 External links
Install the client tools
Note: See the upstream Kubernetes the Hard Way guide (linked under External links below) for how to install the client tools on other OSes.
In this section, we will install the command line utilities required to complete this tutorial:
- Install CFSSL
The cfssl and cfssljson command line utilities will be used to provision a PKI Infrastructure and generate TLS certificates.
- Download and install cfssl and cfssljson from the cfssl repository:
$ wget -q --show-progress --https-only --timestamping \
    https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \
    https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
$ sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
$ sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
- Verify cfssl version 1.2.0 or higher is installed:
$ cfssl version
Version: 1.2.0
Revision: dev
Runtime: go1.6
Note: The cfssljson command line utility does not provide a way to print its version.
- Install kubectl
The kubectl command line utility is used to interact with the Kubernetes API Server.
- Download and install kubectl from the official release binaries:
$ K8S_VERSION=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/
- Verify kubectl version 1.12.0 or higher is installed:
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Provisioning compute resources
Networking
- Virtual Private Cloud Network (VPC)
In this section, a dedicated Virtual Private Cloud (VPC) network will be set up to host the Kubernetes cluster.
- Create the kubernetes-the-hard-way custom VPC network:
$ gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
Created [https://www.googleapis.com/compute/v1/projects/<project-name>/global/networks/kubernetes-the-hard-way].

$ gcloud compute networks list --filter="name~'.*hard.*'"
NAME                     SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
kubernetes-the-hard-way  CUSTOM       REGIONAL
A subnet must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
- Create the kubernetes subnet in the kubernetes-the-hard-way VPC network:
$ gcloud compute networks subnets create kubernetes \
    --network kubernetes-the-hard-way \
    --range 10.240.0.0/24
Created [https://www.googleapis.com/compute/v1/projects/<project-name>/regions/us-west1/subnetworks/kubernetes].

$ gcloud compute networks subnets list --filter="network ~ kubernetes-the-hard-way"
NAME        REGION    NETWORK                  RANGE
kubernetes  us-west1  kubernetes-the-hard-way  10.240.0.0/24
Note: The 10.240.0.0/24 IP address range can host up to 254 compute instances (a quick way to check this figure is shown below).
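If you want to verify that number rather than take it on faith, the usable host count follows from the prefix length; a one-liner in the shell (the prefix value is just the /24 used above):

# usable hosts in a /24: 2^(32-24) minus the network and broadcast addresses
$ PREFIX=24
$ echo $(( 2**(32-PREFIX) - 2 ))
254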
- Firewall rules
- Create a firewall rule that allows internal communication across all protocols:
$ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ --allow tcp,udp,icmp \ --network kubernetes-the-hard-way \ --source-ranges 10.240.0.0/24,10.200.0.0/16
- Create a firewall rule that allows external SSH, ICMP, and HTTPS:
$ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \ --allow tcp:22,tcp:6443,icmp \ --network kubernetes-the-hard-way \ --source-ranges 0.0.0.0/0
Note: An external load balancer will be used to expose the Kubernetes API Servers to remote clients.
- List the firewall rules in the kubernetes-the-hard-way VPC network:
$ gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
NAME                                    NETWORK                  DIRECTION  PRIORITY  ALLOW                 DENY  DISABLED
kubernetes-the-hard-way-allow-external  kubernetes-the-hard-way  INGRESS    1000      tcp:22,tcp:6443,icmp        False
kubernetes-the-hard-way-allow-internal  kubernetes-the-hard-way  INGRESS    1000      tcp,udp,icmp                False
- Kubernetes public IP address
- Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:
$ gcloud compute addresses create kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region)
- Verify that the kubernetes-the-hard-way static IP address was created in your default compute region:
$ gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
NAME                     ADDRESS/RANGE  TYPE      PURPOSE  NETWORK  REGION    SUBNET  STATUS
kubernetes-the-hard-way  XX.XX.XX.XX    EXTERNAL                    us-west1          RESERVED
Compute instances
The compute instances will be provisioned using Ubuntu Server 18.04, which has good support for the containerd container runtime. Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
- Kubernetes Controllers
- Create three compute instances, which will host the Kubernetes control plane:
for i in 0 1 2; do
  gcloud compute instances create controller-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --private-network-ip 10.240.0.1${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,controller
done
- Kubernetes Workers
Each worker instance requires a pod subnet allocation from the Kubernetes cluster CIDR range. The pod subnet allocation will be used to configure container networking further down. The pod-cidr instance metadata will be used to expose pod subnet allocations to compute instances at runtime.
Note: The Kubernetes cluster CIDR range is defined by the Controller Manager's --cluster-cidr flag. The cluster CIDR range will be set to 10.200.0.0/16, which supports 254 subnets (one /24 per node; see the illustration below).
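For illustration, this is how the per-node /24 pod subnets are carved out of that cluster CIDR; these are the same values passed as pod-cidr metadata in the next step:

# 10.200.0.0/16 is split into one /24 per worker node
$ for i in 0 1 2; do echo "worker-${i}: 10.200.${i}.0/24"; done
worker-0: 10.200.0.0/24
worker-1: 10.200.1.0/24
worker-2: 10.200.2.0/24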
- Create three compute instances, which will host the Kubernetes worker nodes:
for i in 0 1 2; do
  gcloud compute instances create worker-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --metadata pod-cidr=10.200.${i}.0/24 \
    --private-network-ip 10.240.0.2${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,worker
done
- Verification
- List the compute instances in your default compute zone:
$ gcloud compute instances list --filter="tags:kubernetes-the-hard-way"
NAME          ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
controller-0  us-west1-a  n1-standard-1               10.240.0.10  XX.XX.XX.XX  RUNNING
controller-1  us-west1-a  n1-standard-1               10.240.0.11  XX.XX.XX.XX  RUNNING
controller-2  us-west1-a  n1-standard-1               10.240.0.12  XX.XX.XX.XX  RUNNING
worker-0      us-west1-a  n1-standard-1               10.240.0.20  XX.XX.XX.XX  RUNNING
worker-1      us-west1-a  n1-standard-1               10.240.0.21  XX.XX.XX.XX  RUNNING
worker-2      us-west1-a  n1-standard-1               10.240.0.22  XX.XX.XX.XX  RUNNING
- SSH into the instances:
$ gcloud compute ssh controller-0
Provisioning a CA and Generating TLS Certificates
In this section, we will provision a PKI Infrastructure using CloudFlare's PKI toolkit, cfssl (which we installed above), then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.
Certificate Authority
Provision a Certificate Authority that can be used to generate additional TLS certificates.
- Generate the CA configuration file, certificate, and private key:
{ cat > ca-config.json <<EOF { "signing": { "default": { "expiry": "8760h" }, "profiles": { "kubernetes": { "usages": ["signing", "key encipherment", "server auth", "client auth"], "expiry": "8760h" } } } } EOF cat > ca-csr.json <<EOF { "CN": "Kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "US", "L": "Seattle", "O": "Kubernetes", "OU": "CA", "ST": "Washington" } ] } EOF cfssl gencert -initca ca-csr.json | cfssljson -bare ca }
- Results:
ca-key.pem ca.pem
Client and Server Certificates
In this section, we will generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes admin user.
- The Admin Client Certificate
- Generate the admin client certificate and private key:
{ cat > admin-csr.json <<EOF { "CN": "admin", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "US", "L": "Seattle", "O": "system:masters", "OU": "Kubernetes The Hard Way", "ST": "Washington" } ] } EOF cfssl gencert \ -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ admin-csr.json | cfssljson -bare admin }
- Results:
admin-key.pem admin.pem
- The Kubelet Client Certificates
Kubernetes uses a special-purpose authorization mode called "Node Authorizer", which specifically authorizes API requests made by Kubelets. In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the system:nodes group, with a username of system:node:<nodeName>. In this section, we will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.
- Generate a certificate and private key for each Kubernetes worker node:
for instance in worker-0 worker-1 worker-2; do cat > ${instance}-csr.json <<EOF { "CN": "system:node:${instance}", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "US", "L": "Seattle", "O": "system:nodes", "OU": "Kubernetes The Hard Way", "ST": "Washington" } ] } EOF EXTERNAL_IP=$(gcloud compute instances describe ${instance} \ --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') INTERNAL_IP=$(gcloud compute instances describe ${instance} \ --format 'value(networkInterfaces[0].networkIP)') cfssl gencert \ -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \ -profile=kubernetes \ ${instance}-csr.json | cfssljson -bare ${instance} done
- Results:
worker-0-key.pem worker-0.pem worker-1-key.pem worker-1.pem worker-2-key.pem worker-2.pem
- The Controller Manager Client Certificate
- Generate the kube-controller-manager client certificate and private key:
{ cat > kube-controller-manager-csr.json <<EOF { "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "US", "L": "Seattle", "O": "system:kube-controller-manager", "OU": "Kubernetes The Hard Way", "ST": "Washington" } ] } EOF cfssl gencert \ -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager }
- Results:
kube-controller-manager-key.pem kube-controller-manager.pem
- The Kube Proxy Client Certificate
- Generate the kube-proxy client certificate and private key:
{ cat > kube-proxy-csr.json <<EOF { "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "US", "L": "Seattle", "O": "system:node-proxier", "OU": "Kubernetes The Hard Way", "ST": "Washington" } ] } EOF cfssl gencert \ -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ kube-proxy-csr.json | cfssljson -bare kube-proxy }
- Results:
kube-proxy-key.pem kube-proxy.pem
- The Scheduler Client Certificate
- Generate the kube-scheduler client certificate and private key:
{ cat > kube-scheduler-csr.json <<EOF { "CN": "system:kube-scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "US", "L": "Seattle", "O": "system:kube-scheduler", "OU": "Kubernetes The Hard Way", "ST": "Washington" } ] } EOF cfssl gencert \ -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ kube-scheduler-csr.json | cfssljson -bare kube-scheduler }
- Results:
kube-scheduler-key.pem kube-scheduler.pem
- The Kubernetes API Server Certificate
The kubernetes-the-hard-way static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.
- Generate the Kubernetes API Server certificate and private key:
{ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) \ --format 'value(address)') cat > kubernetes-csr.json <<EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "US", "L": "Seattle", "O": "Kubernetes", "OU": "Kubernetes The Hard Way", "ST": "Washington" } ] } EOF cfssl gencert \ -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \ -profile=kubernetes \ kubernetes-csr.json | cfssljson -bare kubernetes }
- Results:
kubernetes-key.pem kubernetes.pem
- The Service Account Key Pair
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the managing service accounts documentation.
- Generate the service-account certificate and private key:
{ cat > service-account-csr.json <<EOF { "CN": "service-accounts", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "US", "L": "Seattle", "O": "Kubernetes", "OU": "Kubernetes The Hard Way", "ST": "Washington" } ] } EOF cfssl gencert \ -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ service-account-csr.json | cfssljson -bare service-account }
- Results:
service-account-key.pem service-account.pem
- Distribute the Client and Server Certificates
- Copy the appropriate certificates and private keys to each worker instance:
for instance in worker-0 worker-1 worker-2; do gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/ done
- Copy the appropriate certificates and private keys to each controller instance:
for instance in controller-0 controller-1 controller-2; do gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \ service-account-key.pem service-account.pem ${instance}:~/ done
Note: The kube-proxy, kube-controller-manager, kube-scheduler, and kubelet client certificates will be used to generate client authentication configuration files in the next section.
Generating Kubernetes Configuration Files for Authentication
In this section, we will generate Kubernetes configuration files, also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.
Client Authentication Configs
In this section, we will generate kubeconfig files for the controller manager, kubelet, kube-proxy, and scheduler clients and the admin user.
- Kubernetes Public IP Address
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
- Retrieve the kubernetes-the-hard-way static IP address:
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) \ --format 'value(address)')
- The kubelet Kubernetes Configuration File
When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes Node Authorizer.
- Generate a kubeconfig file for each worker node:
for instance in worker-0 worker-1 worker-2; do kubectl config set-cluster kubernetes-the-hard-way \ --certificate-authority=ca.pem \ --embed-certs=true \ --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \ --kubeconfig=${instance}.kubeconfig kubectl config set-credentials system:node:${instance} \ --client-certificate=${instance}.pem \ --client-key=${instance}-key.pem \ --embed-certs=true \ --kubeconfig=${instance}.kubeconfig kubectl config set-context default \ --cluster=kubernetes-the-hard-way \ --user=system:node:${instance} \ --kubeconfig=${instance}.kubeconfig kubectl config use-context default --kubeconfig=${instance}.kubeconfig done
- Results:
worker-0.kubeconfig worker-1.kubeconfig worker-2.kubeconfig
- The kube-proxy Kubernetes Configuration File
- Generate a kubeconfig file for the kube-proxy service:
{ kubectl config set-cluster kubernetes-the-hard-way \ --certificate-authority=ca.pem \ --embed-certs=true \ --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \ --kubeconfig=kube-proxy.kubeconfig kubectl config set-credentials system:kube-proxy \ --client-certificate=kube-proxy.pem \ --client-key=kube-proxy-key.pem \ --embed-certs=true \ --kubeconfig=kube-proxy.kubeconfig kubectl config set-context default \ --cluster=kubernetes-the-hard-way \ --user=system:kube-proxy \ --kubeconfig=kube-proxy.kubeconfig kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig }
- Results:
kube-proxy.kubeconfig
- The kube-controller-manager Kubernetes Configuration File
- Generate a kubeconfig file for the kube-controller-manager service:
{ kubectl config set-cluster kubernetes-the-hard-way \ --certificate-authority=ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:6443 \ --kubeconfig=kube-controller-manager.kubeconfig kubectl config set-credentials system:kube-controller-manager \ --client-certificate=kube-controller-manager.pem \ --client-key=kube-controller-manager-key.pem \ --embed-certs=true \ --kubeconfig=kube-controller-manager.kubeconfig kubectl config set-context default \ --cluster=kubernetes-the-hard-way \ --user=system:kube-controller-manager \ --kubeconfig=kube-controller-manager.kubeconfig kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig }
- Results:
kube-controller-manager.kubeconfig
- The kube-scheduler Kubernetes Configuration File
- Generate a kubeconfig file for the kube-scheduler service:
{ kubectl config set-cluster kubernetes-the-hard-way \ --certificate-authority=ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:6443 \ --kubeconfig=kube-scheduler.kubeconfig kubectl config set-credentials system:kube-scheduler \ --client-certificate=kube-scheduler.pem \ --client-key=kube-scheduler-key.pem \ --embed-certs=true \ --kubeconfig=kube-scheduler.kubeconfig kubectl config set-context default \ --cluster=kubernetes-the-hard-way \ --user=system:kube-scheduler \ --kubeconfig=kube-scheduler.kubeconfig kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig }
- Results:
kube-scheduler.kubeconfig
- The admin Kubernetes Configuration File
- Generate a kubeconfig file for the admin user:
{ kubectl config set-cluster kubernetes-the-hard-way \ --certificate-authority=ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:6443 \ --kubeconfig=admin.kubeconfig kubectl config set-credentials admin \ --client-certificate=admin.pem \ --client-key=admin-key.pem \ --embed-certs=true \ --kubeconfig=admin.kubeconfig kubectl config set-context default \ --cluster=kubernetes-the-hard-way \ --user=admin \ --kubeconfig=admin.kubeconfig kubectl config use-context default --kubeconfig=admin.kubeconfig }
- Results:
admin.kubeconfig
Distribute the Kubernetes Configuration Files
- Copy the appropriate kubelet and kube-proxy kubeconfig files to each worker instance:
for instance in worker-0 worker-1 worker-2; do gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/ done
- Copy the appropriate kube-controller-manager and kube-scheduler kubeconfig files to each controller instance:
for instance in controller-0 controller-1 controller-2; do gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/ done
Generating the Data Encryption Config and Key
Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to encrypt cluster data at rest.
In this section, we will generate an encryption key and an encryption config suitable for encrypting Kubernetes Secrets.
- Create the encryption-config.yaml encryption config file:
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64 -i -)
      - identity: {}
EOF
- Copy the encryption-config.yaml encryption config file to each controller instance (a way to verify encryption at rest later is sketched after this step):
for instance in controller-0 controller-1 controller-2; do gcloud compute scp encryption-config.yaml ${instance}:~/ done
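Once the control plane and etcd are running (later in this guide), you can check that the aescbc provider is actually being used by creating a secret and inspecting how it is stored in etcd. A sketch, along the lines of the original tutorial's smoke test (run the etcdctl command on one of the controllers):

# from a machine with a working admin kubeconfig: create a test secret
kubectl create secret generic kubernetes-the-hard-way --from-literal="mykey=mydata"

# on a controller: dump the raw etcd value; the data should be prefixed with
# k8s:enc:aescbc:v1:key1, indicating it was encrypted with the aescbc provider
sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  /registry/secrets/default/kubernetes-the-hard-way | hexdump -C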
Bootstrapping the etcd Cluster
Kubernetes components are stateless and store cluster state in etcd. In this section, we will bootstrap a three-node etcd cluster and configure it for high availability and secure remote access.
- Prerequisites
The commands in this section must be run on each controller instance: controller-0, controller-1, and controller-2.
Using tmux, split your shell into three panes (ctrl + b ") and then log into each controller instance:

$ gcloud compute ssh controller-0
ctrl + b o
$ gcloud compute ssh controller-1
ctrl + b o
$ gcloud compute ssh controller-2

Now, set tmux to run all subsequent commands (unless otherwise stated) in parallel on all three controller instances (in all three tmux panes):

ctrl + b :
set synchronize-panes on   # off
#~OR~
setw synchronize-panes     # toggles on/off
- Download and Install the etcd Binaries
Download the official etcd release binaries from the coreos/etcd GitHub project:
ETCD_VER=v3.3.13 # choose either URL GOOGLE_URL=https://storage.googleapis.com/etcd GITHUB_URL=https://github.com/etcd-io/etcd/releases/download DOWNLOAD_URL=${GOOGLE_URL} wget -q --show-progress --https-only --timestamping \ "${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz"
- Extract and install the etcd server and the etcdctl command line utility:
{ tar -xvf etcd-${ETCD_VER}-linux-amd64.tar.gz sudo mv etcd-${ETCD_VER}-linux-amd64/etcd* /usr/local/bin/ rm -rf etcd-${ETCD_VER}-linux-amd64* }
- Configure the etcd Server
{ sudo mkdir -p /etc/etcd /var/lib/etcd sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/ }
The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers.
- Retrieve the internal IP address for the current compute instance:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \ http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
$ ETCD_NAME=$(hostname -s)
- Create the etcd.service systemd unit file:
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
- Start the etcd Server
{ sudo systemctl daemon-reload sudo systemctl enable etcd sudo systemctl start etcd }
Remember to run the above commands on each controller node: controller-0, controller-1, and controller-2.
- Verification
- List the etcd cluster members:
$ sudo ETCDCTL_API=3 etcdctl member list \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/etcd/ca.pem \
    --cert=/etc/etcd/kubernetes.pem \
    --key=/etc/etcd/kubernetes-key.pem
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
Bootstrapping the Kubernetes Control Plane
In this section, we will bootstrap the Kubernetes control plane across three compute instances and configure it for high availability. We will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.
- Prerequisites
The commands in this section must be run on each controller instance: controller-0, controller-1, and controller-2.
Using tmux, split your shell into three panes (ctrl + b ") and then log into each controller instance:

$ gcloud compute ssh controller-0
ctrl + b o
$ gcloud compute ssh controller-1
ctrl + b o
$ gcloud compute ssh controller-2

Now, set tmux to run all subsequent commands (unless otherwise stated) in parallel on all three controller instances (in all three tmux panes):

ctrl + b :
set synchronize-panes on   # off
#~OR~
setw synchronize-panes     # toggles on/off
Provision the Kubernetes Control Plane
- Create the Kubernetes configuration directory:
$ sudo mkdir -p /etc/kubernetes/config
- Download and Install the Kubernetes Controller Binaries
- Download the official Kubernetes release binaries:
$ K8S_VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt)
$ K8S_URL=https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64
$ wget -q --show-progress --https-only --timestamping \
    "${K8S_URL}/kube-apiserver" \
    "${K8S_URL}/kube-controller-manager" \
    "${K8S_URL}/kube-scheduler" \
    "${K8S_URL}/kubectl"
- Install the Kubernetes binaries:
{ chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/ }
Configure the Kubernetes API Server
{ sudo mkdir -p /var/lib/kubernetes/ sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \ service-account-key.pem service-account.pem \ encryption-config.yaml /var/lib/kubernetes/ sudo chmod 0600 /var/lib/kubernetes/encryption-config.yaml }
The instance internal IP address will be used to advertise the API Server to members of the cluster.
- Retrieve the internal IP address for the current compute instance:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \ http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
- Create the kube-apiserver.service systemd unit file:
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes [Service] ExecStart=/usr/local/bin/kube-apiserver \\ --advertise-address=${INTERNAL_IP} \\ --allow-privileged=true \\ --apiserver-count=3 \\ --audit-log-maxage=30 \\ --audit-log-maxbackup=3 \\ --audit-log-maxsize=100 \\ --audit-log-path=/var/log/audit.log \\ --authorization-mode=Node,RBAC \\ --bind-address=0.0.0.0 \\ --client-ca-file=/var/lib/kubernetes/ca.pem \\ --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\ --etcd-cafile=/var/lib/kubernetes/ca.pem \\ --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\ --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\ --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\ --event-ttl=1h \\ --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\ --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\ --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\ --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\ --kubelet-https=true \\ --runtime-config=api/all \\ --service-account-key-file=/var/lib/kubernetes/service-account.pem \\ --service-cluster-ip-range=10.32.0.0/24 \\ --service-node-port-range=30000-32767 \\ --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\ --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\ --v=2 Restart=on-failure RestartSec=5 [Install] WantedBy=multi-user.target EOF
Configure the Kubernetes Controller Manager
- Move the kube-controller-manager kubeconfig into place:
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
- Create the kube-controller-manager.service systemd unit file:
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service [Unit] Description=Kubernetes Controller Manager Documentation=https://github.com/kubernetes/kubernetes [Service] ExecStart=/usr/local/bin/kube-controller-manager \\ --address=0.0.0.0 \\ --cluster-cidr=10.200.0.0/16 \\ --cluster-name=kubernetes \\ --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\ --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\ --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\ --leader-elect=true \\ --root-ca-file=/var/lib/kubernetes/ca.pem \\ --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\ --service-cluster-ip-range=10.32.0.0/24 \\ --use-service-account-credentials=true \\ --v=2 Restart=on-failure RestartSec=5 [Install] WantedBy=multi-user.target EOF
Configure the Kubernetes Scheduler
- Move the kube-scheduler kubeconfig into place:
$ sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
- Create the kube-scheduler.yaml configuration file:
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
- Create the kube-scheduler.service systemd unit file:
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Start the Controller Services
{ sudo systemctl daemon-reload sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler }
Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
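Rather than sleeping a fixed amount of time, you can poll the API server's health endpoint until it responds. An optional convenience, not part of the original steps:

# wait until the local API server answers on its health endpoint
until curl -s --cacert /var/lib/kubernetes/ca.pem \
    https://127.0.0.1:6443/healthz >/dev/null; do
  sleep 1
done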
Enable HTTP Health Checks
A Google Network Load Balancer will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The network load balancer only supports HTTP health checks, which means the HTTPS endpoint exposed by the API server cannot be used. As a workaround, an Nginx webserver can be used to proxy HTTP health checks. In this section, Nginx will be installed and configured to accept HTTP health checks on port 80 and proxy the connections to the API server on https://127.0.0.1:6443/healthz.
The /healthz API server endpoint does not require authentication by default.
- Install a basic webserver to handle HTTP health checks:
$ sudo apt-get install -y nginx

$ cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF

{
  sudo mv kubernetes.default.svc.cluster.local \
    /etc/nginx/sites-available/kubernetes.default.svc.cluster.local
  sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local \
    /etc/nginx/sites-enabled/
}

$ sudo systemctl restart nginx && sudo systemctl enable nginx
Verification
$ kubectl get componentstatuses --kubeconfig admin.kubeconfig
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
- Test the nginx HTTP health check proxy:
$ curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Sun, 30 Sep 2018 17:44:24 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive

ok
Note: Remember to run the above commands on each controller node: controller-0, controller-1, and controller-2.
RBAC for Kubelet Authorization
In this section, we will configure Role-Based Access Control (RBAC) permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.
Note: We are also setting the Kubelet --authorization-mode flag to Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization.
- SSH into just the controller-0 instance:
$ gcloud compute ssh controller-0
- Create the system:kube-apiserver-to-kubelet ClusterRole with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f - apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRole metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:kube-apiserver-to-kubelet rules: - apiGroups: - "" resources: - nodes/proxy - nodes/stats - nodes/log - nodes/spec - nodes/metrics verbs: - "*" EOF
The Kubernetes API Server authenticates to the Kubelet as the kubernetes user using the client certificate as defined by the --kubelet-client-certificate flag.
- Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user:
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f - apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: system:kube-apiserver namespace: "" roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:kube-apiserver-to-kubelet subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: kubernetes EOF
The Kubernetes Frontend Load Balancer
In this section, we will provision an external load balancer to front the Kubernetes API Servers. The kubernetes-the-hard-way static IP address (created above) will be attached to the resulting load balancer.
Note: The compute instances created in this article will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
- Rules for Network Load Balancing
When we create our external load balancer, we need to create an ingress firewall rule for Network Load Balancing, which requires a legacy health check. The source IP ranges for legacy health checks for Network Load Balancing are:
35.191.0.0/16 209.85.152.0/22 209.85.204.0/22
- Provision a Network Load Balancer
- Create the external load balancer network resources:
{ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) \ --format 'value(address)') gcloud compute http-health-checks create kubernetes \ --description "Kubernetes Health Check" \ --host "kubernetes.default.svc.cluster.local" \ --request-path "/healthz" gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \ --network kubernetes-the-hard-way \ --source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \ --allow tcp gcloud compute target-pools create kubernetes-target-pool \ --http-health-check kubernetes gcloud compute target-pools add-instances kubernetes-target-pool \ --instances controller-0,controller-1,controller-2 gcloud compute forwarding-rules create kubernetes-forwarding-rule \ --address ${KUBERNETES_PUBLIC_ADDRESS} \ --ports 6443 \ --region $(gcloud config get-value compute/region) \ --target-pool kubernetes-target-pool }
- Get some basic information on our external load balancer:
$ gcloud compute target-pools list --filter="name:kubernetes-target-pool"
NAME                    REGION    SESSION_AFFINITY  BACKUP  HEALTH_CHECKS
kubernetes-target-pool  us-west1  NONE                      kubernetes
Note: In the GCP API, there is no direct "load balancer" entity; just a collection of components that constitute the load balancer.
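For example, the forwarding rule created above can be inspected on its own; it is just another of the constituent pieces, alongside the target pool and health check listed above:

$ gcloud compute forwarding-rules list --filter="name:kubernetes-forwarding-rule"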
Verification
- Retrieve the kubernetes-the-hard-way static IP address:
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) \ --format 'value(address)')
- Make an HTTP request for the Kubernetes version info:
$ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
{
  "major": "1",
  "minor": "15",
  "gitVersion": "v1.15.2",
  "gitCommit": "f6278300bebbb750328ac16ee6dd3aa7d3549568",
  "gitTreeState": "clean",
  "buildDate": "2019-08-05T09:15:22Z",
  "goVersion": "go1.12.5",
  "compiler": "gc",
  "platform": "linux/amd64"
}
Bootstrapping the Kubernetes Worker Nodes
In this section, we will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: runc, gVisor (runsc), container networking plugins (CNI), containerd, kubelet, and kube-proxy.
- Prerequisites
The commands in this section must be run on each worker instance/node: worker-0, worker-1, and worker-2.
Using tmux, split your shell into three panes (ctrl + b ") and then log into each worker instance:

$ gcloud compute ssh worker-0
ctrl + b o
$ gcloud compute ssh worker-1
ctrl + b o
$ gcloud compute ssh worker-2

Now, set tmux to run all subsequent commands (unless otherwise stated) in parallel on all three worker instances (in all three tmux panes):

ctrl + b :
set synchronize-panes on   # off
#~OR~
setw synchronize-panes     # toggles on/off
Provisioning a Kubernetes Worker Node
- Install the OS dependencies:
{ sudo apt-get update sudo apt-get -y install socat conntrack ipset }
Note: The socat binary enables support for the kubectl port-forward command.
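For reference, once the cluster is fully up this is the kind of command socat makes possible; the pod name here is just a placeholder:

# forward local port 8080 to port 80 of a running pod
kubectl port-forward <pod-name> 8080:80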
- Download and Install Worker Binaries
- Create the installation directories:
$ sudo mkdir -p \ /etc/cni/net.d \ /opt/cni/bin \ /var/lib/kubelet \ /var/lib/kube-proxy \ /var/lib/kubernetes \ /var/run/kubernetes
$ K8S_VERSION=v1.15.2
$ mkdir tar && cd $_
$ wget -q --show-progress --https-only --timestamping \
    "https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.15.0/crictl-v1.15.0-linux-amd64.tar.gz" \
    "https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17" \
    "https://github.com/opencontainers/runc/releases/download/v1.0.0-rc8/runc.amd64" \
    "https://github.com/containernetworking/plugins/releases/download/v0.8.1/cni-plugins-linux-amd64-v0.8.1.tgz" \
    "https://github.com/containerd/containerd/releases/download/v1.2.7/containerd-1.2.7.linux-amd64.tar.gz" \
    "https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/kubectl" \
    "https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/kube-proxy" \
    "https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/kubelet"
- Install the worker binaries:
# the downloaded files:
cni-plugins-linux-amd64-v0.8.1.tgz  containerd-1.2.7.linux-amd64.tar.gz  crictl-v1.15.0-linux-amd64.tar.gz
kube-proxy  kubectl  kubelet  runc.amd64  runsc-50c283b9f56bb7200938d9e207355f05f79f0d17

{
  sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc
  sudo mv runc.amd64 runc
  chmod +x kubectl kube-proxy kubelet runc runsc
  sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
  sudo tar -xvf crictl-v1.15.0-linux-amd64.tar.gz -C /usr/local/bin/
  sudo tar -xvf cni-plugins-linux-amd64-v0.8.1.tgz -C /opt/cni/bin/
  sudo tar -xvf containerd-1.2.7.linux-amd64.tar.gz -C /
  cd $HOME
}
Configure CNI Networking
- Retrieve the Pod CIDR range for the current compute instance:
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \ http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
- Create the bridge network configuration file:
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf { "cniVersion": "0.3.1", "name": "bridge", "type": "bridge", "bridge": "cnio0", "isGateway": true, "ipMasq": true, "ipam": { "type": "host-local", "ranges": [ [{"subnet": "${POD_CIDR}"}] ], "routes": [{"dst": "0.0.0.0/0"}] } } EOF
- Create the loopback network configuration file:
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
  "cniVersion": "0.3.1",
  "type": "loopback"
}
EOF
Configure containerd
- Create the containerd configuration file:
sudo mkdir -p /etc/containerd/
cat << EOF | sudo tee /etc/containerd/config.toml [plugins] [plugins.cri.containerd] snapshotter = "overlayfs" [plugins.cri.containerd.default_runtime] runtime_type = "io.containerd.runtime.v1.linux" runtime_engine = "/usr/local/bin/runc" runtime_root = "" [plugins.cri.containerd.untrusted_workload_runtime] runtime_type = "io.containerd.runtime.v1.linux" runtime_engine = "/usr/local/bin/runsc" runtime_root = "/run/containerd/runsc" [plugins.cri.containerd.gvisor] runtime_type = "io.containerd.runtime.v1.linux" runtime_engine = "/usr/local/bin/runsc" runtime_root = "/run/containerd/runsc" EOF
Note: Untrusted workloads will be run using the gVisor (runsc) runtime.
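To actually schedule a pod onto the runsc runtime, the pod needs the untrusted-workload annotation that the containerd CRI plugin recognizes. A minimal example (the pod name and image are arbitrary), which can be run once the cluster is fully up:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: untrusted
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
    - name: webserver
      image: nginx
EOF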
- Create the containerd.service systemd unit file:
cat << EOF | sudo tee /etc/systemd/system/containerd.service [Unit] Description=containerd container runtime Documentation=https://containerd.io After=network.target [Service] ExecStartPre=/sbin/modprobe overlay ExecStart=/bin/containerd Restart=always RestartSec=5 Delegate=yes KillMode=process OOMScoreAdjust=-999 LimitNOFILE=1048576 LimitNPROC=infinity LimitCORE=infinity [Install] WantedBy=multi-user.target EOF
Configure the Kubelet
{ sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/ sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig sudo mv ca.pem /var/lib/kubernetes/ }
- Create the kubelet-config.yaml configuration file:
cat << EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml kind: KubeletConfiguration apiVersion: kubelet.config.k8s.io/v1beta1 authentication: anonymous: enabled: false webhook: enabled: true x509: clientCAFile: "/var/lib/kubernetes/ca.pem" authorization: mode: Webhook clusterDomain: "cluster.local" clusterDNS: - "10.32.0.10" podCIDR: "${POD_CIDR}" resolvConf: "/run/systemd/resolve/resolv.conf" runtimeRequestTimeout: "15m" tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem" tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem" EOF
Note: The resolvConf configuration is used to avoid loops when using CoreDNS for service discovery on systems running systemd-resolved.
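If you want to see which upstream nameservers the kubelet will actually hand to pods, you can inspect that file directly (optional):

$ cat /run/systemd/resolve/resolv.conf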
- Create the kubelet.service systemd unit file:
cat << EOF | sudo tee /etc/systemd/system/kubelet.service [Unit] Description=Kubernetes Kubelet Documentation=https://github.com/kubernetes/kubernetes After=containerd.service Requires=containerd.service [Service] ExecStart=/usr/local/bin/kubelet \\ --config=/var/lib/kubelet/kubelet-config.yaml \\ --container-runtime=remote \\ --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\ --image-pull-progress-deadline=2m \\ --kubeconfig=/var/lib/kubelet/kubeconfig \\ --network-plugin=cni \\ --register-node=true \\ --v=2 Restart=on-failure RestartSec=5 [Install] WantedBy=multi-user.target EOF
Configure the Kubernetes Proxy
$ sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
- Create the kube-proxy-config.yaml configuration file:
cat << EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml kind: KubeProxyConfiguration apiVersion: kubeproxy.config.k8s.io/v1alpha1 clientConnection: kubeconfig: "/var/lib/kube-proxy/kubeconfig" mode: "iptables" clusterCIDR: "10.200.0.0/16" EOF
- Create the kube-proxy.service systemd unit file:
cat << EOF | sudo tee /etc/systemd/system/kube-proxy.service [Unit] Description=Kubernetes Kube Proxy Documentation=https://github.com/kubernetes/kubernetes [Service] ExecStart=/usr/local/bin/kube-proxy \\ --config=/var/lib/kube-proxy/kube-proxy-config.yaml Restart=on-failure RestartSec=5 [Install] WantedBy=multi-user.target EOF
Start the Worker Services
{ sudo systemctl daemon-reload sudo systemctl enable containerd kubelet kube-proxy sudo systemctl start containerd kubelet kube-proxy }
Note: Remember to run the above commands on each worker node: worker-0, worker-1, and worker-2.
- Check the statuses of the worker services:
$ systemctl status containerd kubelet kube-proxy
Verification
NOTE: The compute instances created in this article will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
- List the registered Kubernetes nodes:
gcloud compute ssh controller-0 \
  --command "kubectl --kubeconfig admin.kubeconfig get nodes -o wide"
NAME      STATUS  ROLES   AGE   VERSION  INTERNAL-IP  EXTERNAL-IP  OS-IMAGE            KERNEL-VERSION   CONTAINER-RUNTIME
worker-0  Ready   <none>  3m3s  v1.15.2  10.240.0.20  <none>       Ubuntu 18.04.2 LTS  4.15.0-1037-gcp  containerd://1.2.7
worker-1  Ready   <none>  3m3s  v1.15.2  10.240.0.21  <none>       Ubuntu 18.04.2 LTS  4.15.0-1037-gcp  containerd://1.2.7
worker-2  Ready   <none>  3m3s  v1.15.2  10.240.0.22  <none>       Ubuntu 18.04.2 LTS  4.15.0-1037-gcp  containerd://1.2.7
Configuring kubectl for Remote Access
In this section, we will generate a kubeconfig file for the kubectl command line utility based on the admin user credentials.
NOTE: Run the commands in this section from the same directory used to generate the admin client certificates.
- The Admin Kubernetes Configuration File
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
WARNING: The following commands will overwrite your current/default kubeconfig (whatever the KUBECONFIG environment variable points to, if it is set; if it is not set, your $HOME/.kube/config file will be overwritten). See the note below for a way to avoid this.
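If you would rather not touch your existing kubeconfig, you can point KUBECONFIG at a separate file before running the commands below (an optional workaround; the filename is arbitrary):

# write the new cluster, user, and context to a standalone file instead of ~/.kube/config
export KUBECONFIG=${PWD}/kubernetes-the-hard-way.kubeconfig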
- Generate a kubeconfig file suitable for authenticating as the admin user:
{ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) \ --format 'value(address)') kubectl config set-cluster kubernetes-the-hard-way \ --certificate-authority=ca.pem \ --embed-certs=true \ --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 kubectl config set-credentials admin \ --client-certificate=admin.pem \ --client-key=admin-key.pem kubectl config set-context kubernetes-the-hard-way \ --cluster=kubernetes-the-hard-way \ --user=admin kubectl config use-context kubernetes-the-hard-way }
- Verification
- Check the health of the remote Kubernetes cluster:
$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
Provisioning Pod Network Routes
Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point, pods can not communicate with other pods running on different nodes due to missing network routes.
In this section, we will create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address.
Note: There are other ways to implement the Kubernetes networking model.
The Routing Table
In this section, we will gather the information required to create routes in the kubernetes-the-hard-way VPC network.
- Print the internal IP address and Pod CIDR range for each worker instance:
for instance in worker-0 worker-1 worker-2; do
  gcloud compute instances describe ${instance} \
    --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
done
10.240.0.20 10.200.0.0/24
10.240.0.21 10.200.1.0/24
10.240.0.22 10.200.2.0/24
Routes
- List the default routes in the kubernetes-the-hard-way VPC network:
$ gcloud compute routes list --filter "network: kubernetes-the-hard-way"
NAME                            NETWORK                  DEST_RANGE     NEXT_HOP                  PRIORITY
default-route-294de28447c4e405  kubernetes-the-hard-way  0.0.0.0/0      default-internet-gateway  1000
default-route-638561d1ca3f4621  kubernetes-the-hard-way  10.240.0.0/24  kubernetes-the-hard-way   1000
- Create network routes for each worker instance:
for i in 0 1 2; do
  gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
    --network kubernetes-the-hard-way \
    --next-hop-address 10.240.0.2${i} \
    --destination-range 10.200.${i}.0/24
done
- List the routes in the kubernetes-the-hard-way VPC network:
$ gcloud compute routes list --filter "network: kubernetes-the-hard-way"
NAME                            NETWORK                  DEST_RANGE     NEXT_HOP                  PRIORITY
default-route-294de28447c4e405  kubernetes-the-hard-way  0.0.0.0/0      default-internet-gateway  1000
default-route-638561d1ca3f4621  kubernetes-the-hard-way  10.240.0.0/24  kubernetes-the-hard-way   1000
kubernetes-route-10-200-0-0-24  kubernetes-the-hard-way  10.200.0.0/24  10.240.0.20               1000
kubernetes-route-10-200-1-0-24  kubernetes-the-hard-way  10.200.1.0/24  10.240.0.21               1000
kubernetes-route-10-200-2-0-24  kubernetes-the-hard-way  10.200.2.0/24  10.240.0.22               1000
Deploying the DNS Cluster Add-on
In this section, we will deploy the DNS add-on which provides DNS based service discovery, backed by CoreDNS, to applications running inside the Kubernetes cluster.
The DNS Cluster Add-on
- Deploy the coredns cluster add-on:
$ kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created
- List the pods created by the kube-dns deployment:
$ kubectl -n kube-system get pods -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-7945fb857d-kpd67   1/1     Running   0          40s
coredns-7945fb857d-rpvwl   1/1     Running   0          40s
Verification
- Create a busybox deployment:
$ kubectl run busybox --image=busybox:1.31.0 --command -- sleep 3600
- List the pod created by the busybox deployment:
$ kubectl get pods -l run=busybox
NAME                       READY   STATUS    RESTARTS   AGE
busybox-57786959c7-xpfxv   1/1     Running   0          16s
- Retrieve the full name of the busybox pod:
$ POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
$ echo $POD_NAME
busybox-57786959c7-xpfxv
- Execute a DNS lookup for the kubernetes service inside the busybox pod:
$ kubectl exec -ti $POD_NAME -- nslookup kubernetes
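If DNS is working, the lookup should resolve the kubernetes service to its cluster IP via the cluster DNS server configured earlier (clusterDNS 10.32.0.10, service IP 10.32.0.1 from the 10.32.0.0/24 service range); expect output roughly along these lines:

Server:    10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local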
See also
External links
- Kubernetes the Hard Way — on GitHub
- CFSSL — CloudFlare's PKI/TLS toolkit on GitHub