This article will show how to set up '''Kubernetes The Hard Way''', as originally developed by [https://github.com/kelseyhightower Kelsey Hightower]. I will add my own additions, changes, and alterations to the process (and this will be continually expanded upon).
+ | |||
+ | I will show you how to set up [[Kubernetes]] from scratch using [[Google Cloud Platform]] (GCP) VMs running Ubuntu 18.04. | ||
+ | |||
+ | I will use the latest version of Kubernetes (as of August 2019): | ||
+ | <pre> | ||
+ | $ curl -sSL https://dl.k8s.io/release/stable.txt | ||
+ | v1.15.2 | ||
+ | </pre> | ||
==Install the client tools==

<pre>
{
  sudo mkdir -p /var/lib/kubernetes/

  sudo cp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    encryption-config.yaml /var/lib/kubernetes/

  sudo chmod 0600 /var/lib/kubernetes/encryption-config.yaml
}
</pre>

Note: Remember to run the above commands on each controller node: <code>controller-0</code>, <code>controller-1</code>, and <code>controller-2</code>.
+ | |||
+ | ===RBAC for Kubelet Authorization=== | ||
+ | |||
+ | In this section, we will configure Role-Based Access Control (RBAC) permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods. | ||
+ | |||
+ | Note: We are also setting the Kubelet <code>--authorization-mode</code> flag to <code>Webhook</code>. Webhook mode uses the [https://kubernetes.io/docs/admin/authorization/#checking-api-access SubjectAccessReview] API to determine authorization. | ||
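To make the Webhook flow concrete, here is a sketch (my own illustration, not part of the original guide) of the kind of <code>SubjectAccessReview</code> the API server evaluates when the Kubelet checks an incoming request; the node name below is a hypothetical example:

```shell
# Illustration only: a SubjectAccessReview similar to what the kubelet's webhook
# authorizer submits to the API server. The node name below is hypothetical.
cat > subjectaccessreview-example.yaml <<'EOF'
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: kubernetes
  resourceAttributes:
    group: ""
    resource: nodes
    subresource: proxy
    name: worker-0
    verb: get
EOF
# Confirm the file was written:
grep 'kind:' subjectaccessreview-example.yaml
```

The <code>nodes/proxy</code> subresource here is exactly what the ClusterRole created below grants to the <code>kubernetes</code> user.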
+ | |||
+ | * SSH into ''just'' the <code>controller-0</code> instance: | ||
+ | $ gcloud compute ssh controller-0 | ||
+ | |||
+ | * Create the <code>system:kube-apiserver-to-kubelet</code> [https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole ClusterRole] with permissions to access the Kubelet API and perform most common tasks associated with managing pods: | ||
+ | <pre> | ||
+ | cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f - | ||
+ | apiVersion: rbac.authorization.k8s.io/v1beta1 | ||
+ | kind: ClusterRole | ||
+ | metadata: | ||
+ | annotations: | ||
+ | rbac.authorization.kubernetes.io/autoupdate: "true" | ||
+ | labels: | ||
+ | kubernetes.io/bootstrapping: rbac-defaults | ||
+ | name: system:kube-apiserver-to-kubelet | ||
+ | rules: | ||
+ | - apiGroups: | ||
+ | - "" | ||
+ | resources: | ||
+ | - nodes/proxy | ||
+ | - nodes/stats | ||
+ | - nodes/log | ||
+ | - nodes/spec | ||
+ | - nodes/metrics | ||
+ | verbs: | ||
+ | - "*" | ||
+ | EOF | ||
+ | </pre> | ||
+ | |||
+ | The Kubernetes API Server authenticates to the Kubelet as the <code>kubernetes</code> user using the client certificate as defined by the <code>--kubelet-client-certificate</code> flag. | ||
+ | |||
+ | * Bind the <code>system:kube-apiserver-to-kubelet</code> ClusterRole to the <code>kubernetes</code> user: | ||
+ | <pre> | ||
+ | cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f - | ||
+ | apiVersion: rbac.authorization.k8s.io/v1beta1 | ||
+ | kind: ClusterRoleBinding | ||
+ | metadata: | ||
+ | name: system:kube-apiserver | ||
+ | namespace: "" | ||
+ | roleRef: | ||
+ | apiGroup: rbac.authorization.k8s.io | ||
+ | kind: ClusterRole | ||
+ | name: system:kube-apiserver-to-kubelet | ||
+ | subjects: | ||
+ | - apiGroup: rbac.authorization.k8s.io | ||
+ | kind: User | ||
+ | name: kubernetes | ||
+ | EOF | ||
+ | </pre> | ||
+ | |||
+ | ==The Kubernetes Frontend Load Balancer== | ||
+ | |||
+ | In this section, we will provision an external load balancer to front the Kubernetes API Servers. The <code>kubernetes-the-hard-way</code> static IP address (created above) will be attached to the resulting load balancer. | ||
+ | |||
+ | Note: The compute instances created in this article will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances. | ||
+ | |||
+ | ; Rules for Network Load Balancing | ||
+ | |||
+ | When we create our external load balancer, we need to create an ingress firewall rule for Network Load Balancing, which requires a legacy health check. The source IP ranges for legacy health checks for Network Load Balancing are: | ||
+ | <pre> | ||
+ | 35.191.0.0/16 | ||
+ | 209.85.152.0/22 | ||
+ | 209.85.204.0/22 | ||
+ | </pre> | ||
+ | |||
+ | ; Provision a Network Load Balancer | ||
+ | |||
+ | * Create the external load balancer network resources: | ||
+ | <pre> | ||
+ | { | ||
+ | KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ | ||
+ | --region $(gcloud config get-value compute/region) \ | ||
+ | --format 'value(address)') | ||
+ | |||
+ | gcloud compute http-health-checks create kubernetes \ | ||
+ | --description "Kubernetes Health Check" \ | ||
+ | --host "kubernetes.default.svc.cluster.local" \ | ||
+ | --request-path "/healthz" | ||
+ | |||
+ | gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \ | ||
+ | --network kubernetes-the-hard-way \ | ||
+ | --source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \ | ||
+ | --allow tcp | ||
+ | |||
+ | gcloud compute target-pools create kubernetes-target-pool \ | ||
+ | --http-health-check kubernetes | ||
+ | |||
+ | gcloud compute target-pools add-instances kubernetes-target-pool \ | ||
+ | --instances controller-0,controller-1,controller-2 | ||
+ | |||
+ | gcloud compute forwarding-rules create kubernetes-forwarding-rule \ | ||
+ | --address ${KUBERNETES_PUBLIC_ADDRESS} \ | ||
+ | --ports 6443 \ | ||
+ | --region $(gcloud config get-value compute/region) \ | ||
+ | --target-pool kubernetes-target-pool | ||
+ | } | ||
+ | </pre> | ||
+ | |||
+ | * Get some basic information on our external load balancer: | ||
+ | <pre> | ||
+ | $ gcloud compute target-pools list --filter="name:kubernetes-target-pool" | ||
+ | NAME REGION SESSION_AFFINITY BACKUP HEALTH_CHECKS | ||
+ | kubernetes-target-pool us-west1 NONE kubernetes | ||
+ | </pre> | ||
+ | |||
Note: In the GCP API, there is no direct "load balancer" entity, only a collection of components that together constitute the load balancer.
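For example, each of those components can be inspected separately with read-only queries (the resource names match those created above):

```shell
# List each component of the "load balancer" provisioned above.
gcloud compute http-health-checks list --filter="name:kubernetes"
gcloud compute target-pools list --filter="name:kubernetes-target-pool"
gcloud compute forwarding-rules list --filter="name:kubernetes-forwarding-rule"
```

Deleting the "load balancer" later means deleting each of these components individually, in roughly the reverse order of creation.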
+ | |||
+ | ===Verification=== | ||
+ | |||
+ | * Retrieve the <code>kubernetes-the-hard-way</code> static IP address: | ||
+ | <pre> | ||
+ | KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ | ||
+ | --region $(gcloud config get-value compute/region) \ | ||
+ | --format 'value(address)') | ||
+ | </pre> | ||
+ | |||
+ | * Make an HTTP request for the Kubernetes version info: | ||
+ | <pre> | ||
+ | $ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version | ||
+ | |||
+ | { | ||
+ | "major": "1", | ||
+ | "minor": "15", | ||
+ | "gitVersion": "v1.15.2", | ||
+ | "gitCommit": "f6278300bebbb750328ac16ee6dd3aa7d3549568", | ||
+ | "gitTreeState": "clean", | ||
+ | "buildDate": "2019-08-05T09:15:22Z", | ||
+ | "goVersion": "go1.12.5", | ||
+ | "compiler": "gc", | ||
+ | "platform": "linux/amd64" | ||
+ | } | ||
+ | </pre> | ||
+ | |||
+ | ==Bootstrapping the Kubernetes Worker Nodes== | ||
+ | |||
+ | In this section, we will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: | ||
+ | * [https://github.com/opencontainers/runc runc] | ||
+ | * [https://github.com/google/gvisor gVisor] | ||
+ | * [https://github.com/containernetworking/cni container networking plugins] (CNIs) | ||
+ | * [https://github.com/containerd/containerd containerd] | ||
+ | * [https://kubernetes.io/docs/admin/kubelet kubelet] | ||
+ | * [https://kubernetes.io/docs/concepts/cluster-administration/proxies kube-proxy] | ||
+ | |||
+ | ; Prerequisites | ||
+ | |||
+ | The commands in this section must be run on each worker instance/node: <code>worker-0</code>, <code>worker-1</code>, and <code>worker-2</code>. | ||
+ | |||
+ | Using [[tmux]], split your shell into 3 x panes (<code>ctrl + b "</code>) and then log into each worker instance: | ||
+ | <pre> | ||
+ | $ gcloud compute ssh worker-0 | ||
+ | ctrl + b o | ||
+ | $ gcloud compute ssh worker-1 | ||
+ | ctrl + b o | ||
+ | $ gcloud compute ssh worker-2 | ||
+ | </pre> | ||
+ | |||
+ | Now, set tmux to run all subsequent commands (unless otherwise stated) in parallel on all 3 x controller instances (in all 3 x tmux panes): | ||
+ | <pre> | ||
+ | ctrl + b : | ||
+ | set synchronize-panes on # off | ||
+ | #~OR~ | ||
+ | setw synchronize-panes # toggles on/off | ||
+ | </pre> | ||
+ | |||
+ | ===Provisioning a Kubernetes Worker Node=== | ||
+ | |||
+ | * Install the OS dependencies: | ||
+ | <pre> | ||
+ | { | ||
+ | sudo apt-get update | ||
+ | sudo apt-get -y install socat conntrack ipset | ||
+ | } | ||
+ | </pre> | ||
+ | |||
+ | Note: The <code>socat</code> binary enables support for the <code>kubectl port-forward</code> command. | ||
+ | |||
+ | ; Download and Install Worker Binaries | ||
+ | |||
+ | * Create the installation directories: | ||
+ | <pre> | ||
+ | $ sudo mkdir -p \ | ||
+ | /etc/cni/net.d \ | ||
+ | /opt/cni/bin \ | ||
+ | /var/lib/kubelet \ | ||
+ | /var/lib/kube-proxy \ | ||
+ | /var/lib/kubernetes \ | ||
+ | /var/run/kubernetes | ||
+ | </pre> | ||
+ | |||
+ | <pre> | ||
+ | $ K8S_VERSION=v1.15.2 | ||
+ | $ mkdir tar && cd $_ | ||
+ | $ wget -q --show-progress --https-only --timestamping \ | ||
+ | "https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.15.0/crictl-v1.15.0-linux-amd64.tar.gz" \ | ||
+ | "https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17" \ | ||
+ | "https://github.com/opencontainers/runc/releases/download/v1.0.0-rc8/runc.amd64" \ | ||
+ | "https://github.com/containernetworking/plugins/releases/download/v0.8.1/cni-plugins-linux-amd64-v0.8.1.tgz" \ | ||
+ | "https://github.com/containerd/containerd/releases/download/v1.2.7/containerd-1.2.7.linux-amd64.tar.gz" \ | ||
+ | "https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/kubectl" \ | ||
+ | "https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/kube-proxy" \ | ||
+ | "https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/kubelet" | ||
+ | </pre> | ||
+ | |||
+ | * Install the worker binaries: | ||
+ | <pre> | ||
+ | cni-plugins-linux-amd64-v0.8.1.tgz | ||
+ | containerd-1.2.7.linux-amd64.tar.gz | ||
+ | crictl-v1.15.0-linux-amd64.tar.gz | ||
+ | kube-proxy | ||
+ | kubectl | ||
+ | kubelet | ||
+ | runc.amd64 | ||
+ | runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 | ||
+ | { | ||
+ | sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc | ||
+ | sudo mv runc.amd64 runc | ||
+ | chmod +x kubectl kube-proxy kubelet runc runsc | ||
+ | sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/ | ||
+ | sudo tar -xvf crictl-v1.15.0-linux-amd64.tar.gz -C /usr/local/bin/ | ||
+ | sudo tar -xvf cni-plugins-linux-amd64-v0.8.1.tgz -C /opt/cni/bin/ | ||
+ | sudo tar -xvf containerd-1.2.7.linux-amd64.tar.gz -C / | ||
+ | cd $HOME | ||
+ | } | ||
+ | </pre> | ||
+ | |||
+ | ===Configure CNI Networking=== | ||
+ | |||
+ | * Retrieve the Pod CIDR range for the current compute instance: | ||
+ | <pre> | ||
+ | POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \ | ||
+ | http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr) | ||
+ | </pre> | ||
+ | |||
+ | * Create the <code>bridge</code> network configuration file: | ||
+ | <pre> | ||
+ | cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf | ||
+ | { | ||
+ | "cniVersion": "0.3.1", | ||
+ | "name": "bridge", | ||
+ | "type": "bridge", | ||
+ | "bridge": "cnio0", | ||
+ | "isGateway": true, | ||
+ | "ipMasq": true, | ||
+ | "ipam": { | ||
+ | "type": "host-local", | ||
+ | "ranges": [ | ||
+ | [{"subnet": "${POD_CIDR}"}] | ||
+ | ], | ||
+ | "routes": [{"dst": "0.0.0.0/0"}] | ||
+ | } | ||
+ | } | ||
+ | EOF | ||
+ | </pre> | ||
+ | |||
+ | * Create the <code>loopback</code> network configuration file: | ||
+ | <pre> | ||
+ | cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf | ||
+ | { | ||
+ | "cniVersion": "0.3.1", | ||
+ | "type": "loopback" | ||
+ | } | ||
+ | EOF | ||
+ | </pre> | ||
+ | |||
+ | ===Configure containerd=== | ||
+ | |||
+ | * Create the containerd configuration file: | ||
+ | sudo mkdir -p /etc/containerd/ | ||
+ | <pre> | ||
+ | cat << EOF | sudo tee /etc/containerd/config.toml | ||
+ | [plugins] | ||
+ | [plugins.cri.containerd] | ||
+ | snapshotter = "overlayfs" | ||
+ | [plugins.cri.containerd.default_runtime] | ||
+ | runtime_type = "io.containerd.runtime.v1.linux" | ||
+ | runtime_engine = "/usr/local/bin/runc" | ||
+ | runtime_root = "" | ||
+ | [plugins.cri.containerd.untrusted_workload_runtime] | ||
+ | runtime_type = "io.containerd.runtime.v1.linux" | ||
+ | runtime_engine = "/usr/local/bin/runsc" | ||
+ | runtime_root = "/run/containerd/runsc" | ||
+ | [plugins.cri.containerd.gvisor] | ||
+ | runtime_type = "io.containerd.runtime.v1.linux" | ||
+ | runtime_engine = "/usr/local/bin/runsc" | ||
+ | runtime_root = "/run/containerd/runsc" | ||
+ | EOF | ||
+ | </pre> | ||
+ | Note: Untrusted workloads will be run using the gVisor (runsc) runtime. | ||
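To see how a workload opts into runsc under this config: with containerd 1.2's CRI plugin, a pod is marked untrusted via the <code>io.kubernetes.cri.untrusted-workload</code> annotation. Here is a sketch of such a pod manifest (the pod name and image are my own examples, written to a local file for later use):

```shell
# Sketch: a pod marked as an untrusted workload. The containerd config above
# routes such pods to the gVisor (runsc) runtime via this CRI annotation.
# Pod name and image are illustrative; apply with kubectl once the cluster is up.
cat > untrusted-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: untrusted
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
    - name: webserver
      image: nginx
EOF
grep 'untrusted-workload' untrusted-pod.yaml
```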
+ | |||
+ | * Create the <code>containerd.service</code> [[systemd]] unit file: | ||
+ | <pre> | ||
+ | cat << EOF | sudo tee /etc/systemd/system/containerd.service | ||
+ | [Unit] | ||
+ | Description=containerd container runtime | ||
+ | Documentation=https://containerd.io | ||
+ | After=network.target | ||
+ | |||
+ | [Service] | ||
+ | ExecStartPre=/sbin/modprobe overlay | ||
+ | ExecStart=/bin/containerd | ||
+ | Restart=always | ||
+ | RestartSec=5 | ||
+ | Delegate=yes | ||
+ | KillMode=process | ||
+ | OOMScoreAdjust=-999 | ||
+ | LimitNOFILE=1048576 | ||
+ | LimitNPROC=infinity | ||
+ | LimitCORE=infinity | ||
+ | |||
+ | [Install] | ||
+ | WantedBy=multi-user.target | ||
+ | EOF | ||
+ | </pre> | ||
+ | |||
+ | ===Configure the Kubelet=== | ||
+ | |||
+ | <pre> | ||
+ | { | ||
+ | sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/ | ||
+ | sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig | ||
+ | sudo mv ca.pem /var/lib/kubernetes/ | ||
+ | } | ||
+ | </pre> | ||
+ | |||
+ | * Create the <code>kubelet-config.yaml</code> configuration file: | ||
+ | <pre> | ||
+ | cat << EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml | ||
+ | kind: KubeletConfiguration | ||
+ | apiVersion: kubelet.config.k8s.io/v1beta1 | ||
+ | authentication: | ||
+ | anonymous: | ||
+ | enabled: false | ||
+ | webhook: | ||
+ | enabled: true | ||
+ | x509: | ||
+ | clientCAFile: "/var/lib/kubernetes/ca.pem" | ||
+ | authorization: | ||
+ | mode: Webhook | ||
+ | clusterDomain: "cluster.local" | ||
+ | clusterDNS: | ||
+ | - "10.32.0.10" | ||
+ | podCIDR: "${POD_CIDR}" | ||
+ | resolvConf: "/run/systemd/resolve/resolv.conf" | ||
+ | runtimeRequestTimeout: "15m" | ||
+ | tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem" | ||
+ | tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem" | ||
+ | EOF | ||
+ | </pre> | ||
+ | |||
+ | Note: The <code>resolvConf</code> configuration is used to avoid loops when using CoreDNS for service discovery on systems running <code>systemd-resolved</code>. | ||
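You can see why on the worker itself (assuming Ubuntu 18.04 defaults): <code>/etc/resolv.conf</code> points at systemd-resolved's local stub resolver (127.0.0.53), which would send CoreDNS's forwarded queries back to itself, while <code>/run/systemd/resolve/resolv.conf</code> lists the real upstream nameservers:

```shell
# Where does /etc/resolv.conf actually point? On systemd-resolved hosts it is
# usually a symlink to the stub resolver config (nameserver 127.0.0.53).
readlink -f /etc/resolv.conf
# The file referenced by resolvConf lists the real upstream nameservers:
if [ -f /run/systemd/resolve/resolv.conf ]; then
  grep '^nameserver' /run/systemd/resolve/resolv.conf || true
fi
```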
+ | |||
+ | * Create the <code>kubelet.service</code> systemd unit file: | ||
+ | <pre> | ||
+ | cat << EOF | sudo tee /etc/systemd/system/kubelet.service | ||
+ | [Unit] | ||
+ | Description=Kubernetes Kubelet | ||
+ | Documentation=https://github.com/kubernetes/kubernetes | ||
+ | After=containerd.service | ||
+ | Requires=containerd.service | ||
+ | |||
+ | [Service] | ||
+ | ExecStart=/usr/local/bin/kubelet \\ | ||
+ | --config=/var/lib/kubelet/kubelet-config.yaml \\ | ||
+ | --container-runtime=remote \\ | ||
+ | --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\ | ||
+ | --image-pull-progress-deadline=2m \\ | ||
+ | --kubeconfig=/var/lib/kubelet/kubeconfig \\ | ||
+ | --network-plugin=cni \\ | ||
+ | --register-node=true \\ | ||
+ | --v=2 | ||
+ | Restart=on-failure | ||
+ | RestartSec=5 | ||
+ | |||
+ | [Install] | ||
+ | WantedBy=multi-user.target | ||
+ | EOF | ||
+ | </pre> | ||
+ | |||
+ | ===Configure the Kubernetes Proxy=== | ||
+ | |||
+ | $ sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig | ||
+ | |||
+ | * Create the <code>kube-proxy-config.yaml</code> configuration file: | ||
+ | <pre> | ||
+ | cat << EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml | ||
+ | kind: KubeProxyConfiguration | ||
+ | apiVersion: kubeproxy.config.k8s.io/v1alpha1 | ||
+ | clientConnection: | ||
+ | kubeconfig: "/var/lib/kube-proxy/kubeconfig" | ||
+ | mode: "iptables" | ||
+ | clusterCIDR: "10.200.0.0/16" | ||
+ | EOF | ||
+ | </pre> | ||
+ | |||
+ | * Create the <code>kube-proxy.service</code> systemd unit file: | ||
+ | <pre> | ||
+ | cat << EOF | sudo tee /etc/systemd/system/kube-proxy.service | ||
+ | [Unit] | ||
+ | Description=Kubernetes Kube Proxy | ||
+ | Documentation=https://github.com/kubernetes/kubernetes | ||
+ | |||
+ | [Service] | ||
+ | ExecStart=/usr/local/bin/kube-proxy \\ | ||
+ | --config=/var/lib/kube-proxy/kube-proxy-config.yaml | ||
+ | Restart=on-failure | ||
+ | RestartSec=5 | ||
+ | |||
+ | [Install] | ||
+ | WantedBy=multi-user.target | ||
+ | EOF | ||
+ | </pre> | ||
+ | |||
+ | ===Start the Worker Services=== | ||
+ | |||
+ | <pre> | ||
+ | { | ||
+ | sudo systemctl daemon-reload | ||
+ | sudo systemctl enable containerd kubelet kube-proxy | ||
+ | sudo systemctl start containerd kubelet kube-proxy | ||
+ | } | ||
+ | </pre> | ||
+ | Note: Remember to run the above commands on each worker node: <code>worker-0</code>, <code>worker-1</code>, and <code>worker-2</code>. | ||
+ | |||
+ | * Check the statuses of the worker services: | ||
+ | $ systemctl status containerd kubelet kube-proxy | ||
+ | |||
+ | ===Verification=== | ||
+ | |||
+ | NOTE: The compute instances created in this article will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances. | ||
+ | |||
+ | * List the registered Kubernetes nodes: | ||
+ | <pre> | ||
+ | gcloud compute ssh controller-0 \ | ||
+ | --command "kubectl --kubeconfig admin.kubeconfig get nodes -o wide" | ||
+ | |||
+ | NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME | ||
+ | worker-0 Ready <none> 3m3s v1.15.2 10.240.0.20 <none> Ubuntu 18.04.2 LTS 4.15.0-1037-gcp containerd://1.2.7 | ||
+ | worker-1 Ready <none> 3m3s v1.15.2 10.240.0.21 <none> Ubuntu 18.04.2 LTS 4.15.0-1037-gcp containerd://1.2.7 | ||
+ | worker-2 Ready <none> 3m3s v1.15.2 10.240.0.22 <none> Ubuntu 18.04.2 LTS 4.15.0-1037-gcp containerd://1.2.7 | ||
+ | </pre> | ||
+ | |||
+ | ==Configuring kubectl for Remote Access== | ||
+ | |||
+ | In this section, we will generate a kubeconfig file for the <code>kubectl</code> command line utility based on the <code>admin</code> user credentials. | ||
+ | |||
+ | NOTE: Run the commands in this section from the same directory used to generate the admin client certificates. | ||
+ | |||
+ | ; The Admin Kubernetes Configuration File | ||
+ | |||
+ | Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used. | ||
+ | |||
+ | '''WARNING:''' The following commands will overwrite your current/default kubeconfig (whatever <code>KUBECONFIG</code> environment variable is pointing to, if it is set. If it is not set, then it will overwrite your <code>$HOME/.kube/config</code> file). | ||
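As a precaution (my own addition, not part of the original guide), back up whichever kubeconfig is in effect before running them:

```shell
# Defensive copy of the kubeconfig that the following commands will modify.
KUBECONFIG_FILE="${KUBECONFIG:-$HOME/.kube/config}"
if [ -f "${KUBECONFIG_FILE}" ]; then
  cp "${KUBECONFIG_FILE}" "${KUBECONFIG_FILE}.bak"
  echo "Backed up to ${KUBECONFIG_FILE}.bak"
fi
```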
+ | |||
+ | * Generate a kubeconfig file suitable for authenticating as the <code>admin</code> user: | ||
+ | <pre> | ||
+ | { | ||
+ | KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ | ||
+ | --region $(gcloud config get-value compute/region) \ | ||
+ | --format 'value(address)') | ||
+ | |||
+ | kubectl config set-cluster kubernetes-the-hard-way \ | ||
+ | --certificate-authority=ca.pem \ | ||
+ | --embed-certs=true \ | ||
+ | --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 | ||
+ | |||
+ | kubectl config set-credentials admin \ | ||
+ | --client-certificate=admin.pem \ | ||
+ | --client-key=admin-key.pem | ||
+ | |||
+ | kubectl config set-context kubernetes-the-hard-way \ | ||
+ | --cluster=kubernetes-the-hard-way \ | ||
+ | --user=admin | ||
+ | |||
+ | kubectl config use-context kubernetes-the-hard-way | ||
+ | } | ||
+ | </pre> | ||
+ | |||
+ | ; Verification | ||
+ | |||
+ | * Check the health of the remote Kubernetes cluster: | ||
+ | <pre> | ||
+ | $ kubectl get componentstatuses | ||
+ | |||
+ | NAME STATUS MESSAGE ERROR | ||
+ | scheduler Healthy ok | ||
+ | controller-manager Healthy ok | ||
+ | etcd-1 Healthy {"health":"true"} | ||
+ | etcd-2 Healthy {"health":"true"} | ||
+ | etcd-0 Healthy {"health":"true"} | ||
+ | </pre> | ||
+ | |||
+ | ==Provisioning Pod Network Routes== | ||
+ | |||
+ | Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point, pods can not communicate with other pods running on different nodes due to missing network [https://cloud.google.com/compute/docs/vpc/routes routes]. | ||
+ | |||
+ | In this section, we will create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address. | ||
+ | |||
+ | Note: There are [https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this other ways] to implement the Kubernetes networking model. | ||
+ | |||
+ | ===The Routing Table=== | ||
+ | |||
+ | In this section, we will gather the information required to create routes in the <code>kubernetes-the-hard-way</code> VPC network. | ||
+ | |||
+ | * Print the internal IP address and Pod CIDR range for each worker instance: | ||
+ | <pre> | ||
+ | for instance in worker-0 worker-1 worker-2; do | ||
+ | gcloud compute instances describe ${instance} \ | ||
+ | --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)' | ||
+ | done | ||
+ | |||
+ | 10.240.0.20 10.200.0.0/24 | ||
+ | 10.240.0.21 10.200.1.0/24 | ||
+ | 10.240.0.22 10.200.2.0/24 | ||
+ | </pre> | ||
+ | |||
+ | ===Routes=== | ||
+ | |||
+ | * List the default routes in the <code>kubernetes-the-hard-way</code> VPC network: | ||
+ | <pre> | ||
+ | $ gcloud compute routes list --filter "network: kubernetes-the-hard-way" | ||
+ | NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY | ||
+ | default-route-294de28447c4e405 kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000 | ||
+ | default-route-638561d1ca3f4621 kubernetes-the-hard-way 10.240.0.0/24 kubernetes-the-hard-way 1000 | ||
+ | </pre> | ||
+ | |||
+ | * Create network routes for each worker instance: | ||
+ | <pre> | ||
+ | for i in 0 1 2; do | ||
+ | gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \ | ||
+ | --network kubernetes-the-hard-way \ | ||
+ | --next-hop-address 10.240.0.2${i} \ | ||
+ | --destination-range 10.200.${i}.0/24 | ||
+ | done | ||
+ | </pre> | ||
+ | |||
+ | * List the routes in the <code>kubernetes-the-hard-way</code> VPC network: | ||
+ | <pre> | ||
+ | $ gcloud compute routes list --filter "network: kubernetes-the-hard-way" | ||
+ | |||
+ | NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY | ||
+ | default-route-294de28447c4e405 kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000 | ||
+ | default-route-638561d1ca3f4621 kubernetes-the-hard-way 10.240.0.0/24 kubernetes-the-hard-way 1000 | ||
+ | kubernetes-route-10-200-0-0-24 kubernetes-the-hard-way 10.200.0.0/24 10.240.0.20 1000 | ||
+ | kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0.21 1000 | ||
+ | kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000 | ||
+ | </pre> | ||
+ | |||
+ | ==Deploying the DNS Cluster Add-on== | ||
+ | |||
+ | In this section, we will deploy the [https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ DNS add-on] which provides DNS based service discovery, backed by [https://coredns.io/ CoreDNS], to applications running inside the Kubernetes cluster. | ||
+ | |||
+ | ===The DNS Cluster Add-on=== | ||
+ | |||
+ | * Deploy the <code>coredns</code> cluster add-on: | ||
+ | <pre> | ||
+ | $ kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml | ||
+ | |||
+ | serviceaccount/coredns created | ||
+ | clusterrole.rbac.authorization.k8s.io/system:coredns created | ||
+ | clusterrolebinding.rbac.authorization.k8s.io/system:coredns created | ||
+ | configmap/coredns created | ||
+ | deployment.extensions/coredns created | ||
+ | service/kube-dns created | ||
+ | </pre> | ||
+ | |||
+ | * List the pods created by the <code>kube-dns</code> deployment: | ||
+ | <pre> | ||
+ | $ kubectl -n kube-system get pods -l k8s-app=kube-dns | ||
+ | |||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | coredns-7945fb857d-kpd67 1/1 Running 0 40s | ||
+ | coredns-7945fb857d-rpvwl 1/1 Running 0 40s | ||
+ | </pre> | ||
+ | |||
+ | ===Verification=== | ||
+ | |||
+ | * Create a busybox deployment: | ||
+ | <pre> | ||
+ | $ kubectl run busybox --image=busybox:1.31.0 --command -- sleep 3600 | ||
+ | </pre> | ||
+ | |||
+ | * List the pod created by the <code>busybox</code> deployment: | ||
+ | <pre> | ||
+ | $ kubectl get pods -l run=busybox | ||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | busybox-57786959c7-xpfxv 1/1 Running 0 16s | ||
+ | </pre> | ||
+ | |||
+ | * Retrieve the full name of the <code>busybox</code> pod: | ||
+ | <pre> | ||
+ | $ POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}") | ||
+ | $ echo $POD_NAME | ||
+ | busybox-57786959c7-xpfxv | ||
+ | </pre> | ||
+ | |||
+ | * Execute a DNS lookup for the <code>kubernetes</code> service inside the <code>busybox</code> pod: | ||
+ | <pre> | ||
+ | $ kubectl exec -ti $POD_NAME -- nslookup kubernetes | ||
+ | |||
+ | Server: 10.32.0.10 | ||
+ | Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local | ||
+ | |||
+ | Name: kubernetes | ||
+ | Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local | ||
+ | </pre> | ||
+ | |||
+ | ==Smoke testing== | ||
+ | |||
+ | In this section, we will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly and passes a build verification test (BVT). | ||
+ | |||
+ | ===Data Encryption=== | ||
+ | |||
+ | Goal: Verify the ability to [https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted encrypt secret data at rest]. | ||
+ | |||
+ | * Create a generic secret: | ||
+ | <pre> | ||
+ | $ kubectl create secret generic kubernetes-the-hard-way \ | ||
+ | --from-literal="mykey=mydata" | ||
+ | </pre> | ||
+ | |||
+ | * Print a hexdump of the <code>kubernetes-the-hard-way</code> secret stored in etcd: | ||
+ | <pre> | ||
+ | $ gcloud compute ssh controller-0 \ | ||
+ | --command "sudo ETCDCTL_API=3 etcdctl get \ | ||
+ | --endpoints=https://127.0.0.1:2379 \ | ||
+ | --cacert=/etc/etcd/ca.pem \ | ||
+ | --cert=/etc/etcd/kubernetes.pem \ | ||
+ | --key=/etc/etcd/kubernetes-key.pem\ | ||
+ | /registry/secrets/default/kubernetes-the-hard-way | hexdump -C" | ||
+ | |||
+ | 00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret| | ||
+ | 00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern| | ||
+ | 00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa| | ||
+ | 00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc| | ||
+ | 00000040 3a 76 31 3a 6b 65 79 31 3a a5 d7 cd 20 d1 12 a3 |:v1:key1:... ...| | ||
+ | 00000050 47 13 fe cb ea b1 9f f2 1f 63 7d 1f c4 03 cb 3f |G........c}....?| | ||
+ | 00000060 27 b4 3e 40 9a 32 0b 91 a5 84 bf ee 1c b5 9e ea |'.>@.2..........| | ||
+ | 00000070 4c a3 fb e1 6d 83 18 1e 50 42 0b 2d cb 90 c8 92 |L...m...PB.-....| | ||
+ | 00000080 f5 29 81 2a 01 db 9d 22 1f 67 3b f4 fd a8 76 59 |.).*...".g;...vY| | ||
+ | 00000090 e8 1e e0 a5 91 65 c3 5d 17 0a 32 fc 5e 73 4d 35 |.....e.]..2.^sM5| | ||
+ | 000000a0 69 a1 78 d9 a9 83 6b 53 c3 5e aa c9 e0 c4 72 6a |i.x...kS.^....rj| | ||
+ | 000000b0 26 56 8b 5e fc 34 6b f0 12 1f 5b 0a 70 aa 07 de |&V.^.4k...[.p...| | ||
+ | 000000c0 0a de 76 4b ed be 09 61 2c 43 5e e3 35 0b 43 60 |..vK...a,C^.5.C`| | ||
+ | 000000d0 e3 d5 34 0b 9e 7b 05 67 30 3c 49 a8 33 9c f2 da |..4..{.g0<I.3...| | ||
+ | 000000e0 4f c9 e9 b5 54 31 f8 14 75 0a |O...T1..u.| | ||
+ | 000000ea | ||
+ | |||
+ | #~OR~ | ||
+ | |||
+ | $ gcloud compute ssh controller-0 \ | ||
+ | --command "sudo ETCDCTL_API=3 etcdctl get \ | ||
+ | --endpoints=https://127.0.0.1:2379 \ | ||
+ | --cacert=/etc/etcd/ca.pem \ | ||
+ | --cert=/etc/etcd/kubernetes.pem \ | ||
+ | --key=/etc/etcd/kubernetes-key.pem\ | ||
+ | /registry/secrets/default/kubernetes-the-hard-way -w fields | grep Value" | ||
+ | |||
+ | "Value" : "k8s:enc:aescbc:v1:key1:\xa5..." | ||
+ | </pre> | ||
+ | |||
The etcd key should be prefixed with <code>k8s:enc:aescbc:v1:key1</code>, which indicates that the Advanced Encryption Standard (AES) provider in Cipher Block Chaining (CBC) mode was used to encrypt the data with the <code>key1</code> encryption key. See [[:wikipedia:Block cipher mode of operation]] for details.
+ | |||
+ | ===Deployments=== | ||
+ | |||
+ | In this section, we will verify the ability to create and manage Deployments. | ||
+ | |||
+ | * Create a deployment for an Nginx web server: | ||
+ | <pre> | ||
+ | $ kubectl run nginx --image=nginx | ||
+ | </pre> | ||
+ | |||
+ | * List the pod created by the nginx deployment: | ||
+ | <pre> | ||
+ | $ kubectl get pods -l run=nginx | ||
+ | |||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | nginx-7bb7cd8db5-7nrsw 1/1 Running 0 14s | ||
+ | </pre> | ||
+ | |||
+ | ===Port Forwarding=== | ||
+ | |||
+ | In this section, we will verify the ability to access applications remotely using [https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/ port forwarding]. | ||
+ | |||
+ | * Retrieve the full name of the nginx pod: | ||
+ | <pre> | ||
+ | POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}") | ||
+ | </pre> | ||
+ | |||
+ | * Forward port 8080 on your local machine to port 80 of the nginx pod: | ||
+ | <pre> | ||
+ | $ kubectl port-forward $POD_NAME 8080:80 | ||
+ | |||
+ | Forwarding from 127.0.0.1:8080 -> 80 | ||
+ | Forwarding from [::1]:8080 -> 80 | ||
+ | </pre> | ||
+ | |||
+ | * In a new terminal make an HTTP request using the forwarding address: | ||
+ | <pre> | ||
+ | $ curl --head http://127.0.0.1:8080 | ||
+ | HTTP/1.1 200 OK | ||
+ | Server: nginx/1.17.2 | ||
+ | Date: Fri, 09 Aug 2019 22:44:03 GMT | ||
+ | Content-Type: text/html | ||
+ | Content-Length: 612 | ||
+ | Last-Modified: Tue, 23 Jul 2019 11:45:37 GMT | ||
+ | Connection: keep-alive | ||
+ | ETag: "5d36f361-264" | ||
+ | Accept-Ranges: bytes | ||
+ | </pre> | ||
+ | |||
+ | * Switch back to the previous terminal and stop the port forwarding to the nginx pod: | ||
+ | <pre> | ||
+ | Forwarding from 127.0.0.1:8080 -> 80 | ||
+ | Forwarding from [::1]:8080 -> 80 | ||
+ | Handling connection for 8080 | ||
+ | ^C | ||
+ | </pre> | ||
+ | |||
+ | ===Logs=== | ||
+ | |||
+ | In this section, we will verify the ability to [https://kubernetes.io/docs/concepts/cluster-administration/logging/ retrieve container logs]. | ||
+ | |||
+ | * Print the nginx pod logs: | ||
+ | <pre> | ||
+ | $ kubectl logs $POD_NAME | ||
+ | 127.0.0.1 - - [09/Aug/2019:22:44:03 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.58.0" "-" | ||
+ | </pre> | ||
+ | |||
+ | ===Exec=== | ||
+ | |||
+ | In this section, we will verify the ability to [https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/#running-individual-commands-in-a-container execute commands in a container]. | ||
+ | |||
+ | * Print the nginx version by executing the <code>nginx -v</code> command in the nginx container: | ||
+ | <pre> | ||
+ | $ kubectl exec -ti $POD_NAME -- nginx -v | ||
+ | nginx version: nginx/1.17.2 | ||
+ | </pre> | ||
+ | |||
+ | ===Services=== | ||
+ | |||
+ | In this section, we will verify the ability to expose applications using a [https://kubernetes.io/docs/concepts/services-networking/service/ Service]. | ||
+ | |||
+ | * Expose the nginx deployment using a [https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport NodePort] service: | ||
+ | <pre> | ||
+ | $ kubectl expose deployment nginx --port 80 --type NodePort | ||
+ | |||
+ | $ kubectl get svc -l run=nginx | ||
+ | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE | ||
+ | nginx NodePort 10.32.0.112 <none> 80:31119/TCP 23s | ||
+ | </pre> | ||
+ | |||
+ | NOTE: The LoadBalancer service type cannot be used because your cluster is not configured with [https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider cloud provider integration]. Setting up cloud provider integration is out of scope for this article. | ||
+ | |||
+ | * Retrieve the node port assigned to the nginx service: | ||
+ | <pre> | ||
+ | NODE_PORT=$(kubectl get svc nginx \ | ||
+ | --output=jsonpath='{range .spec.ports[0]}{.nodePort}') | ||
+ | |||
+ | $ echo $NODE_PORT | ||
+ | 31119 | ||
+ | </pre> | ||
+ | |||
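NodePort services are assigned a port from the API server's service node port range, which defaults to 30000-32767 (configurable with the kube-apiserver <code>--service-node-port-range</code> flag); the 31119 above falls inside it. A quick shell sketch of that sanity check, using the port value observed above:

```shell
# Default NodePort allocation range is 30000-32767.
NODE_PORT=31119

if [ "${NODE_PORT}" -ge 30000 ] && [ "${NODE_PORT}" -le 32767 ]; then
  echo "in default NodePort range"
fi
```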
+ | * Create a firewall rule that allows remote access to the nginx node port: | ||
+ | <pre> | ||
+ | $ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \ | ||
+ | --allow=tcp:${NODE_PORT} \ | ||
+ | --network kubernetes-the-hard-way | ||
+ | |||
+ | $ gcloud compute firewall-rules list --filter="name ~ allow-nginx" | ||
+ | NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED | ||
+ | kubernetes-the-hard-way-allow-nginx-service kubernetes-the-hard-way INGRESS 1000 tcp:31119 False | ||
+ | </pre> | ||
+ | |||
+ | * Retrieve the external IP address of a worker instance: | ||
+ | <pre> | ||
+ | EXTERNAL_IP=$(gcloud compute instances describe worker-0 \ | ||
+ | --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') | ||
+ | </pre> | ||
+ | |||
+ | * Make an HTTP request using the external IP address and the nginx node port: | ||
+ | <pre> | ||
+ | $ curl -I http://${EXTERNAL_IP}:${NODE_PORT} | ||
+ | HTTP/1.1 200 OK | ||
+ | Server: nginx/1.17.2 | ||
+ | Date: Fri, 09 Aug 2019 23:15:01 GMT | ||
+ | Content-Type: text/html | ||
+ | Content-Length: 612 | ||
+ | Last-Modified: Tue, 23 Jul 2019 11:45:37 GMT | ||
+ | Connection: keep-alive | ||
+ | ETag: "5d36f361-264" | ||
+ | Accept-Ranges: bytes | ||
+ | </pre> | ||
+ | |||
+ | ===Untrusted Workloads=== | ||
+ | |||
+ | In this section, we will verify the ability to run untrusted workloads using [https://github.com/google/gvisor gVisor]. | ||
+ | |||
+ | * Create the untrusted pod: | ||
+ | <pre> | ||
+ | cat << EOF | kubectl apply -f - | ||
+ | apiVersion: v1 | ||
+ | kind: Pod | ||
+ | metadata: | ||
+ | name: untrusted | ||
+ | annotations: | ||
+ | io.kubernetes.cri.untrusted-workload: "true" | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: webserver | ||
+ | image: gcr.io/hightowerlabs/helloworld:2.0.0 | ||
+ | EOF | ||
+ | </pre> | ||
+ | |||
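The <code>io.kubernetes.cri.untrusted-workload</code> annotation is specific to the containerd CRI plugin of this era and was later deprecated. In newer Kubernetes versions (1.14+) the same effect is achieved with a RuntimeClass; a hedged sketch follows — the handler name <code>runsc</code> is an assumption that must match the runtime handler configured in containerd:

```yaml
# RuntimeClass-based alternative to the untrusted-workload annotation.
# Assumes containerd is configured with a runtime handler named "runsc" for gVisor.
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: untrusted
spec:
  runtimeClassName: gvisor
  containers:
  - name: webserver
    image: gcr.io/hightowerlabs/helloworld:2.0.0
```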
+ | ; Verification | ||
+ | |||
+ | * Verify the untrusted pod is running under gVisor (runsc) by inspecting the assigned worker node: | ||
+ | <pre> | ||
+ | $ kubectl get pods -o wide | ||
+ | |||
+ | NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES | ||
+ | busybox-57786959c7-5wlf4 1/1 Running 1 61m 10.200.1.4 worker-1 <none> <none> | ||
+ | nginx-7bb7cd8db5-7nrsw 1/1 Running 0 35m 10.200.0.3 worker-0 <none> <none> | ||
+ | untrusted 1/1 Running 0 67s 10.200.1.5 worker-1 <none> <none> | ||
+ | </pre> | ||
+ | |||
+ | * Get the node name where the untrusted pod is running: | ||
+ | <pre> | ||
+ | $ INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}') | ||
+ | |||
+ | $ echo $INSTANCE_NAME | ||
+ | worker-1 | ||
+ | </pre> | ||
+ | |||
+ | * SSH into the worker node: | ||
+ | <pre> | ||
+ | $ gcloud compute ssh ${INSTANCE_NAME} | ||
+ | </pre> | ||
+ | |||
+ | * List the containers running under gVisor: | ||
+ | <pre> | ||
+ | $ sudo runsc --root /run/containerd/runsc/k8s.io list | ||
+ | |||
+ | I0809 23:19:45.762228 18463 x:0] *************************** | ||
+ | I0809 23:19:45.762415 18463 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list] | ||
+ | I0809 23:19:45.762481 18463 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17 | ||
+ | I0809 23:19:45.762535 18463 x:0] PID: 18463 | ||
+ | I0809 23:19:45.762602 18463 x:0] UID: 0, GID: 0 | ||
+ | I0809 23:19:45.762652 18463 x:0] Configuration: | ||
+ | I0809 23:19:45.762695 18463 x:0] RootDir: /run/containerd/runsc/k8s.io | ||
+ | I0809 23:19:45.762791 18463 x:0] Platform: ptrace | ||
+ | I0809 23:19:45.762899 18463 x:0] FileAccess: exclusive, overlay: false | ||
+ | I0809 23:19:45.762996 18463 x:0] Network: sandbox, logging: false | ||
+ | I0809 23:19:45.763093 18463 x:0] Strace: false, max size: 1024, syscalls: [] | ||
+ | I0809 23:19:45.763188 18463 x:0] *************************** | ||
+ | ID PID STATUS BUNDLE CREATED OWNER | ||
+ | db3d2549c91111eb68467e9aba28723743e34965f0cf6a9683206b2c93bba1bd 17834 running /run/containerd/io.containerd.runtime.v1.linux/k8s.io/db3d2549c91111eb68467e9aba28723743e34965f0cf6a9683206b2c93bba1bd 0001-01-01T00:00:00Z | ||
+ | f8dcc505e3e0850da6e33d5fd9e35308951afe8a5ab840a31e19ef93f16dbe7f 17901 running /run/containerd/io.containerd.runtime.v1.linux/k8s.io/f8dcc505e3e0850da6e33d5fd9e35308951afe8a5ab840a31e19ef93f16dbe7f 0001-01-01T00:00:00Z | ||
+ | I0809 23:19:45.766604 18463 x:0] Exiting with status: 0 | ||
+ | </pre> | ||
+ | |||
+ | * Get the ID of the untrusted pod: | ||
+ | <pre> | ||
+ | $ POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \ | ||
+ | pods --name untrusted -q) | ||
+ | |||
+ | $ echo $POD_ID | ||
+ | db3d2549c91111eb68467e9aba28723743e34965f0cf6a9683206b2c93bba1bd | ||
+ | </pre> | ||
+ | |||
+ | * Get the ID of the webserver container running in the untrusted pod: | ||
+ | <pre> | ||
+ | $ CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \ | ||
+ | ps -p ${POD_ID} -q) | ||
+ | |||
+ | $ echo $CONTAINER_ID | ||
+ | f8dcc505e3e0850da6e33d5fd9e35308951afe8a5ab840a31e19ef93f16dbe7f | ||
+ | </pre> | ||
+ | |||
+ | * Use the gVisor runsc command to display the processes running inside the webserver container: | ||
+ | <pre> | ||
+ | $ sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID} | ||
+ | |||
+ | I0809 23:22:26.200992 18720 x:0] *************************** | ||
+ | I0809 23:22:26.201201 18720 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io ps f8dcc505e3e0850da6e33d5fd9e35308951afe8a5ab840a31e19ef93f16dbe7f] | ||
+ | I0809 23:22:26.201275 18720 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17 | ||
+ | I0809 23:22:26.201331 18720 x:0] PID: 18720 | ||
+ | I0809 23:22:26.201388 18720 x:0] UID: 0, GID: 0 | ||
+ | I0809 23:22:26.201449 18720 x:0] Configuration: | ||
+ | I0809 23:22:26.201494 18720 x:0] RootDir: /run/containerd/runsc/k8s.io | ||
+ | I0809 23:22:26.201590 18720 x:0] Platform: ptrace | ||
+ | I0809 23:22:26.201792 18720 x:0] FileAccess: exclusive, overlay: false | ||
+ | I0809 23:22:26.201900 18720 x:0] Network: sandbox, logging: false | ||
+ | I0809 23:22:26.201999 18720 x:0] Strace: false, max size: 1024, syscalls: [] | ||
+ | I0809 23:22:26.202095 18720 x:0] *************************** | ||
+ | UID PID PPID C STIME TIME CMD | ||
+ | 0 1 0 0 23:16 0s app | ||
+ | I0809 23:22:26.203699 18720 x:0] Exiting with status: 0 | ||
+ | </pre> | ||
+ | |||
+ | ==Cleanup== | ||
+ | |||
+ | In this section, we will delete all of the GCP resources created in this article. | ||
+ | |||
+ | ===Compute Instances=== | ||
+ | |||
+ | * Delete the controller and worker compute instances: | ||
+ | <pre> | ||
+ | gcloud -q compute instances delete \ | ||
+ | controller-0 controller-1 controller-2 \ | ||
+ | worker-0 worker-1 worker-2 | ||
+ | </pre> | ||
+ | |||
+ | ===Networking=== | ||
+ | |||
+ | * Delete the external load balancer network resources: | ||
+ | <pre> | ||
+ | { | ||
+ | gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \ | ||
+ | --region $(gcloud config get-value compute/region) | ||
+ | |||
+ | gcloud -q compute target-pools delete kubernetes-target-pool | ||
+ | |||
+ | gcloud -q compute http-health-checks delete kubernetes | ||
+ | |||
+ | gcloud -q compute addresses delete kubernetes-the-hard-way | ||
+ | } | ||
+ | </pre> | ||
+ | |||
+ | * Delete the kubernetes-the-hard-way firewall rules: | ||
+ | <pre> | ||
+ | gcloud -q compute firewall-rules delete \ | ||
+ | kubernetes-the-hard-way-allow-nginx-service \ | ||
+ | kubernetes-the-hard-way-allow-internal \ | ||
+ | kubernetes-the-hard-way-allow-external \ | ||
+ | kubernetes-the-hard-way-allow-health-check | ||
+ | </pre> | ||
+ | |||
+ | * Delete the kubernetes-the-hard-way network VPC: | ||
+ | <pre> | ||
+ | { | ||
+ | gcloud -q compute routes delete \ | ||
+ | kubernetes-route-10-200-0-0-24 \ | ||
+ | kubernetes-route-10-200-1-0-24 \ | ||
+ | kubernetes-route-10-200-2-0-24 | ||
+ | |||
+ | gcloud -q compute networks subnets delete kubernetes | ||
+ | |||
+ | gcloud -q compute networks delete kubernetes-the-hard-way | ||
+ | } | ||
+ | </pre> | ||
==See also== | ==See also== |
Latest revision as of 23:27, 9 August 2019
This article will show how to set up Kubernetes The Hard Way, as originally developed by Kelsey Hightower. I will add my own additions, changes, and alterations to the process (and this will be continually expanded upon).
I will show you how to set up Kubernetes from scratch using Google Cloud Platform (GCP) VMs running Ubuntu 18.04.
I will use the latest version of Kubernetes (as of August 2019):
<pre>
$ curl -sSL https://dl.k8s.io/release/stable.txt
v1.15.2
</pre>
Contents
- 1 Install the client tools
- 2 Provisioning compute resources
- 3 Provisioning a CA and Generating TLS Certificates
- 4 Generating Kubernetes Configuration Files for Authentication
- 5 Generating the Data Encryption Config and Key
- 6 Bootstrapping the etcd Cluster
- 7 Bootstrapping the Kubernetes Control Plane
- 8 The Kubernetes Frontend Load Balancer
- 9 Bootstrapping the Kubernetes Worker Nodes
- 10 Configuring kubectl for Remote Access
- 11 Provisioning Pod Network Routes
- 12 Deploying the DNS Cluster Add-on
- 13 Smoke testing
- 14 Cleanup
- 15 See also
- 16 External links
Install the client tools
Note: See here for how to install on other OSes.
In this section, we will install the command line utilities required to complete this tutorial:
- Install CFSSL
The cfssl and cfssljson command line utilities will be used to provision a PKI Infrastructure and generate TLS certificates.
- Download and install cfssl and cfssljson from the cfssl repository:
<pre>
$ wget -q --show-progress --https-only --timestamping \
    https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \
    https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
$ sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
$ sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
</pre>
- Verify cfssl version 1.2.0 or higher is installed:
<pre>
$ cfssl version
Version: 1.2.0
Revision: dev
Runtime: go1.6
</pre>
Note: The cfssljson command line utility does not provide a way to print its version.
- Install kubectl
The kubectl command line utility is used to interact with the Kubernetes API Server.
- Download and install kubectl from the official release binaries:
<pre>
$ K8S_VERSION=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/
</pre>
- Verify kubectl version 1.12.0 or higher is installed:
<pre>
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
</pre>
Provisioning compute resources
Networking
- Virtual Private Cloud Network (VPC)
In this section, a dedicated Virtual Private Cloud (VPC) network will be set up to host the Kubernetes cluster.
- Create the kubernetes-the-hard-way custom VPC network:
<pre>
$ gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
Created [https://www.googleapis.com/compute/v1/projects/<project-name>/global/networks/kubernetes-the-hard-way].

$ gcloud compute networks list --filter="name~'.*hard.*'"
NAME                     SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
kubernetes-the-hard-way  CUSTOM       REGIONAL
</pre>
A subnet must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
- Create the kubernetes subnet in the kubernetes-the-hard-way VPC network:
<pre>
$ gcloud compute networks subnets create kubernetes \
    --network kubernetes-the-hard-way \
    --range 10.240.0.0/24
Created [https://www.googleapis.com/compute/v1/projects/<project-name>/regions/us-west1/subnetworks/kubernetes].

$ gcloud compute networks subnets list --filter="network ~ kubernetes-the-hard-way"
NAME        REGION    NETWORK                  RANGE
kubernetes  us-west1  kubernetes-the-hard-way  10.240.0.0/24
</pre>
Note: The 10.240.0.0/24 IP address range can host up to 254 compute instances.
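The 254 figure follows from the /24 mask: 2^(32-24) = 256 addresses, minus the network and broadcast addresses. A quick shell check:

```shell
# Usable host addresses in a /24: 2^(32-24), minus network and broadcast addresses.
PREFIX=24
HOSTS=$(( 2 ** (32 - PREFIX) - 2 ))
echo "${HOSTS}"   # prints 254
```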
- Firewall rules
- Create a firewall rule that allows internal communication across all protocols:
<pre>
$ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
    --allow tcp,udp,icmp \
    --network kubernetes-the-hard-way \
    --source-ranges 10.240.0.0/24,10.200.0.0/16
</pre>
- Create a firewall rule that allows external SSH, ICMP, and HTTPS:
<pre>
$ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
    --allow tcp:22,tcp:6443,icmp \
    --network kubernetes-the-hard-way \
    --source-ranges 0.0.0.0/0
</pre>
Note: An external load balancer will be used to expose the Kubernetes API Servers to remote clients.
- List the firewall rules in the kubernetes-the-hard-way VPC network:
<pre>
$ gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
NAME                                    NETWORK                  DIRECTION  PRIORITY  ALLOW                 DENY  DISABLED
kubernetes-the-hard-way-allow-external  kubernetes-the-hard-way  INGRESS    1000      tcp:22,tcp:6443,icmp        False
kubernetes-the-hard-way-allow-internal  kubernetes-the-hard-way  INGRESS    1000      tcp,udp,icmp                False
</pre>
- Kubernetes public IP address
- Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:
<pre>
$ gcloud compute addresses create kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region)
</pre>
- Verify that the kubernetes-the-hard-way static IP address was created in your default compute region:
<pre>
$ gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
NAME                     ADDRESS/RANGE  TYPE      PURPOSE  NETWORK  REGION    SUBNET  STATUS
kubernetes-the-hard-way  XX.XX.XX.XX    EXTERNAL                    us-west1          RESERVED
</pre>
Compute instances
The compute instances will be provisioned using Ubuntu Server 18.04, which has good support for the containerd container runtime. Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
- Kubernetes Controllers
- Create three compute instances, which will host the Kubernetes control plane:
<pre>
for i in 0 1 2; do
  gcloud compute instances create controller-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --private-network-ip 10.240.0.1${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,controller
done
</pre>
- Kubernetes Workers
Each worker instance requires a pod subnet allocation from the Kubernetes cluster CIDR range. The pod subnet allocation will be used to configure container networking further down. The pod-cidr instance metadata will be used to expose pod subnet allocations to compute instances at runtime.
Note: The Kubernetes cluster CIDR range is defined by the Controller Manager's --cluster-cidr flag. The cluster CIDR range will be set to 10.200.0.0/16, which supports 254 subnets.
- Create three compute instances, which will host the Kubernetes worker nodes:
<pre>
for i in 0 1 2; do
  gcloud compute instances create worker-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --metadata pod-cidr=10.200.${i}.0/24 \
    --private-network-ip 10.240.0.2${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,worker
done
</pre>
- Verification
- List the compute instances in your default compute zone:
<pre>
$ gcloud compute instances list --filter="tags:kubernetes-the-hard-way"
NAME          ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
controller-0  us-west1-a  n1-standard-1               10.240.0.10  XX.XX.XX.XX  RUNNING
controller-1  us-west1-a  n1-standard-1               10.240.0.11  XX.XX.XX.XX  RUNNING
controller-2  us-west1-a  n1-standard-1               10.240.0.12  XX.XX.XX.XX  RUNNING
worker-0      us-west1-a  n1-standard-1               10.240.0.20  XX.XX.XX.XX  RUNNING
worker-1      us-west1-a  n1-standard-1               10.240.0.21  XX.XX.XX.XX  RUNNING
worker-2      us-west1-a  n1-standard-1               10.240.0.22  XX.XX.XX.XX  RUNNING
</pre>
- SSH into the instances:
$ gcloud compute ssh controller-0
Provisioning a CA and Generating TLS Certificates
In this section, we will provision a PKI Infrastructure using CloudFlare's PKI toolkit, cfssl (which we installed above), then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.
Certificate Authority
Provision a Certificate Authority that can be used to generate additional TLS certificates.
- Generate the CA configuration file, certificate, and private key:
<pre>
{

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Seattle",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Washington"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

}
</pre>
- Results:
ca-key.pem ca.pem
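The 8760h expiry in ca-config.json is one year (365 × 24 hours). Since a typo in heredoc-generated JSON only surfaces when cfssl runs, it can be worth validating the file first; a sketch, assuming python3 is available on the machine:

```shell
# Recreate the signing config, then sanity-check it: valid JSON, one-year expiry.
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF

# json.tool exits non-zero on malformed JSON.
python3 -m json.tool ca-config.json > /dev/null && echo "valid JSON"
echo "$(( 365 * 24 ))h"   # prints 8760h
```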
Client and Server Certificates
In this section, we will generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes admin user.
- The Admin Client Certificate
- Generate the admin client certificate and private key:
<pre>
{

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Seattle",
      "O": "system:masters",
      "OU": "Kubernetes The Hard Way",
      "ST": "Washington"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin

}
</pre>
- Results:
admin-key.pem admin.pem
- The Kubelet Client Certificates
Kubernetes uses a special-purpose authorization mode called "Node Authorizer", which specifically authorizes API requests made by Kubelets. In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the system:nodes group, with a username of system:node:<nodeName>. In this section, we will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.
- Generate a certificate and private key for each Kubernetes worker node:
<pre>
for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
  "CN": "system:node:${instance}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Seattle",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Washington"
    }
  ]
}
EOF

EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')

INTERNAL_IP=$(gcloud compute instances describe ${instance} \
  --format 'value(networkInterfaces[0].networkIP)')

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
  -profile=kubernetes \
  ${instance}-csr.json | cfssljson -bare ${instance}
done
</pre>
- Results:
worker-0-key.pem worker-0.pem worker-1-key.pem worker-1.pem worker-2-key.pem worker-2.pem
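The -hostname flag above populates each certificate's Subject Alternative Name (SAN) extension, which is what lets clients validate the certificate against the worker's name and IP addresses. As a self-contained illustration (not the cfssl flow), the sketch below creates a throwaway self-signed certificate with the same kind of CN, O, and SAN entries and prints them; it assumes OpenSSL 1.1.1+ for the -addext flag. On a real worker certificate you would just run the final openssl x509 command against worker-0.pem:

```shell
# Illustrative only: a self-signed cert with worker-style subject and SAN entries.
# Requires OpenSSL 1.1.1+ (-addext, -ext). The IP mirrors worker-0's internal address.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo-key.pem -out demo.pem \
  -subj "/C=US/ST=Washington/L=Seattle/O=system:nodes/CN=system:node:worker-0" \
  -addext "subjectAltName=DNS:worker-0,IP:10.240.0.20"

# Print the SAN extension (the same command works on a cfssl-generated worker-0.pem).
openssl x509 -in demo.pem -noout -ext subjectAltName
```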
- The Controller Manager Client Certificate
- Generate the kube-controller-manager client certificate and private key:
<pre>
{

cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Seattle",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes The Hard Way",
      "ST": "Washington"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

}
</pre>
- Results:
kube-controller-manager-key.pem kube-controller-manager.pem
- The Kube Proxy Client Certificate
- Generate the kube-proxy client certificate and private key:
<pre>
{

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Seattle",
      "O": "system:node-proxier",
      "OU": "Kubernetes The Hard Way",
      "ST": "Washington"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy

}
</pre>
- Results:
kube-proxy-key.pem kube-proxy.pem
- The Scheduler Client Certificate
- Generate the kube-scheduler client certificate and private key:
<pre>
{

cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Seattle",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes The Hard Way",
      "ST": "Washington"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler

}
</pre>
- Results:
kube-scheduler-key.pem kube-scheduler.pem
- The Kubernetes API Server Certificate
The kubernetes-the-hard-way static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.
- Generate the Kubernetes API Server certificate and private key:
<pre>
{

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Seattle",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Washington"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes

}
</pre>
- Results:
kubernetes-key.pem kubernetes.pem
- The Service Account Key Pair
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the managing service accounts documentation.
- Generate the service-account certificate and private key:
<pre>
{

cat > service-account-csr.json <<EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Seattle",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Washington"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account

}
</pre>
- Results:
service-account-key.pem service-account.pem
- Distribute the Client and Server Certificates
- Copy the appropriate certificates and private keys to each worker instance:
<pre>
for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
</pre>
- Copy the appropriate certificates and private keys to each controller instance:
<pre>
for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem ${instance}:~/
done
</pre>
Note: The kube-proxy, kube-controller-manager, kube-scheduler, and kubelet client certificates will be used to generate client authentication configuration files in the next section.
Generating Kubernetes Configuration Files for Authentication
In this section, we will generate Kubernetes configuration files, also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.
Client Authentication Configs
In this section, we will generate kubeconfig files for the controller manager, kubelet, kube-proxy, and scheduler clients and the admin user.
- Kubernetes Public IP Address
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability, the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
- Retrieve the kubernetes-the-hard-way static IP address:
<pre>
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')
</pre>
- The kubelet Kubernetes Configuration File
When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes Node Authorizer.
- Generate a kubeconfig file for each worker node:
<pre>
for instance in worker-0 worker-1 worker-2; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig

  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
</pre>
- Results:
worker-0.kubeconfig worker-1.kubeconfig worker-2.kubeconfig
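With --embed-certs=true, each generated kubeconfig is a single self-contained file: the CA certificate, client certificate, and client key are stored inline as base64-encoded data rather than file paths. As an illustrative sketch (field ordering may differ, base64 payloads and the load balancer address are elided placeholders), worker-0.kubeconfig has roughly this shape:

```yaml
# Approximate structure of worker-0.kubeconfig; <base64 ...> placeholders stand
# in for the embedded, base64-encoded PEM data.
apiVersion: v1
kind: Config
clusters:
- name: kubernetes-the-hard-way
  cluster:
    certificate-authority-data: <base64 ca.pem>
    server: https://KUBERNETES_PUBLIC_ADDRESS:6443
users:
- name: system:node:worker-0
  user:
    client-certificate-data: <base64 worker-0.pem>
    client-key-data: <base64 worker-0-key.pem>
contexts:
- name: default
  context:
    cluster: kubernetes-the-hard-way
    user: system:node:worker-0
current-context: default
```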
- The kube-proxy Kubernetes Configuration File
- Generate a kubeconfig file for the kube-proxy service:
<pre>
{
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
</pre>
- Results:
kube-proxy.kubeconfig
- The kube-controller-manager Kubernetes Configuration File
- Generate a kubeconfig file for the kube-controller-manager service:
<pre>
{
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
</pre>
- Results:
kube-controller-manager.kubeconfig
- The kube-scheduler Kubernetes Configuration File
- Generate a kubeconfig file for the kube-scheduler service:
<pre>
{
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
</pre>
- Results:
kube-scheduler.kubeconfig
- The admin Kubernetes Configuration File
- Generate a kubeconfig file for the admin user:
<pre>
{
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=admin \
  --kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig
}
</pre>
- Results:
admin.kubeconfig
Distribute the Kubernetes Configuration Files
- Copy the appropriate kubelet and kube-proxy kubeconfig files to each worker instance:
<pre>
for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done
</pre>
- Copy the appropriate kube-controller-manager and kube-scheduler kubeconfig files to each controller instance:
<pre>
for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done
</pre>
Generating the Data Encryption Config and Key
Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to encrypt cluster data at rest.
In this section, we will generate an encryption key and an encryption config suitable for encrypting Kubernetes Secrets.
- Create the encryption-config.yaml encryption config file:
<pre>
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64 -i -)
      - identity: {}
EOF
</pre>
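The aescbc provider requires a 32-byte key. The head -c 32 /dev/urandom | base64 expression above satisfies that; a quick way to convince yourself is to decode a sample key and count the bytes:

```shell
# An aescbc key must be exactly 32 random bytes, base64-encoded.
SECRET=$(head -c 32 /dev/urandom | base64)
KEY_BYTES=$(printf '%s' "${SECRET}" | base64 -d | wc -c)
echo "${KEY_BYTES}"   # prints 32
```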
- Copy the encryption-config.yaml encryption config file to each controller instance:
<pre>
for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp encryption-config.yaml ${instance}:~/
done
</pre>
Bootstrapping the etcd Cluster
Kubernetes components are stateless and store cluster state in etcd. In this section, we will bootstrap a three-node etcd cluster and configure it for high availability and secure remote access.
- Prerequisites
The commands in this section must be run on each controller instance: controller-0, controller-1, and controller-2.
Using tmux, split your shell into 3 x panes (ctrl + b ") and then log into each controller instance:
$ gcloud compute ssh controller-0
ctrl + b o
$ gcloud compute ssh controller-1
ctrl + b o
$ gcloud compute ssh controller-2
Now, set tmux to run all subsequent commands (unless otherwise stated) in parallel on all 3 x controller instances (in all 3 x tmux panes):
ctrl + b :
set synchronize-panes on  # off
#~OR~
setw synchronize-panes  # toggles on/off
- Download and Install the etcd Binaries
Download the official etcd release binaries from the coreos/etcd GitHub project:
ETCD_VER=v3.3.13

# choose either URL
GOOGLE_URL=https://storage.googleapis.com/etcd
GITHUB_URL=https://github.com/etcd-io/etcd/releases/download
DOWNLOAD_URL=${GOOGLE_URL}

wget -q --show-progress --https-only --timestamping \
  "${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz"
- Extract and install the etcd server and the etcdctl command line utility:
{
  tar -xvf etcd-${ETCD_VER}-linux-amd64.tar.gz
  sudo mv etcd-${ETCD_VER}-linux-amd64/etcd* /usr/local/bin/
  rm -rf etcd-${ETCD_VER}-linux-amd64*
}
- Configure the etcd Server
{
  sudo mkdir -p /etc/etcd /var/lib/etcd
  sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
}
The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers.
- Retrieve the internal IP address for the current compute instance:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
$ ETCD_NAME=$(hostname -s)
- Create the etcd.service systemd unit file:
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
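The long --initial-cluster value must be identical on every member, so it is worth generating rather than hand-typing. A minimal sketch (the associative array is a hypothetical helper, not part of the original guide; the names and IPs are the ones used throughout this article):

```shell
# Hypothetical helper: derive the --initial-cluster value from the
# controller hostname/IP pairs used throughout this guide
declare -A CONTROLLER_IPS=(
  [controller-0]=10.240.0.10
  [controller-1]=10.240.0.11
  [controller-2]=10.240.0.12
)

INITIAL_CLUSTER=""
for name in controller-0 controller-1 controller-2; do
  INITIAL_CLUSTER+="${name}=https://${CONTROLLER_IPS[$name]}:2380,"
done
INITIAL_CLUSTER=${INITIAL_CLUSTER%,}   # strip the trailing comma

echo "${INITIAL_CLUSTER}"
```

The result can then be substituted into the unit file above, guaranteeing all three members agree on the peer list.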
- Start the etcd Server
{
  sudo systemctl daemon-reload
  sudo systemctl enable etcd
  sudo systemctl start etcd
}
Remember to run the above commands on each controller node: controller-0, controller-1, and controller-2.
- Verification
- List the etcd cluster members:
$ sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
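If you want to script against this output (e.g., in a monitoring check), the comma-separated fields parse cleanly with awk. A sketch against a saved sample of the member list above:

```shell
# Sample `etcdctl member list` output (captured from the verification above)
MEMBERS='3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379'

# Print the name of every member whose status is "started"
echo "$MEMBERS" | awk -F', ' '$2 == "started" {print $3}' | sort
```

A healthy three-node cluster should yield all three controller names.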
Bootstrapping the Kubernetes Control Plane
In this section, we will bootstrap the Kubernetes control plane across three compute instances and configure it for high availability. We will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.
- Prerequisites
The commands in this section must be run on each controller instance: controller-0, controller-1, and controller-2.
Using tmux, split your shell into 3 x panes (ctrl + b ") and then log into each controller instance:
$ gcloud compute ssh controller-0
ctrl + b o
$ gcloud compute ssh controller-1
ctrl + b o
$ gcloud compute ssh controller-2
Now, set tmux to run all subsequent commands (unless otherwise stated) in parallel on all 3 x controller instances (in all 3 x tmux panes):
ctrl + b :
set synchronize-panes on  # off
#~OR~
setw synchronize-panes  # toggles on/off
Provision the Kubernetes Control Plane
- Create the Kubernetes configuration directory:
$ sudo mkdir -p /etc/kubernetes/config
- Download and Install the Kubernetes Controller Binaries
- Download the official Kubernetes release binaries:
$ K8S_VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt)
$ K8S_URL=https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64
$ wget -q --show-progress --https-only --timestamping \
  "${K8S_URL}/kube-apiserver" \
  "${K8S_URL}/kube-controller-manager" \
  "${K8S_URL}/kube-scheduler" \
  "${K8S_URL}/kubectl"
- Install the Kubernetes binaries:
{
  chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
  sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
}
Configure the Kubernetes API Server
{
  sudo mkdir -p /var/lib/kubernetes/

  sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    encryption-config.yaml /var/lib/kubernetes/

  sudo chmod 0600 /var/lib/kubernetes/encryption-config.yaml
}
The instance internal IP address will be used to advertise the API Server to members of the cluster.
- Retrieve the internal IP address for the current compute instance:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
- Create the kube-apiserver.service systemd unit file:
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
  --event-ttl=1h \\
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --kubelet-https=true \\
  --runtime-config=api/all \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Configure the Kubernetes Controller Manager
- Move the kube-controller-manager kubeconfig into place:
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
- Create the kube-controller-manager.service systemd unit file:
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Configure the Kubernetes Scheduler
- Move the kube-scheduler kubeconfig into place:
$ sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
- Create the kube-scheduler.yaml configuration file:
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
- Create the kube-scheduler.service systemd unit file:
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Start the Controller Services
{
  sudo systemctl daemon-reload
  sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
  sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
}
Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
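Rather than sleeping a fixed 10 seconds, you can poll the health endpoint until it responds. This small helper function is a hypothetical addition, not part of the original guide:

```shell
# wait_healthy: poll a URL until it returns success, or give up after
# a number of attempts (default 10, one second apart)
wait_healthy() {
  local url=$1 attempts=${2:-10} i
  for i in $(seq 1 "$attempts"); do
    if curl -fsk --max-time 2 "$url" >/dev/null 2>&1; then
      echo "ready"
      return 0
    fi
    sleep 1
  done
  echo "timeout"
  return 1
}

# Usage on a controller node:
#   wait_healthy https://127.0.0.1:6443/healthz
```

The -k flag skips certificate verification, which is acceptable here because the check only runs against the local loopback endpoint.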
Enable HTTP Health Checks
A Google Network Load Balancer will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. However, the network load balancer only supports HTTP health checks, which means the HTTPS endpoint exposed by the API server cannot be used directly. As a workaround, an Nginx webserver can be used to proxy HTTP health checks. In this section, Nginx will be installed and configured to accept HTTP health checks on port 80 and proxy the connections to the API server on https://127.0.0.1:6443/healthz.
The /healthz API server endpoint does not require authentication by default.
- Install a basic webserver to handle HTTP health checks:
$ sudo apt-get install -y nginx

$ cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF

{
  sudo mv kubernetes.default.svc.cluster.local \
    /etc/nginx/sites-available/kubernetes.default.svc.cluster.local

  sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local \
    /etc/nginx/sites-enabled/
}

$ sudo systemctl restart nginx && sudo systemctl enable nginx
Verification
$ kubectl get componentstatuses --kubeconfig admin.kubeconfig
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
- Test the nginx HTTP health check proxy:
$ curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Sun, 30 Sep 2018 17:44:24 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive

ok
Note: Remember to run the above commands on each controller node: controller-0, controller-1, and controller-2.
RBAC for Kubelet Authorization
In this section, we will configure Role-Based Access Control (RBAC) permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.
Note: We are also setting the Kubelet --authorization-mode flag to Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization.
- SSH into just the controller-0 instance:
$ gcloud compute ssh controller-0
- Create the system:kube-apiserver-to-kubelet ClusterRole with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
The Kubernetes API Server authenticates to the Kubelet as the kubernetes user, using the client certificate defined by the --kubelet-client-certificate flag.
- Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user:
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
The Kubernetes Frontend Load Balancer
In this section, we will provision an external load balancer to front the Kubernetes API Servers. The kubernetes-the-hard-way static IP address (created above) will be attached to the resulting load balancer.
Note: The compute instances created in this article will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
- Rules for Network Load Balancing
When we create our external load balancer, we need to create an ingress firewall rule for Network Load Balancing, which requires a legacy health check. The source IP ranges for legacy health checks for Network Load Balancing are:
35.191.0.0/16 209.85.152.0/22 209.85.204.0/22
- Provision a Network Load Balancer
- Create the external load balancer network resources:
{
  KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
    --format 'value(address)')

  gcloud compute http-health-checks create kubernetes \
    --description "Kubernetes Health Check" \
    --host "kubernetes.default.svc.cluster.local" \
    --request-path "/healthz"

  gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
    --network kubernetes-the-hard-way \
    --source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
    --allow tcp

  gcloud compute target-pools create kubernetes-target-pool \
    --http-health-check kubernetes

  gcloud compute target-pools add-instances kubernetes-target-pool \
    --instances controller-0,controller-1,controller-2

  gcloud compute forwarding-rules create kubernetes-forwarding-rule \
    --address ${KUBERNETES_PUBLIC_ADDRESS} \
    --ports 6443 \
    --region $(gcloud config get-value compute/region) \
    --target-pool kubernetes-target-pool
}
- Get some basic information on our external load balancer:
$ gcloud compute target-pools list --filter="name:kubernetes-target-pool"
NAME                    REGION    SESSION_AFFINITY  BACKUP  HEALTH_CHECKS
kubernetes-target-pool  us-west1  NONE                      kubernetes
Note: In the GCP API, there is no direct "load balancer" entity; just a collection of components that constitute the load balancer.
Verification
- Retrieve the kubernetes-the-hard-way static IP address:
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')
- Make an HTTP request for the Kubernetes version info:
$ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
{
  "major": "1",
  "minor": "15",
  "gitVersion": "v1.15.2",
  "gitCommit": "f6278300bebbb750328ac16ee6dd3aa7d3549568",
  "gitTreeState": "clean",
  "buildDate": "2019-08-05T09:15:22Z",
  "goVersion": "go1.12.5",
  "compiler": "gc",
  "platform": "linux/amd64"
}
Bootstrapping the Kubernetes Worker Nodes
In this section, we will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: runc, gVisor (runsc), container networking plugins (CNI), containerd, kubelet, and kube-proxy.
- Prerequisites
The commands in this section must be run on each worker instance/node: worker-0, worker-1, and worker-2.
Using tmux, split your shell into 3 x panes (ctrl + b ") and then log into each worker instance:
$ gcloud compute ssh worker-0
ctrl + b o
$ gcloud compute ssh worker-1
ctrl + b o
$ gcloud compute ssh worker-2
Now, set tmux to run all subsequent commands (unless otherwise stated) in parallel on all 3 x worker instances (in all 3 x tmux panes):
ctrl + b :
set synchronize-panes on  # off
#~OR~
setw synchronize-panes  # toggles on/off
Provisioning a Kubernetes Worker Node
- Install the OS dependencies:
{
  sudo apt-get update
  sudo apt-get -y install socat conntrack ipset
}
Note: The socat binary enables support for the kubectl port-forward command.
- Download and Install Worker Binaries
- Create the installation directories:
$ sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes
$ K8S_VERSION=v1.15.2
$ mkdir tar && cd $_
$ wget -q --show-progress --https-only --timestamping \
  "https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.15.0/crictl-v1.15.0-linux-amd64.tar.gz" \
  "https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17" \
  "https://github.com/opencontainers/runc/releases/download/v1.0.0-rc8/runc.amd64" \
  "https://github.com/containernetworking/plugins/releases/download/v0.8.1/cni-plugins-linux-amd64-v0.8.1.tgz" \
  "https://github.com/containerd/containerd/releases/download/v1.2.7/containerd-1.2.7.linux-amd64.tar.gz" \
  "https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/kubectl" \
  "https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/kube-proxy" \
  "https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/kubelet"
- Install the worker binaries:
The downloaded files:

cni-plugins-linux-amd64-v0.8.1.tgz
containerd-1.2.7.linux-amd64.tar.gz
crictl-v1.15.0-linux-amd64.tar.gz
kube-proxy
kubectl
kubelet
runc.amd64
runsc-50c283b9f56bb7200938d9e207355f05f79f0d17

{
  sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc
  sudo mv runc.amd64 runc
  chmod +x kubectl kube-proxy kubelet runc runsc
  sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
  sudo tar -xvf crictl-v1.15.0-linux-amd64.tar.gz -C /usr/local/bin/
  sudo tar -xvf cni-plugins-linux-amd64-v0.8.1.tgz -C /opt/cni/bin/
  sudo tar -xvf containerd-1.2.7.linux-amd64.tar.gz -C /
  cd $HOME
}
Configure CNI Networking
- Retrieve the Pod CIDR range for the current compute instance:
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
- Create the bridge network configuration file:
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF
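One easy mistake with these cat <<EOF blocks: because the EOF delimiter is unquoted, the shell expands ${POD_CIDR} at write time, so the variable must be set in the same shell session. A self-contained sketch of the check (temp file and CIDR value are illustrative):

```shell
# Write a fragment the same way as above, then confirm the variable expanded
POD_CIDR=10.200.0.0/24
tmp=$(mktemp)

cat > "$tmp" <<EOF
{"subnet": "${POD_CIDR}"}
EOF

# If the literal text ${POD_CIDR} survives, the variable was unset
# (or the heredoc delimiter was quoted, which suppresses expansion)
if grep -q '10.200.0.0/24' "$tmp"; then
  echo "expanded"
else
  echo "NOT expanded"
fi
rm -f "$tmp"
```

The same caveat applies to ${HOSTNAME} in the kubelet configuration files below.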
- Create the loopback network configuration file:
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF
Configure containerd
- Create the containerd configuration file:
sudo mkdir -p /etc/containerd/
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
    [plugins.cri.containerd.untrusted_workload_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runsc"
      runtime_root = "/run/containerd/runsc"
    [plugins.cri.containerd.gvisor]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runsc"
      runtime_root = "/run/containerd/runsc"
EOF
Note: Untrusted workloads will be run using the gVisor (runsc) runtime.
- Create the containerd.service systemd unit file:
cat << EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF
Configure the Kubelet
{
  sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
  sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
  sudo mv ca.pem /var/lib/kubernetes/
}
- Create the kubelet-config.yaml configuration file:
cat << EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
Note: The resolvConf configuration is used to avoid loops when using CoreDNS for service discovery on systems running systemd-resolved.
- Create the kubelet.service systemd unit file:
cat << EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Configure the Kubernetes Proxy
$ sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
- Create the kube-proxy-config.yaml configuration file:
cat << EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
- Create the kube-proxy.service systemd unit file:
cat << EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Start the Worker Services
{
  sudo systemctl daemon-reload
  sudo systemctl enable containerd kubelet kube-proxy
  sudo systemctl start containerd kubelet kube-proxy
}
Note: Remember to run the above commands on each worker node: worker-0, worker-1, and worker-2.
- Check the statuses of the worker services:
$ systemctl status containerd kubelet kube-proxy
Verification
NOTE: The compute instances created in this article will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
- List the registered Kubernetes nodes:
gcloud compute ssh controller-0 \
  --command "kubectl --kubeconfig admin.kubeconfig get nodes -o wide"
NAME       STATUS   ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
worker-0   Ready    <none>   3m3s   v1.15.2   10.240.0.20   <none>        Ubuntu 18.04.2 LTS   4.15.0-1037-gcp   containerd://1.2.7
worker-1   Ready    <none>   3m3s   v1.15.2   10.240.0.21   <none>        Ubuntu 18.04.2 LTS   4.15.0-1037-gcp   containerd://1.2.7
worker-2   Ready    <none>   3m3s   v1.15.2   10.240.0.22   <none>        Ubuntu 18.04.2 LTS   4.15.0-1037-gcp   containerd://1.2.7
Configuring kubectl for Remote Access
In this section, we will generate a kubeconfig file for the kubectl command line utility based on the admin user credentials.
NOTE: Run the commands in this section from the same directory used to generate the admin client certificates.
- The Admin Kubernetes Configuration File
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability, the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
WARNING: The following commands will overwrite your current/default kubeconfig (whatever the KUBECONFIG environment variable points to, if it is set; otherwise, your $HOME/.kube/config file).
- Generate a kubeconfig file suitable for authenticating as the admin user:
{
  KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
    --format 'value(address)')

  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem

  kubectl config set-context kubernetes-the-hard-way \
    --cluster=kubernetes-the-hard-way \
    --user=admin

  kubectl config use-context kubernetes-the-hard-way
}
- Verification
- Check the health of the remote Kubernetes cluster:
$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
Provisioning Pod Network Routes
Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point, pods cannot communicate with other pods running on different nodes due to missing network routes.
In this section, we will create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address.
Note: There are other ways to implement the Kubernetes networking model.
The Routing Table
In this section, we will gather the information required to create routes in the kubernetes-the-hard-way VPC network.
- Print the internal IP address and Pod CIDR range for each worker instance:
for instance in worker-0 worker-1 worker-2; do
  gcloud compute instances describe ${instance} \
    --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
done
10.240.0.20 10.200.0.0/24
10.240.0.21 10.200.1.0/24
10.240.0.22 10.200.2.0/24
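This mapping is exactly what the routes will encode: any pod IP inside a worker's /24 range must be forwarded to that worker's internal IP. A sketch of the lookup (the case patterns hard-code this guide's addresses; the pod IP is illustrative):

```shell
# Map a pod IP to the worker that owns its /24 (addresses from the table above)
pod_ip=10.200.1.17

case "$pod_ip" in
  10.200.0.*) echo "worker-0 (next hop 10.240.0.20)" ;;
  10.200.1.*) echo "worker-1 (next hop 10.240.0.21)" ;;
  10.200.2.*) echo "worker-2 (next hop 10.240.0.22)" ;;
  *)          echo "no route" ;;
esac
```

The VPC route table performs the same longest-prefix match in hardware/software on every cross-node packet.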
Routes
- List the default routes in the kubernetes-the-hard-way VPC network:
$ gcloud compute routes list --filter "network: kubernetes-the-hard-way"
NAME                            NETWORK                  DEST_RANGE     NEXT_HOP                  PRIORITY
default-route-294de28447c4e405  kubernetes-the-hard-way  0.0.0.0/0      default-internet-gateway  1000
default-route-638561d1ca3f4621  kubernetes-the-hard-way  10.240.0.0/24  kubernetes-the-hard-way   1000
- Create network routes for each worker instance:
for i in 0 1 2; do
  gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
    --network kubernetes-the-hard-way \
    --next-hop-address 10.240.0.2${i} \
    --destination-range 10.200.${i}.0/24
done
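The loop expands to three gcloud calls. Before running it against a real project, a dry run that only prints what each iteration would create can catch typos in the interpolated names and CIDRs:

```shell
# Dry run of the route-creation loop: print each route instead of calling gcloud
for i in 0 1 2; do
  echo "kubernetes-route-10-200-${i}-0-24: 10.200.${i}.0/24 via 10.240.0.2${i}"
done
```

The three printed lines should match the worker IP / Pod CIDR table gathered in the previous step.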
- List the routes in the kubernetes-the-hard-way VPC network:
$ gcloud compute routes list --filter "network: kubernetes-the-hard-way"
NAME                            NETWORK                  DEST_RANGE     NEXT_HOP                  PRIORITY
default-route-294de28447c4e405  kubernetes-the-hard-way  0.0.0.0/0      default-internet-gateway  1000
default-route-638561d1ca3f4621  kubernetes-the-hard-way  10.240.0.0/24  kubernetes-the-hard-way   1000
kubernetes-route-10-200-0-0-24  kubernetes-the-hard-way  10.200.0.0/24  10.240.0.20               1000
kubernetes-route-10-200-1-0-24  kubernetes-the-hard-way  10.200.1.0/24  10.240.0.21               1000
kubernetes-route-10-200-2-0-24  kubernetes-the-hard-way  10.200.2.0/24  10.240.0.22               1000
Deploying the DNS Cluster Add-on
In this section, we will deploy the DNS add-on which provides DNS based service discovery, backed by CoreDNS, to applications running inside the Kubernetes cluster.
The DNS Cluster Add-on
- Deploy the coredns cluster add-on:
$ kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created
- List the pods created by the kube-dns deployment:
$ kubectl -n kube-system get pods -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-7945fb857d-kpd67   1/1     Running   0          40s
coredns-7945fb857d-rpvwl   1/1     Running   0          40s
Verification
- Create a busybox deployment:
$ kubectl run busybox --image=busybox:1.31.0 --command -- sleep 3600
- List the pod created by the busybox deployment:
$ kubectl get pods -l run=busybox
NAME                       READY   STATUS    RESTARTS   AGE
busybox-57786959c7-xpfxv   1/1     Running   0          16s
- Retrieve the full name of the busybox pod:
$ POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
$ echo $POD_NAME
busybox-57786959c7-xpfxv
- Execute a DNS lookup for the kubernetes service inside the busybox pod:
$ kubectl exec -ti $POD_NAME -- nslookup kubernetes
Server:    10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
Smoke testing
In this section, we will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly and passes a build verification test (BVT).
Data Encryption
Goal: Verify the ability to encrypt secret data at rest.
- Create a generic secret:
$ kubectl create secret generic kubernetes-the-hard-way \
  --from-literal="mykey=mydata"
- Print a hexdump of the kubernetes-the-hard-way secret stored in etcd:
$ gcloud compute ssh controller-0 \
  --command "sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
00000000  2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74  |/registry/secret|
00000010  73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e  |s/default/kubern|
00000020  65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61  |etes-the-hard-wa|
00000030  79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63  |y.k8s:enc:aescbc|
00000040  3a 76 31 3a 6b 65 79 31 3a a5 d7 cd 20 d1 12 a3  |:v1:key1:... ...|
00000050  47 13 fe cb ea b1 9f f2 1f 63 7d 1f c4 03 cb 3f  |G........c}....?|
00000060  27 b4 3e 40 9a 32 0b 91 a5 84 bf ee 1c b5 9e ea  |'.>@.2..........|
00000070  4c a3 fb e1 6d 83 18 1e 50 42 0b 2d cb 90 c8 92  |L...m...PB.-....|
00000080  f5 29 81 2a 01 db 9d 22 1f 67 3b f4 fd a8 76 59  |.).*...".g;...vY|
00000090  e8 1e e0 a5 91 65 c3 5d 17 0a 32 fc 5e 73 4d 35  |.....e.]..2.^sM5|
000000a0  69 a1 78 d9 a9 83 6b 53 c3 5e aa c9 e0 c4 72 6a  |i.x...kS.^....rj|
000000b0  26 56 8b 5e fc 34 6b f0 12 1f 5b 0a 70 aa 07 de  |&V.^.4k...[.p...|
000000c0  0a de 76 4b ed be 09 61 2c 43 5e e3 35 0b 43 60  |..vK...a,C^.5.C`|
000000d0  e3 d5 34 0b 9e 7b 05 67 30 3c 49 a8 33 9c f2 da  |..4..{.g0<I.3...|
000000e0  4f c9 e9 b5 54 31 f8 14 75 0a                    |O...T1..u.|
000000ea

#~OR~

$ gcloud compute ssh controller-0 \
  --command "sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  /registry/secrets/default/kubernetes-the-hard-way -w fields | grep Value"
"Value" : "k8s:enc:aescbc:v1:key1:\xa5..."
The etcd key should be prefixed with k8s:enc:aescbc:v1:key1, which indicates the Advanced Encryption Standard (AES) - CBC (Cipher Block Chaining) provider was used to encrypt the data with the key1 encryption key. See wikipedia:Block cipher mode of operation for details.
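The provider and key name can be recovered mechanically from that prefix, since the value is colon-delimited. Below is a minimal local sketch; the sample value is illustrative, not pulled from a live etcd, and in a real check it would come from the etcdctl command above:

```shell
#!/bin/sh
# Sample etcd value (illustrative); a real check would capture it via etcdctl get.
VALUE='k8s:enc:aescbc:v1:key1:binary-ciphertext-follows'

# Fields 3 and 5 of the colon-separated prefix are the provider and the key name.
PROVIDER=$(printf '%s' "$VALUE" | cut -d: -f3)
KEY_NAME=$(printf '%s' "$VALUE" | cut -d: -f5)

echo "provider=$PROVIDER key=$KEY_NAME"
# → provider=aescbc key=key1
```

If the value were not encrypted at rest, it would begin with the raw /registry path instead of the k8s:enc: prefix.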
Deployments
In this section, we will verify the ability to create and manage Deployments.
- Create a deployment for an Nginx web server:
$ kubectl run nginx --image=nginx
- List the pod created by the nginx deployment:
$ kubectl get pods -l run=nginx
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7bb7cd8db5-7nrsw   1/1     Running   0          14s
Port Forwarding
In this section, we will verify the ability to access applications remotely using port forwarding.
- Retrieve the full name of the nginx pod:
POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")
- Forward port 8080 on your local machine to port 80 of the nginx pod:
$ kubectl port-forward $POD_NAME 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
- In a new terminal make an HTTP request using the forwarding address:
$ curl --head http://127.0.0.1:8080
HTTP/1.1 200 OK
Server: nginx/1.17.2
Date: Fri, 09 Aug 2019 22:44:03 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Jul 2019 11:45:37 GMT
Connection: keep-alive
ETag: "5d36f361-264"
Accept-Ranges: bytes
- Switch back to the previous terminal and stop the port forwarding to the nginx pod:
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
^C
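A quick way to verify the forwarded endpoint from a script is to inspect the HTTP status line. The sketch below parses a sample status line; in practice you would capture it live with something like STATUS_LINE=$(curl -s --head http://127.0.0.1:8080 | head -n1), which requires the port-forward to be running:

```shell
#!/bin/sh
# Sample status line as returned by the `curl --head` step above (illustrative).
STATUS_LINE='HTTP/1.1 200 OK'

# The status code is the second whitespace-separated field of the status line.
STATUS=$(printf '%s' "$STATUS_LINE" | awk '{print $2}')

if [ "$STATUS" = "200" ]; then
    echo "port-forward check passed (HTTP $STATUS)"
else
    echo "unexpected HTTP status: $STATUS" >&2
fi
```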
Logs
In this section, we will verify the ability to retrieve container logs.
- Print the nginx pod logs:
$ kubectl logs $POD_NAME
127.0.0.1 - - [09/Aug/2019:22:44:03 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.58.0" "-"
Exec
In this section, we will verify the ability to execute commands in a container.
- Print the nginx version by executing the nginx -v command in the nginx container:
$ kubectl exec -ti $POD_NAME -- nginx -v
nginx version: nginx/1.17.2
Services
In this section, we will verify the ability to expose applications using a Service.
- Expose the nginx deployment using a NodePort service:
$ kubectl expose deployment nginx --port 80 --type NodePort
$ kubectl get svc -l run=nginx
NAME    TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.32.0.112   <none>        80:31119/TCP   23s
NOTE: The LoadBalancer service type cannot be used because your cluster is not configured with cloud provider integration. Setting up cloud provider integration is out of scope for this article.
- Retrieve the node port assigned to the nginx service:
NODE_PORT=$(kubectl get svc nginx \
    --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
$ echo $NODE_PORT
31119
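By default, the kube-apiserver allocates NodePorts from the range 30000-32767 (configurable with its --service-node-port-range flag), which is why 31119 was assigned above. A small sanity-check sketch, using that sample value rather than a live lookup:

```shell
#!/bin/sh
# Default NodePort allocation range (kube-apiserver --service-node-port-range).
RANGE_MIN=30000
RANGE_MAX=32767

# Sample value from the output above; normally populated via the kubectl
# jsonpath query shown earlier.
NODE_PORT=31119

if [ "$NODE_PORT" -ge "$RANGE_MIN" ] && [ "$NODE_PORT" -le "$RANGE_MAX" ]; then
    echo "NodePort $NODE_PORT is within the default range"
else
    echo "NodePort $NODE_PORT is outside ${RANGE_MIN}-${RANGE_MAX}" >&2
fi
```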
- Create a firewall rule that allows remote access to the nginx node port:
$ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
    --allow=tcp:${NODE_PORT} \
    --network kubernetes-the-hard-way
$ gcloud compute firewall-rules list --filter="name ~ allow-nginx"
NAME                                         NETWORK                  DIRECTION  PRIORITY  ALLOW      DENY  DISABLED
kubernetes-the-hard-way-allow-nginx-service  kubernetes-the-hard-way  INGRESS    1000      tcp:31119        False
- Retrieve the external IP address of a worker instance:
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
    --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
- Make an HTTP request using the external IP address and the nginx node port:
$ curl -I http://${EXTERNAL_IP}:${NODE_PORT}
HTTP/1.1 200 OK
Server: nginx/1.17.2
Date: Fri, 09 Aug 2019 23:15:01 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Jul 2019 11:45:37 GMT
Connection: keep-alive
ETag: "5d36f361-264"
Accept-Ranges: bytes
Untrusted Workloads
In this section, we will verify the ability to run untrusted workloads using gVisor.
- Create the untrusted pod:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: untrusted
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
    - name: webserver
      image: gcr.io/hightowerlabs/helloworld:2.0.0
EOF
Verification
- Verify the untrusted pod is running under gVisor (runsc) by inspecting the assigned worker node:
$ kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
busybox-57786959c7-5wlf4   1/1     Running   1          61m   10.200.1.4   worker-1   <none>           <none>
nginx-7bb7cd8db5-7nrsw     1/1     Running   0          35m   10.200.0.3   worker-0   <none>           <none>
untrusted                  1/1     Running   0          67s   10.200.1.5   worker-1   <none>           <none>
- Get the node name where the untrusted pod is running:
$ INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')
$ echo $INSTANCE_NAME
worker-1
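The jsonpath expression '{.spec.nodeName}' walks the pod object and prints a single field. To illustrate what that query extracts, here is a local sketch against a hand-written, abridged pod JSON (illustrative only; against a live cluster you would use the kubectl command above, not sed):

```shell
#!/bin/sh
# Abridged pod JSON, similar in shape to `kubectl get pod untrusted -o json`
# output (illustrative sample, not real cluster output).
POD_JSON='{"metadata":{"name":"untrusted"},"spec":{"nodeName":"worker-1"}}'

# Rough equivalent of jsonpath '{.spec.nodeName}', done with sed for illustration.
INSTANCE_NAME=$(printf '%s' "$POD_JSON" | sed -n 's/.*"nodeName":"\([^"]*\)".*/\1/p')

echo "$INSTANCE_NAME"
# → worker-1
```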
- SSH into the worker node:
$ gcloud compute ssh ${INSTANCE_NAME}
- List the containers running under gVisor:
$ sudo runsc --root /run/containerd/runsc/k8s.io list
I0809 23:19:45.762228   18463 x:0] ***************************
I0809 23:19:45.762415   18463 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list]
I0809 23:19:45.762481   18463 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
I0809 23:19:45.762535   18463 x:0] PID: 18463
I0809 23:19:45.762602   18463 x:0] UID: 0, GID: 0
I0809 23:19:45.762652   18463 x:0] Configuration:
I0809 23:19:45.762695   18463 x:0] RootDir: /run/containerd/runsc/k8s.io
I0809 23:19:45.762791   18463 x:0] Platform: ptrace
I0809 23:19:45.762899   18463 x:0] FileAccess: exclusive, overlay: false
I0809 23:19:45.762996   18463 x:0] Network: sandbox, logging: false
I0809 23:19:45.763093   18463 x:0] Strace: false, max size: 1024, syscalls: []
I0809 23:19:45.763188   18463 x:0] ***************************
ID                                                                 PID    STATUS   BUNDLE                                                                                                                   CREATED                OWNER
db3d2549c91111eb68467e9aba28723743e34965f0cf6a9683206b2c93bba1bd   17834  running  /run/containerd/io.containerd.runtime.v1.linux/k8s.io/db3d2549c91111eb68467e9aba28723743e34965f0cf6a9683206b2c93bba1bd  0001-01-01T00:00:00Z
f8dcc505e3e0850da6e33d5fd9e35308951afe8a5ab840a31e19ef93f16dbe7f   17901  running  /run/containerd/io.containerd.runtime.v1.linux/k8s.io/f8dcc505e3e0850da6e33d5fd9e35308951afe8a5ab840a31e19ef93f16dbe7f  0001-01-01T00:00:00Z
I0809 23:19:45.766604   18463 x:0] Exiting with status: 0
- Get the ID of the untrusted pod:
$ POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
    pods --name untrusted -q)
$ echo $POD_ID
db3d2549c91111eb68467e9aba28723743e34965f0cf6a9683206b2c93bba1bd
- Get the ID of the webserver container running in the untrusted pod:
$ CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
    ps -p ${POD_ID} -q)
$ echo $CONTAINER_ID
f8dcc505e3e0850da6e33d5fd9e35308951afe8a5ab840a31e19ef93f16dbe7f
- Use the gVisor runsc command to display the processes running inside the webserver container:
$ sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID}
I0809 23:22:26.200992   18720 x:0] ***************************
I0809 23:22:26.201201   18720 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io ps f8dcc505e3e0850da6e33d5fd9e35308951afe8a5ab840a31e19ef93f16dbe7f]
I0809 23:22:26.201275   18720 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
I0809 23:22:26.201331   18720 x:0] PID: 18720
I0809 23:22:26.201388   18720 x:0] UID: 0, GID: 0
I0809 23:22:26.201449   18720 x:0] Configuration:
I0809 23:22:26.201494   18720 x:0] RootDir: /run/containerd/runsc/k8s.io
I0809 23:22:26.201590   18720 x:0] Platform: ptrace
I0809 23:22:26.201792   18720 x:0] FileAccess: exclusive, overlay: false
I0809 23:22:26.201900   18720 x:0] Network: sandbox, logging: false
I0809 23:22:26.201999   18720 x:0] Strace: false, max size: 1024, syscalls: []
I0809 23:22:26.202095   18720 x:0] ***************************
UID       PID       PPID      C         STIME     TIME      CMD
0         1         0         0         23:16     0s        app
I0809 23:22:26.203699   18720 x:0] Exiting with status: 0
Cleanup
In this section, we will delete all of the GCP resources created in this article.
Compute Instances
- Delete the controller and worker compute instances:
gcloud -q compute instances delete \
    controller-0 controller-1 controller-2 \
    worker-0 worker-1 worker-2
Networking
- Delete the external load balancer network resources:
{
  gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
      --region $(gcloud config get-value compute/region)

  gcloud -q compute target-pools delete kubernetes-target-pool

  gcloud -q compute http-health-checks delete kubernetes

  gcloud -q compute addresses delete kubernetes-the-hard-way
}
- Delete the kubernetes-the-hard-way firewall rules:
gcloud -q compute firewall-rules delete \
    kubernetes-the-hard-way-allow-nginx-service \
    kubernetes-the-hard-way-allow-internal \
    kubernetes-the-hard-way-allow-external \
    kubernetes-the-hard-way-allow-health-check
- Delete the kubernetes-the-hard-way network VPC:
{
  gcloud -q compute routes delete \
      kubernetes-route-10-200-0-0-24 \
      kubernetes-route-10-200-1-0-24 \
      kubernetes-route-10-200-2-0-24

  gcloud -q compute networks subnets delete kubernetes

  gcloud -q compute networks delete kubernetes-the-hard-way
}
See also
External links
- Kubernetes the Hard Way — on GitHub
- CFSSL — CloudFlare's PKI/TLS toolkit on GitHub