Kubernetes/GKE
Google Kubernetes Engine (GKE) is a managed, production-ready environment for deploying containerized applications in Kubernetes.
Contents
- 1 Deployments
- 2 Jobs and CronJobs
- 3 Cluster scaling
- 4 Configuring Pod Autoscaling and NodePools
- 5 Managing node pools
- 6 Deploying Kubernetes Engine via Helm Charts
- 7 Network security
- 8 Creating Services and Ingress Resources
- 9 Load balancing objects in GKE
- 10 Persistent Data and Storage
- 11 Configuring Persistent Storage for Kubernetes Engine
- 12 External links
Deployments
A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, .spec.template) is changed, for example, if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.
- Trigger a deployment rollout
- To update the version of nginx in the deployment, execute the following command:
$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record
$ kubectl rollout status deployment.v1.apps/nginx-deployment
$ kubectl rollout history deployment nginx-deployment
- Trigger a deployment rollback
To roll back an object's rollout, you can use the kubectl rollout undo command.
To roll back to the previous version of the nginx deployment, execute the following command:
$ kubectl rollout undo deployments nginx-deployment
- View the updated rollout history of the deployment.
$ kubectl rollout history deployment nginx-deployment
deployments "nginx-deployment"
REVISION  CHANGE-CAUSE
2         kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
3         <none>
- View the details of the latest deployment revision:
$ kubectl rollout history deployment/nginx-deployment --revision=3
The output should look like the example. Your output might not be an exact match but it will show that the current revision has rolled back to nginx:1.7.9.
deployments "nginx-deployment" with revision #3 Pod Template: Labels: app=nginx pod-template-hash=3123191453 Containers: nginx: Image: nginx:1.7.9 Port: 80/TCP Host Port: 0/TCP Environment: <none> Mounts: <none> Volumes: <none>
Perform a canary deployment
A canary deployment is a separate deployment used to test a new version of your application. A single Service targets both the canary and the normal deployments, directing a subset of users to the canary version to mitigate the risk of new releases. The manifest file nginx-canary.yaml that is provided for you deploys a single Pod running a newer version of nginx than your main deployment. In this task, you create a canary deployment using this new deployment file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        track: canary
        Version: 1.9.1
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        ports:
        - containerPort: 80
The manifest for the nginx Service you deployed in the previous task uses a label selector to target the Pods with the app: nginx label. Both the normal deployment and this new canary deployment have the app: nginx label. Inbound connections will be distributed by the service to both the normal and canary deployment Pods. The canary deployment has fewer replicas (Pods) than the normal deployment, and thus it is available to fewer users than the normal deployment.
- Create the canary deployment based on the configuration file.
$ kubectl apply -f nginx-canary.yaml
When the deployment is complete, verify that both the nginx and the nginx-canary deployments are present.
$ kubectl get deployments
Switch back to the browser tab that is connected to the external LoadBalancer service ip and refresh the page. You should continue to see the standard "Welcome to nginx" page.
Switch back to the Cloud Shell and scale down the primary deployment to 0 replicas.
$ kubectl scale --replicas=0 deployment nginx-deployment
Verify that the only running replica is now the Canary deployment:
$ kubectl get deployments
Switch back to the browser tab that is connected to the external LoadBalancer service ip and refresh the page. You should continue to see the standard "Welcome to nginx" page showing that the Service is automatically balancing traffic to the canary deployment.
Note: Session affinity. The Service configuration used in the lab does not ensure that all requests from a single client will always connect to the same Pod. Each request is treated separately and can connect to either the normal nginx deployment or to the nginx-canary deployment. This potential to switch between different versions may cause problems if there are significant changes in functionality in the canary release. To prevent this, you can set the sessionAffinity field to ClientIP in the specification of the Service if you need a client's first request to determine which Pod will be used for all subsequent connections.
For example:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 80
Jobs and CronJobs
- Simple example:
$ kubectl run pi --image=perl --restart=Never -- perl -Mbignum=bpi -wle 'print bpi(2000)'
- Parallel Job with fixed completion count
$ cat << EOF > my-app-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: my-app-job
spec:
  completions: 3
  parallelism: 2
  template:
    spec:
      [...]
EOF
- Failure and deadline settings can also be added to a Job spec:

spec:
  backoffLimit: 4              # mark the Job as failed after 4 retries
  activeDeadlineSeconds: 300   # terminate the Job if it runs longer than 300 seconds
- Example#1
- Create and run a Job
You will create a Job using a sample manifest called example-job.yaml that has been provided for you. This Job computes the value of Pi to 2,000 places and then prints the result.
apiVersion: batch/v1
kind: Job
metadata:
  # Unique key of the Job instance
  name: example-job
spec:
  template:
    metadata:
      name: example-job
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl"]
        args: ["-Mbignum=bpi", "-wle", "print bpi(2000)"]
      # Do not restart containers after they exit
      restartPolicy: Never
To create a Job from this file, execute the following command:
$ kubectl apply -f example-job.yaml

$ kubectl describe job
[...]
    Host Port:  <none>
    Command:
      perl
    Args:
      -Mbignum=bpi
      -wle
      print bpi(2000)
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  17s   job-controller  Created pod: example-job-gtf7w

$ kubectl get pods
NAME                READY   STATUS      RESTARTS   AGE
example-job-gtf7w   0/1     Completed   0          43s
- Clean up and delete the Job
When a Job completes, the Job stops creating Pods. The Job API object is not removed when it completes, which allows you to view its status. Pods created by the Job are not deleted, but they are terminated. Retention of the Pods allows you to view their logs and to interact with them.
To get a list of the Jobs in the cluster, execute the following command:
$ kubectl get jobs
NAME          DESIRED   SUCCESSFUL   AGE
example-job   1         1            2m
To retrieve the log file from the Pod that ran the Job, execute the following command. You must replace [POD-NAME] with the Pod name you recorded in the last task.
$ kubectl logs [POD-NAME]
3.141592653589793238...
The output will show that the job wrote the first two thousand digits of pi to the Pod log.
To delete the Job, execute the following command:
$ kubectl delete job example-job
If you try to query the logs again the command will fail as the Pod can no longer be found.
Define and deploy a CronJob manifest
You can create CronJobs to perform finite, time-related tasks that run once or repeatedly at a time that you specify.
In this section, we will create and run a CronJob, and then clean up and delete the Job.
- Create and run a CronJob
The CronJob manifest file example-cronjob.yaml has been provided for you. This CronJob deploys a new container every minute that prints the time, date and "Hello, World!".
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo "Hello, World!"
          restartPolicy: OnFailure
Note:
CronJobs use the required schedule field, which accepts a time in the Unix standard crontab format. All CronJob times are in UTC:
- The first value indicates the minute (between 0 and 59).
- The second value indicates the hour (between 0 and 23).
- The third value indicates the day of the month (between 1 and 31).
- The fourth value indicates the month (between 1 and 12).
- The fifth value indicates the day of the week (between 0 and 6).
The schedule field also accepts * and ? as wildcard values. Combining / with ranges specifies that the task should repeat at a regular interval. In the example, */1 * * * * indicates that the task should repeat every minute of every day of every month.
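For reference, here are a few additional schedule values and how they read (all times are UTC):

# "*/5 * * * *"   every 5 minutes
# "0 9 * * 1"     09:00 every Monday
# "0 0 1 * *"     midnight on the first day of every month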
To create a Job from this file, execute the following command:
$ kubectl apply -f example-cronjob.yaml

To check the status of this Job, execute the following command, where [job_name] is the name of your job:

$ kubectl describe job [job_name]
[...]
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Args:
      /bin/sh
      -c
      date; echo "Hello, World!"
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  35s   job-controller  Created pod: hello-1565824980-sgdnn
View the output of the Job by querying the logs for the Pod. Replace [POD-NAME] with the name of the Pod you recorded in the last step.
$ kubectl logs [POD-NAME]
Wed Aug 14 23:23:03 UTC 2019
Hello, World!
To view all job resources in your cluster, including all of the Pods created by the CronJob which have completed, execute the following command:
$ kubectl get jobs
NAME               COMPLETIONS   DURATION   AGE
hello-1565824980   1/1           2s         2m29s
hello-1565825040   1/1           2s         89s
hello-1565825100   1/1           2s         29s
Your job names might be different from the example output. By default, Kubernetes sets the Job history limits so that only the last three successful Jobs and the last failed Job are retained, so this list will only contain the most recent three or four Jobs.
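If you want to keep more (or fewer) finished Jobs, the history limits can be set explicitly in the CronJob spec. A minimal sketch, using the values that are the Kubernetes defaults:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 3   # keep the last 3 successful Jobs
  failedJobsHistoryLimit: 1       # keep the last failed Job
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello"]
          restartPolicy: OnFailure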
- Clean up and delete the Job
In order to stop the CronJob and clean up the Jobs associated with it you must delete the CronJob.
To delete all these jobs, execute the following command:
$ kubectl delete cronjob hello
To verify that the jobs were deleted, execute the following command:
$ kubectl get jobs
No resources found.
All the Jobs were removed.
Cluster scaling
Think of cluster scaling as a coarse-grain operation that should happen infrequently, and Pod scaling (with Deployments) as a fine-grain operation that should happen frequently.
- Pod conditions that prevent node deletion:
  - Pods that are not run by a controller (e.g., Pods that are not part of a Deployment, ReplicaSet, Job, etc.)
  - Pods that have local storage
  - Pods that are restricted by constraint rules
  - Pods that have the cluster-autoscaler.kubernetes.io/safe-to-evict annotation set to False (see the example after this list)
  - Pods that have a restrictive PodDisruptionBudget
- At the node level, nodes whose kubernetes.io/scale-down-disabled annotation is set to True
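For example, to tell the cluster autoscaler that a specific Pod must not be evicted (and therefore that its node should not be scaled down), set the annotation on the Pod. A minimal sketch; the Pod name and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: important-pod
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  containers:
  - name: app
    image: nginx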
- gcloud
- Create a cluster with autoscaling enabled:
$ gcloud container clusters create <cluster-name> \
    --num-nodes 30 \
    --enable-autoscaling \
    --min-nodes 15 \
    --max-nodes 50 \
    [--zone <compute-zone>]
- Add a node pool with autoscaling enabled:
$ gcloud container node-pools create <pool-name> \
    --cluster <cluster-name> \
    --enable-autoscaling \
    --min-nodes 15 \
    --max-nodes 50 \
    [--zone <compute-zone>]
- Enable autoscaling for an existing node pool:
$ gcloud container clusters update <cluster-name> \
    --enable-autoscaling \
    --min-nodes 1 \
    --max-nodes 10 \
    --zone <compute-zone> \
    --node-pool <pool-name>
- Disable autoscaling for an existing node pool:
$ gcloud container clusters update <cluster-name> \
    --no-enable-autoscaling \
    --node-pool <pool-name> \
    [--zone <compute-zone> --project <project-id>]
Configuring Pod Autoscaling and NodePools
Create a GKE cluster
In Cloud Shell, type the following command to create environment variables for the GCP zone and cluster name that will be used to create the cluster for this lab.
export my_zone=us-central1-a
export my_cluster=standard-cluster-1
- Configure tab completion for the kubectl command-line tool.
source <(kubectl completion bash)
- Create a VPC-native Kubernetes cluster:
$ gcloud container clusters create $my_cluster \
    --num-nodes 2 --enable-ip-alias --zone $my_zone
- Configure access to your cluster for kubectl:
$ gcloud container clusters get-credentials $my_cluster --zone $my_zone
- Deploy a sample web application to your GKE cluster
Deploy a sample application to your cluster using the web.yaml deployment file that has been created for you:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      run: web
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        name: web
        ports:
        - containerPort: 8080
          protocol: TCP
This manifest creates a deployment using a sample web application container image that listens on an HTTP server on port 8080.
- To create a deployment from this file, execute the following command:
$ kubectl create -f web.yaml --save-config 
- Create a service resource of type NodePort on port 8080 for the web deployment:
$ kubectl expose deployment web --target-port=8080 --type=NodePort 
- Verify that the service was created and that a node port was allocated:
$ kubectl get service web
NAME   TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
web    NodePort   10.12.6.154   <none>        8080:30972/TCP   5m4s
Your IP address and port number might be different from the example output.
Configure autoscaling on the cluster
In this section, we will configure the cluster to automatically scale the sample application that we deployed earlier.
- Configure autoscaling
- Get the list of deployments to determine whether your sample web application is still running:
$ kubectl get deployment
NAME   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
web    1         1         1            1           94s
- To configure your sample application for autoscaling (and to set the maximum number of replicas to four and the minimum to one, with a CPU utilization target of 1%), execute the following command:
$ kubectl autoscale deployment web --max 4 --min 1 --cpu-percent 1
When you use kubectl autoscale, you specify a maximum and minimum number of replicas for your application, as well as a CPU utilization target.
- Get the list of deployments to verify that there is still only one deployment of the web application:
$ kubectl get deployment
- Inspect the HorizontalPodAutoscaler object
The kubectl autoscale command you used in the previous task creates a HorizontalPodAutoscaler object that targets a specified resource, called the scale target, and scales it as needed. The autoscaler periodically adjusts the number of replicas of the scale target to match the average CPU utilization that you specify when creating the autoscaler.
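For reference, the object created by kubectl autoscale is roughly equivalent to the following manifest, a sketch using the autoscaling/v1 API with the same limits configured above:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    kind: Deployment
    name: web
  minReplicas: 1
  maxReplicas: 4
  targetCPUUtilizationPercentage: 1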
- To get the list of HorizontalPodAutoscaler resources, execute the following command:
$ kubectl get hpa
NAME   REFERENCE        TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
web    Deployment/web   1%/1%     1         4         1          50s
- To inspect the configuration of HorizontalPodAutoscaler in YAML form, execute the following command:
$ kubectl describe horizontalpodautoscaler web
Name:                         web
Namespace:                    default
Labels:                       <none>
Annotations:                  <none>
CreationTimestamp:            Thu, 15 Aug 2019 12:32:37 -0700
Reference:                    Deployment/web
Metrics:                      ( current / target )
  resource cpu on pods (as a percentage of request):  1% (1m) / 1%
Min replicas:                 1
Max replicas:                 4
Deployment pods:              1 current / 1 desired
Conditions:
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     True    ReadyForNewScale    recommended size matches current size
  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
Events:                       <none>
- Test the autoscale configuration
You need to create a heavy load on the web application to force it to scale out. You create a configuration file that defines a deployment of four containers that run an infinite loop of HTTP queries against the sample application web server.
You create the load on your web application by deploying the loadgen application using the loadgen.yaml file that has been provided for you.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loadgen
spec:
  replicas: 4
  selector:
    matchLabels:
      app: loadgen
  template:
    metadata:
      labels:
        app: loadgen
    spec:
      containers:
      - name: loadgen
        image: k8s.gcr.io/busybox
        args:
        - /bin/sh
        - -c
        - while true; do wget -q -O- http://web:8080; done
- Get the list of deployments to verify that the load generator is running:
$ kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
loadgen   4         4         4            4           11s
web       1         1         1            1           9m9s
- Inspect HorizontalPodAutoscaler:
$ kubectl get hpa
NAME   REFERENCE        TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
web    Deployment/web   20%/1%    1         4         1          7m58s
Once the loadgen Pods start to generate traffic, the web deployment CPU utilization begins to increase. In the example output, the target deployment is now at 20% CPU utilization, compared to the 1% CPU threshold.
- After a few minutes, inspect the HorizontalPodAutoscaler again:
$ kubectl get hpa
NAME   REFERENCE        TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
web    Deployment/web   68%/1%    1         4         4          9m39s

$ kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
loadgen   4         4         4            4           2m44s
web       4         4         4            3           11m
- To stop the load on the web application, scale the loadgen deployment to zero replicas.
$ kubectl scale deployment loadgen --replicas 0
- Get the list of deployments to verify that loadgen has scaled down.
$ kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
loadgen   0         0         0            0           3m25s
web       4         4         4            3           12m
The loadgen deployment should have zero replicas.
Wait 2 to 3 minutes, and then get the list of deployments again to verify that the web application has scaled down to the minimum value of 1 replica that you configured when you deployed the autoscaler.
$ kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
loadgen   0         0         0            0           4m
web       1         1         1            1           15m
The web deployment should now be back down to a single replica.
Managing node pools
In this section, we will create a new pool of nodes using preemptible instances, and then will constrain the web deployment to run only on the preemptible nodes.
- Add a node pool
- To deploy a new node pool with two preemptible VM instances, execute the following command:
$ gcloud container node-pools create "temp-pool-1" \ --cluster=$my_cluster --zone=$my_zone \ --num-nodes "2" --node-labels=temp=true --preemptible
If you receive an error that no preemptible instances are available, you can remove the --preemptible option to proceed with the lab.
- Get the list of nodes to verify that the new nodes are ready:
$ kubectl get nodes
NAME                                                STATUS   ROLES    AGE   VERSION
gke-standard-cluster-1-default-pool-61fba731-01mc   Ready    <none>   21m   v1.12.8-gke.10
gke-standard-cluster-1-default-pool-61fba731-bvfx   Ready    <none>   21m   v1.12.8-gke.10
gke-standard-cluster-1-temp-pool-1-e8966c96-nccc    Ready    <none>   46s   v1.12.8-gke.10
gke-standard-cluster-1-temp-pool-1-e8966c96-pk21    Ready    <none>   43s   v1.12.8-gke.10
You should now have 4 nodes. (Your names will be different from the example output.)
All the nodes that you added have the temp=true label because you set that label when you created the node-pool. This label makes it easier to locate and configure these nodes.
- To list only the nodes with the temp=true label, execute the following command:
$ kubectl get nodes -l temp=true
NAME                                               STATUS   ROLES    AGE    VERSION
gke-standard-cluster-1-temp-pool-1-e8966c96-nccc   Ready    <none>   2m1s   v1.12.8-gke.10
gke-standard-cluster-1-temp-pool-1-e8966c96-pk21   Ready    <none>   118s   v1.12.8-gke.10
- Control scheduling with taints and tolerations
To prevent the scheduler from running a Pod on the temporary nodes, you add a taint to each of the nodes in the temp pool. Taints are implemented as a key-value pair with an effect (such as NoExecute) that determines whether Pods can run on a certain node. Only Pods that are configured to tolerate the key-value of the taint are scheduled to run on these nodes.
To add a taint to each of the newly created nodes, execute the following command. You can use the temp=true label to apply this change across all the new nodes simultaneously.
$ kubectl taint node -l temp=true nodetype=preemptible:NoExecute
node/gke-standard-cluster-1-temp-pool-1-e8966c96-nccc tainted
node/gke-standard-cluster-1-temp-pool-1-e8966c96-pk21 tainted

$ kubectl describe nodes | grep ^Taints
Taints:             <none>
Taints:             <none>
Taints:             nodetype=preemptible:NoExecute
Taints:             nodetype=preemptible:NoExecute
To allow application Pods to execute on these tainted nodes, you must add a tolerations key to the deployment configuration.
Edit the web.yaml file to add the following key in the template's spec section:
tolerations:
- key: "nodetype"
  operator: Equal
  value: "preemptible"
The spec section of the file should look like the following:
...
    spec:
      tolerations:
      - key: "nodetype"
        operator: Equal
        value: "preemptible"
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        name: web
        ports:
        - containerPort: 8080
          protocol: TCP
To force the web deployment to use the new node pool, add a nodeSelector key in the template's spec section. This is parallel to the tolerations key you just added.
nodeSelector:
  temp: "true"
Note: GKE adds a custom label to each node called cloud.google.com/gke-nodepool that contains the name of the node-pool that the node belongs to. This key can also be used as part of a nodeSelector to ensure Pods are only deployed to suitable nodes.
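For example, a nodeSelector that targets the pool created earlier by its name, rather than by the temp=true label, could look like this:

nodeSelector:
  cloud.google.com/gke-nodepool: temp-pool-1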
The full web.yaml deployment should now look as follows.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      run: web
  template:
    metadata:
      labels:
        run: web
    spec:
      tolerations:
      - key: "nodetype"
        operator: Equal
        value: "preemptible"
      nodeSelector:
        temp: "true"
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        name: web
        ports:
        - containerPort: 8080
          protocol: TCP
To apply this change, execute the following command:
$ kubectl apply -f web.yaml
If you have problems editing this file successfully you can use the pre-prepared sample file called web-tolerations.yaml instead.
- Get the list of Pods:
$ kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
web-7cb566bccd-pkfst   1/1     Running   0          1m
To confirm the change, inspect the running web Pod(s) using the following command
$ kubectl describe pods -l run=web
A Tolerations section with nodetype=preemptible in the list should appear near the bottom of the (truncated) output.
...
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
                 nodetype=preemptible
Events:
...
The output confirms that the Pods will tolerate the taint value on the new preemptible nodes, and thus that they can be scheduled to execute on those nodes.
To force the web application to scale out again, scale the loadgen deployment back to four replicas.
$ kubectl scale deployment loadgen --replicas 4
You could scale just the web application directly but using the loadgen app will allow you to see how the different taint, toleration and nodeSelector settings that apply to the web and loadgen applications affect which nodes they are scheduled on.
Get the list of Pods using the wide output format to show the nodes running the Pods:
$ kubectl get pods -o wide
This shows that the loadgen app is running only on default-pool nodes, while the web app is running only on the preemptible nodes in temp-pool-1.
The taint prevents Pods without a matching toleration from running on the preemptible nodes, so the loadgen application only runs on the default pool. The toleration allows the web application to run on the preemptible nodes, and the nodeSelector forces the web application Pods to run on those nodes.
NAME         READY   STATUS    [...]   NODE
loadgen-x0   1/1     Running   [...]   gke-xx-default-pool-y0
loadgen-x1   1/1     Running   [...]   gke-xx-default-pool-y2
loadgen-x3   1/1     Running   [...]   gke-xx-default-pool-y3
loadgen-x4   1/1     Running   [...]   gke-xx-default-pool-y4
web-x1       1/1     Running   [...]   gke-xx-temp-pool-1-z1
web-x2       1/1     Running   [...]   gke-xx-temp-pool-1-z2
web-x3       1/1     Running   [...]   gke-xx-temp-pool-1-z3
web-x4       1/1     Running   [...]   gke-xx-temp-pool-1-z4
Deploying Kubernetes Engine via Helm Charts
Ensure your user account has the cluster-admin role in your cluster.
$ kubectl create clusterrolebinding user-admin-binding \
    --clusterrole=cluster-admin \
    --user=$(gcloud config get-value account)
- Create a Kubernetes service account that Tiller (the server side of Helm) can use for deploying charts:
$ kubectl create serviceaccount tiller --namespace kube-system
- Grant the Tiller service account the cluster-admin role in your cluster:
$ kubectl create clusterrolebinding tiller-admin-binding \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller
- Execute the following commands to initialize Helm using the service account:
$ helm init --service-account=tiller

$ kubectl -n kube-system get pods | grep ^tiller
tiller-deploy-8548d8bd7c-l548r   1/1     Running   0          18s

$ helm repo update

$ helm version
Client: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}
Execute the following command to deploy a set of resources to create a Redis service on the active context cluster:
$ helm install stable/redis
A Helm chart is a package of resource configuration files, along with configurable parameters. This single command deployed a collection of resources.
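Chart parameters can be overridden at install time. The keys below are illustrative; run helm inspect values stable/redis to see the chart's actual options:

# Show the chart's configurable values
$ helm inspect values stable/redis

# Install with a named release and an inline override (the key is illustrative)
$ helm install stable/redis --name my-redis --set cluster.slaveCount=3

# Or supply overrides from a values file
$ helm install stable/redis --name my-redis -f my-values.yaml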
A Kubernetes Service defines a set of Pods and a stable endpoint by which network traffic can access them. In Cloud Shell, execute the following command to view Services that were deployed through the Helm chart:
$ kubectl get services
NAME                               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kubernetes                         ClusterIP   10.12.0.1      <none>        443/TCP    3m24s
opining-wolverine-redis-headless   ClusterIP   None           <none>        6379/TCP   11s
opining-wolverine-redis-master     ClusterIP   10.12.5.246    <none>        6379/TCP   11s
opining-wolverine-redis-slave      ClusterIP   10.12.14.196   <none>        6379/TCP   11s
A Kubernetes StatefulSet manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods. In Cloud Shell, execute the following commands to view a StatefulSet that was deployed through the Helm chart:
$ kubectl get statefulsets
NAME                             DESIRED   CURRENT   AGE
opining-wolverine-redis-master   1         1         59s
opining-wolverine-redis-slave    2         2         59s
A Kubernetes ConfigMap lets you store and manage configuration artifacts, so that they are decoupled from container-image content. In Cloud Shell, execute the following command to view ConfigMaps that were deployed through the Helm chart:
$ kubectl get configmaps
NAME                             DATA   AGE
opining-wolverine-redis          3      95s
opining-wolverine-redis-health   6      95s
A Kubernetes Secret, like a ConfigMap, lets you store and manage configuration artifacts, but it is specially intended for sensitive information such as passwords and authorization keys. In Cloud Shell, execute the following command to view the Secrets that were deployed through the Helm chart:
$ kubectl get secrets
NAME                      TYPE     DATA   AGE
opining-wolverine-redis   Opaque   1      2m5s
You can inspect the Helm chart directly using the following command:
$ helm inspect stable/redis
If you want to see the templates that the Helm chart deploys you can use the following command:
$ helm install stable/redis --dry-run --debug
- Test Redis functionality
You store and retrieve values in the new Redis deployment running in your Kubernetes Engine cluster.
Execute the following command to store the service ip-address for the Redis cluster in an environment variable:
$ export REDIS_IP=$(kubectl get services -l app=redis -o json | jq -r '.items[].spec | select(.selector.role=="master")' | jq -r '.clusterIP')
Retrieve the Redis password and store it in an environment variable:
$ export REDIS_PW=$(kubectl get secret -l app=redis -o jsonpath="{.items[0].data.redis-password}" | base64 --decode)
- Display the Redis cluster address and password:
$ echo Redis Cluster Address : $REDIS_IP
$ echo Redis auth password : $REDIS_PW
- Open an interactive shell to a temporary Pod, passing in the cluster address and password as environment variables:
$ kubectl run redis-test --rm --tty -i --restart='Never' \
    --env REDIS_PW=$REDIS_PW \
    --env REDIS_IP=$REDIS_IP \
    --image docker.io/bitnami/redis:4.0.12 -- bash
- Connect to the Redis cluster:
# redis-cli -h $REDIS_IP -a $REDIS_PW
- Set a key value:
set mykey this_amazing_value
This will display OK if successful.
- Retrieve the key value:
get mykey
This will return the value you stored indicating that the Redis cluster can successfully store and retrieve data.
Network security
Network policy
A Pod-level firewall restricting access to other Pods and Services. (Disabled by default in GKE.)
Must be enabled:
- Requires at least 2 nodes of n1-standard-1 or higher (recommended minimum of 3 nodes)
- Requires nodes to be recreated
- Enable network policy for a new cluster:
$ gcloud container clusters create <name> \
    --enable-network-policy
- Enable a network policy for an existing cluster:
$ gcloud container clusters update <name> \
    --update-addons=NetworkPolicy=ENABLED
$ gcloud container clusters update <name> \
    --enable-network-policy
- Disable network policy for an existing cluster:

$ gcloud container clusters update <name> \
    --no-enable-network-policy
- Writing a network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: demo-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: demo-app
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
- Network policy defaults
- Pros:
- Limits "attack surface" of Pods in your cluster.
- Cons:
- A lot of work to manage (use Istio instead)
# Default: deny all ingress traffic
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress

# Default: deny all egress traffic
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Egress

# Default: deny all ingress and egress traffic
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

# Allow all ingress traffic
metadata:
  name: allow-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - {}

# Allow all egress traffic
metadata:
  name: allow-all
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - {}
Setup a private GKE cluster
In the Cloud Shell, enter the following command to review the details of your new cluster:
$ gcloud container clusters describe private-cluster --zone us-central1-a
- The following values appear only under the private cluster:
  - privateEndpoint: an internal IP address. Nodes use this internal IP address to communicate with the cluster master.
  - publicEndpoint: an external IP address. External services and administrators can use the external IP address to communicate with the cluster master.
- You have several options to lock down your cluster to varying degrees:
- The whole cluster can have external access.
- The whole cluster can be private.
- The nodes can be private while the cluster master is public, and you can limit which external networks are authorized to access the cluster master.
Without public IP addresses, code running on the nodes cannot access the public Internet unless you configure a NAT gateway such as Cloud NAT.
You might use private clusters to provide services such as internal APIs that are meant only to be accessed by resources inside your network. For example, the resources might be private tools that only your company uses. Or they might be backend services accessed by your frontend services, and perhaps only those frontend services are accessed directly by external customers or users. In such cases, private clusters are a good way to reduce the surface area of attack for your application.
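For context, a private cluster with private nodes and a restricted master endpoint can be created with flags along these lines. This is a sketch; the CIDR ranges and the authorized network below are placeholders, not values from the lab:

$ gcloud container clusters create private-cluster \
    --zone us-central1-a \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/24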
Restrict incoming traffic to Pods
First, we will create a GKE cluster to use for the demos below.
- Create a GKE cluster
- In Cloud Shell, type the following command to set the environment variable for the zone and cluster name:
export my_zone=us-central1-a export my_cluster=standard-cluster-1
- Configure kubectl tab completion in Cloud Shell:
source <(kubectl completion bash)
- Create a Kubernetes cluster (note that this command adds the additional flag --enable-network-policy, which allows this cluster to use cluster network policies):
$ gcloud container clusters create $my_cluster \
    --num-nodes 2 \
    --enable-ip-alias \
    --zone $my_zone \
    --enable-network-policy
- Configure access to your cluster for the kubectl command-line tool:
$ gcloud container clusters get-credentials $my_cluster --zone $my_zone
Run a simple web server application with the label app=hello, and expose the web application internally in the cluster:
$ kubectl run hello-web --labels app=hello \
    --image=gcr.io/google-samples/hello-app:1.0 --port 8080 --expose
- Restrict incoming traffic to Pods
- The following NetworkPolicy manifest file defines an ingress policy that allows access to Pods labeled app: hello from Pods labeled app: foo:
$ cat << EOF > hello-allow-from-foo.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: hello-allow-from-foo
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: hello
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: foo
EOF

$ kubectl apply -f hello-allow-from-foo.yaml

$ kubectl get networkpolicy
NAME                   POD-SELECTOR   AGE
hello-allow-from-foo   app=hello      7s
- Validate the ingress policy
- Run a temporary Pod called test-1 with the label app=foo and get a shell in the Pod:
$ kubectl run test-1 --labels app=foo --image=alpine --restart=Never --rm --stdin --tty
The kubectl switches used here in conjunction with the run command are important to note:
- --stdin (alternatively -i): creates an interactive session attached to STDIN on the container.
- --tty (alternatively -t): allocates a TTY for each container in the Pod.
- --rm: instructs Kubernetes to treat this as a temporary Pod that will be removed as soon as it completes its startup task. As this is an interactive session, it will be removed as soon as the user exits the session.
- --labels (alternatively -l): adds a set of labels to the Pod.
- --restart: defines the restart policy for the Pod.
- Make a request to the hello-web:8080 endpoint to verify that the incoming traffic is allowed:
/ # wget -qO- --timeout=2 http://hello-web:8080
Hello, world!
Version: 1.0.0
Hostname: hello-web-75f66f69d-qgzjb
/ #
- Now, run a different Pod using the same Pod name but using a label, app=other, that does not match the podSelector in the active network policy. This Pod should not have the ability to access the hello-web application:
$ kubectl run test-1 --labels app=other --image=alpine --restart=Never --rm --stdin --tty
- Make a request to the hello-web:8080 endpoint to verify that the incoming traffic is not allowed:
/ # wget -qO- --timeout=2 http://hello-web:8080
wget: download timed out
/ #
The request times out.
Restrict outgoing traffic from the Pods
You can restrict outgoing (egress) traffic as you do incoming traffic. However, in order to query internal hostnames (such as hello-web) or external hostnames (such as www.example.com), you must allow DNS resolution in your egress network policies. DNS traffic occurs on port 53, using TCP and UDP protocols.
The following NetworkPolicy manifest file defines a policy that permits Pods with the label app: foo to communicate with Pods labeled app: hello on any port number, and allows the Pods labeled app: foo to communicate with any computer on UDP port 53, which is used for DNS resolution. Without the DNS port open, you will not be able to resolve the hostnames:
$ cat << EOF > foo-allow-to-hello.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: foo-allow-to-hello
spec:
  policyTypes:
  - Egress
  podSelector:
    matchLabels:
      app: foo
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: hello
  - to:
    ports:
    - protocol: UDP
      port: 53
EOF

$ kubectl apply -f foo-allow-to-hello.yaml

$ kubectl get networkpolicy
NAME                   POD-SELECTOR   AGE
foo-allow-to-hello     app=foo        7s
hello-allow-from-foo   app=hello      5m
- Validate the egress policy
- Deploy a new web application called hello-web-2 and expose it internally in the cluster:
$ kubectl run hello-web-2 --labels app=hello-2 \
    --image=gcr.io/google-samples/hello-app:1.0 --port 8080 --expose
- Run a temporary Pod with the app=foo label and get a shell prompt inside the container:
$ kubectl run test-3 --labels app=foo --image=alpine --restart=Never --rm --stdin --tty
- Verify that the Pod can establish connections to hello-web:8080:
/ # wget -qO- --timeout=2 http://hello-web:8080
Hello, world!
Version: 1.0.0
Hostname: hello-web-75f66f69d-qgzjb
/ #
- Verify that the Pod cannot establish connections to hello-web-2:8080:

/ # wget -qO- --timeout=2 http://hello-web-2:8080
This fails because none of the Network policies you have defined allow traffic to Pods labelled app: hello-2.
- Verify that the Pod cannot establish connections to external websites, such as www.example.com:
wget -qO- --timeout=2 http://www.example.com
This fails because the network policies do not allow external http traffic (tcp port 80).
/ # ping 8.8.8.8 -c 3
PING 8.8.8.8 (8.8.8.8): 56 data bytes

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
Creating Services and Ingress Resources
- Create Pods and services to test DNS resolution
- Create a service called dns-demo with two sample application Pods called dns-demo-1 and dns-demo-2:
$ cat << EOF > dns-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-demo
spec:
  selector:
    name: dns-demo
  clusterIP: None
  ports:
  - name: dns-demo
    port: 1234
    targetPort: 1234
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo-1
  labels:
    name: dns-demo
spec:
  hostname: dns-demo-1
  subdomain: dns-demo
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo-2
  labels:
    name: dns-demo
spec:
  hostname: dns-demo-2
  subdomain: dns-demo
  containers:
  - name: nginx
    image: nginx
EOF

$ kubectl apply -f dns-demo.yaml

$ kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
dns-demo-1   1/1     Running   0          19s
dns-demo-2   1/1     Running   0          19s
- Access Pods and services by FQDN
- Test name resolution for Pods and services from the Cloud Shell and from Pods running inside your cluster (note: you can find the IP address for dns-demo-2 by displaying the details of the Pod):
$ kubectl describe pods dns-demo-2
You will see the IP address in the first section of the output, below the status and before the details of the individual containers:
$ kubectl describe pods dns-demo-2
Name:               dns-demo-2
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               gke-standard-cluster-1-default-pool-a6c9108e-05m2/10.128.0.5
Start Time:         Mon, 19 Aug 2019 16:58:11 -0700
Labels:             name=dns-demo
Annotations:        [...]
Status:             Running
IP:                 10.8.2.5
Containers:
  nginx:
In the example above, the Pod IP address was 10.8.2.5. You can query just the Pod IP address on its own using the following syntax:
$ echo $(kubectl get pod dns-demo-2 --template={{.status.podIP}})
10.8.2.5
The format of the FQDN of a Pod is hostname.subdomain.namespace.svc.cluster.local. The last three pieces (svc.cluster.local) stay constant in any cluster; however, the first three pieces are specific to the Pod that you are trying to access. In this case, the hostname is dns-demo-2, the subdomain is dns-demo, and the namespace is default, because we did not specify a non-default namespace. The FQDN of the dns-demo-2 Pod is therefore dns-demo-2.dns-demo.default.svc.cluster.local.
- Ping dns-demo-2 from your local machine (or from the Cloud Shell):
$ ping dns-demo-2.dns-demo.default.svc.cluster.local
ping: dns-demo-2.dns-demo.default.svc.cluster.local: Name or service not known
The ping fails because we are not inside the cluster itself.
To get inside the cluster, open an interactive session to Bash running from dns-demo-1.
$ kubectl exec -it dns-demo-1 /bin/bash
Now that we are inside a container in the cluster, our commands run from that context. However, we do not have a tool to ping in this container, so the ping command will not work.
- Update apt-get and install a ping tool (from within the container):
root@dns-demo-1:/# apt-get update && apt-get install -y iputils-ping
- Ping dns-demo-2:
root@dns-demo-1:/# ping dns-demo-2.dns-demo.default.svc.cluster.local -c 3
PING dns-demo-2.dns-demo.default.svc.cluster.local (10.8.2.5) 56(84) bytes of data.
64 bytes from dns-demo-2.dns-demo.default.svc.cluster.local (10.8.2.5): icmp_seq=1 ttl=62 time=1.46 ms
64 bytes from dns-demo-2.dns-demo.default.svc.cluster.local (10.8.2.5): icmp_seq=2 ttl=62 time=0.397 ms
64 bytes from dns-demo-2.dns-demo.default.svc.cluster.local (10.8.2.5): icmp_seq=3 ttl=62 time=0.387 ms

--- dns-demo-2.dns-demo.default.svc.cluster.local ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 16ms
rtt min/avg/max/mdev = 0.387/0.748/1.461/0.504 ms
This ping should succeed and report that the target has the IP address you found earlier for the dns-demo-2 Pod.
- Ping the dns-demo service's FQDN, instead of a specific Pod inside the service:
ping dns-demo.default.svc.cluster.local
This ping should also succeed, but it will return a response from the FQDN of one of the two dns-demo Pods. This Pod might be either dns-demo-1 or dns-demo-2.
When you deploy applications, your application code runs inside a container in the cluster, and thus your code can access other services by using the FQDNs of those services. This approach is better than using IP addresses or even Pod names because those are more likely to change.
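For example, rather than hard-coding a Pod IP, an application container might receive a service name through an environment variable. A sketch of a container spec fragment, using the dns-demo service created above:

    env:
    - name: BACKEND_HOST
      value: "dns-demo.default.svc.cluster.local"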
Deploy a sample workload and a ClusterIP service
In this section, we will create a deployment for a set of Pods within the cluster and then expose them using a ClusterIP service.
- Deploy a sample web application to your GKE cluster
- Deploy a sample web application container image that listens on an HTTP server on port 8080:
$ cat << EOF > hello-v1.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      run: hello-v1
  template:
    metadata:
      labels:
        run: hello-v1
        name: hello-v1
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        name: hello-v1
        ports:
        - containerPort: 8080
          protocol: TCP
EOF

$ kubectl create -f hello-v1.yaml

$ kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-v1   3         3         3            3           10s
- Define service types in the manifest
- Deploy a Service using a ClusterIP:
$ cat << EOF > hello-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  type: ClusterIP
  selector:
    name: hello-v1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
EOF

$ kubectl apply -f ./hello-svc.yaml
This manifest defines a ClusterIP service and applies it to Pods that correspond to the selector. In this case, the manifest is applied to the hello-v1 Pods that we deployed. This service will automatically be applied to any other Pods with the name: hello-v1 label.
- Verify that the Service was created and that a Cluster-IP was allocated:
$ kubectl get service hello-svc
NAME        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
hello-svc   ClusterIP   10.12.1.159   <none>        80/TCP    29s
No external IP is allocated for this service. Because the Kubernetes Cluster IP addresses are not externally accessible by default, creating this Service does not make your application accessible outside of the cluster.
- Test your application
- Attempt to open an HTTP session to the new Service using the following command:
$ curl hello-svc.default.svc.cluster.local
curl: (6) Could not resolve host: hello-svc.default.svc.cluster.local
The connection should fail because that service is not exposed outside of the cluster.
Now, test the Service from inside the cluster using the interactive shell you have running on the dns-demo-1 Pod. Return to your first Cloud Shell window, which is currently redirecting the STDIN and STDOUT of the dns-demo-1 Pod.
- Install curl so you can make calls to web services from the command line:
$ apt-get install -y curl
- Use the following command to test the HTTP connection between the Pods:
$ curl hello-svc.default.svc.cluster.local
Hello, world!
Version: 1.0.0
Hostname: hello-v1-5574c4bff6-72wzc
This connection should succeed and provide a response similar to the output above. Your hostname might be different from the example output.
- Convert the service to use NodePort
In this section, we will convert our existing ClusterIP service to a NodePort service and then retest access to the service from inside and outside the cluster.
- Apply a modified version of our previous hello-svc Service manifest:
$ cat << EOF > hello-nodeport-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  type: NodePort
  selector:
    name: hello-v1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30100
EOF

$ kubectl apply -f ./hello-nodeport-svc.yaml
This manifest redefines hello-svc as a NodePort service and assigns port 30100 on each node of the cluster for that service.
- Verify that the service type has changed to NodePort:
$ kubectl get service hello-svc
NAME        TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
hello-svc   NodePort   10.12.1.159   <none>        80:30100/TCP   5m30s
Note that there is still no external IP allocated for this service.
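The NodePort is, however, reachable on any node's external IP, provided a firewall rule allows the port. A sketch; the rule name is arbitrary:

# Allow inbound traffic to the node port
$ gcloud compute firewall-rules create allow-nodeport-30100 --allow tcp:30100

# Find a node's external IP, then curl it from outside the cluster
$ kubectl get nodes -o wide
$ curl http://[NODE_EXTERNAL_IP]:30100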
- Test the application
- Attempt to open an HTTP session to the new service:
$ curl hello-svc.default.svc.cluster.local
curl: (6) Could not resolve host: hello-svc.default.svc.cluster.local
The connection should fail because that service is not exposed outside of the cluster.
Return to your first Cloud Shell window, which is currently redirecting the STDIN and STDOUT of the dns-demo-1 Pod.
- Test the HTTP connection between the Pods:
$ curl hello-svc.default.svc.cluster.local
Hello, world!
Version: 1.0.0
Hostname: hello-v1-5574c4bff6-72wzc
- Deploy a new set of Pods and a LoadBalancer service
We will now deploy a new set of Pods running a different version of the application so that we can easily differentiate the two services. We will then expose the new Pods as a LoadBalancer Service and access the service from outside the cluster.
- Create a new deployment that runs version 2 of the sample "hello" application on port 8080:
$ cat << EOF > hello-v2.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-v2
spec:
  replicas: 3
  selector:
    matchLabels:
      run: hello-v2
  template:
    metadata:
      labels:
        run: hello-v2
        name: hello-v2
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:2.0
        name: hello-v2
        ports:
        - containerPort: 8080
          protocol: TCP
EOF

$ kubectl create -f hello-v2.yaml

$ kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-v1   3         3         3            3           8m22s
hello-v2   3         3         3            3           6s
- Define service types in the manifest
- Deploy a LoadBalancer Service:
apiVersion: v1
kind: Service
metadata:
  name: hello-lb-svc
spec:
  type: LoadBalancer
  selector:
    name: hello-v2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
This manifest defines a LoadBalancer Service, which deploys a GCP Network Load Balancer to provide external access to the service. This service is only applied to the Pods that match the name: hello-v2 selector.
$ kubectl apply -f ./hello-lb-svc.yaml

$ kubectl get services
NAME           TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
dns-demo       ClusterIP      None          <none>           1234/TCP       18m
hello-lb-svc   LoadBalancer   10.12.3.30    35.193.235.140   80:30980/TCP   95s
hello-svc      NodePort       10.12.1.159   <none>           80:30100/TCP   10m
kubernetes     ClusterIP      10.12.0.1     <none>           443/TCP        21m

$ export LB_EXTERNAL_IP=35.193.235.140
Notice that the new LoadBalancer Service has an external IP. This is implemented using a GCP load balancer and will take a few minutes to create. This external IP address makes the service accessible from outside the cluster. Take note of this external IP address for use below.
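Alternatively, once the external IP has been assigned you can capture it with jsonpath instead of copying it by hand:

$ export LB_EXTERNAL_IP=$(kubectl get service hello-lb-svc \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo ${LB_EXTERNAL_IP}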
- Test your application
- Attempt to open an HTTP session to the new service:
$ curl hello-lb-svc.default.svc.cluster.local
curl: (6) Could not resolve host: hello-lb-svc.default.svc.cluster.local
The connection should fail because that service name is not exposed outside of the cluster. This occurs because the external IP address is not registered with this hostname.
- Try the connection again using the External IP address associated with the service:
$ curl ${LB_EXTERNAL_IP}
Hello, world!
Version: 2.0.0
Hostname: hello-v2-7db7758bf4-998gf
This time the connection does not fail because the LoadBalancer's external IP address can be reached from outside GCP.
Return to your first Cloud Shell window, which is currently redirecting the STDIN and STDOUT of the dns-demo-1 Pod.
- Use the following command to test the HTTP connection between the Pods.
root@dns-demo-1:/# curl hello-lb-svc.default.svc.cluster.local
Hello, world!
Version: 2.0.0
Hostname: hello-v2-7db7758bf4-qkb42
The internal DNS name works within the Pod, and you can see that you are accessing the same v2 version of the application as you were from outside of the cluster using the external IP address.
Try the connection again within the Pod using the External IP address associated with the service (replace the IP with the external IP of the service created above):
root@dns-demo-1:/# curl 35.193.235.140
Hello, world!
Version: 2.0.0
Hostname: hello-v2-7db7758bf4-crxzf
The external IP also works from inside Pods running in the cluster and returns a result from the same v2 version of the application.
Deploy an Ingress resource
We have two services in our cluster for the "hello" application. One service is hosting version 1.0 via a NodePort service, while the other service is hosting version 2.0 via a LoadBalancer service. We will now deploy an Ingress resource that will direct traffic to both services based on the URL entered by the user.
- Create an Ingress resource
Ingress is a Kubernetes resource that encapsulates a collection of rules and configuration for routing external HTTP(S) traffic to internal services.
On GKE, Ingress is implemented using Cloud Load Balancing. When you create an Ingress resource in your cluster, GKE creates an HTTP(S) load balancer and configures it to route traffic to your application.
- Define and deploy an Ingress resource that directs traffic to our web services based on the path entered:
$ cat << EOF > hello-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /v1
        backend:
          serviceName: hello-svc
          servicePort: 80
      - path: /v2
        backend:
          serviceName: hello-lb-svc
          servicePort: 80
EOF

$ kubectl apply -f hello-ingress.yaml
When we deploy this manifest, Kubernetes creates an ingress resource on your cluster. The ingress controller running in your cluster is responsible for creating an HTTP(S) load balancer to route all external HTTP traffic (on port 80) to the web NodePort service and the LoadBalancer service that we exposed.
- Test your application
- Get the external IP address of the load balancer serving our application:
$ kubectl describe ingress hello-ingress
Name:             hello-ingress
Namespace:        default
Address:          35.244.213.159
Default backend:  default-http-backend:80 (10.8.1.6:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /v1   hello-svc:80 (<none>)
        /v2   hello-lb-svc:80 (<none>)
Annotations:
  [...]
  ingress.kubernetes.io/backends: {"k8s-be-30013--59854b80169ba7aa":"HEALTHY","k8s-be-30100--59854b80169ba7aa":"HEALTHY","k8s-be-30980--59854b80169ba7aa":"HEALTHY"}
  [...]
Events:
  Type    Reason  Age    From                     Message
  ----    ------  ----   ----                     -------
  Normal  ADD     6m34s  loadbalancer-controller  default/hello-ingress
  Normal  CREATE  5m16s  loadbalancer-controller  ip: 35.244.213.159
You may have to wait for a few minutes for the load balancer to become active, and for the health checks to succeed, before the external address will be displayed. Repeat the command every few minutes to check if the Ingress resource has finished initializing.
Use the External IP address associated with the Ingress resource, and type the following command, substituting [external_IP] with the Ingress resource's external IP address. Be sure to include the /v1 in the URL path:
$ curl 35.244.213.159/v1
Hello, world!
Version: 1.0.0
Hostname: hello-v1-5574c4bff6-mbn5
The v1 URL is configured in hello-ingress.yaml to point to the hello-svc NodePort service that directs traffic to the v1 application Pods.
Note: GKE might take a few minutes to set up forwarding rules until the Global load balancer used for the Ingress resource is ready to serve your application. In the meantime, you might get errors such as HTTP 404 or HTTP 500 until the load balancer configuration is propagated across the globe.
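If you prefer to wait for the load balancer rather than retrying by hand, a simple polling loop works (substitute the Ingress external IP):

$ until curl -sf [external_IP]/v1; do echo "waiting for load balancer..."; sleep 10; done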
- Now, test the v2 URL path from Cloud Shell. Use the External IP address associated with the Ingress resource, and type the following command, substituting [external_IP] with the Ingress resource's external IP address. Be sure to include the /v2 in the URL path.
$ curl [external_IP]/v2
Hello, world!
Version: 2.0.0
Hostname: hello-v2-7db7758bf4-998gf
- Inspect the changes to your networking resources in the GCP Console
There are two load balancers listed:
- One was created for the external IP of the hello-lb-svc service. This typically has a UID style name and is configured to load balance TCP port 80 traffic to the cluster nodes.
- The second was created for the Ingress object and is a full HTTP(S) load balancer that includes host and path rules that match the Ingress configuration. This will have hello-ingress in its name.
Click the load balancer with hello-ingress in the name. This will display the summary information about the protocols, ports, paths and backend services of the Ingress load balancer.
The v2 URL is configured in hello-ingress.yaml to point to the hello-lb-svc LoadBalancer service that directs traffic to the v2 application Pods.
Load balancing objects in GKE
| Kubernetes object | How implemented in GKE | Typical usage scenario |
|---|---|---|
| Service of type ClusterIP | GKE networking | Cluster-internal applications and microservices |
| Service of type LoadBalancer | GCP Network Load Balancer (regional) | Application front ends |
| Ingress object, backed by a Service of type NodePort | GCP HTTP(S) Load Balancer (global) | Application front ends; gives access to advanced features like Cloud Armor, Identity-Aware Proxy (beta) |
Persistent Data and Storage
- Volume types:
- emptyDir: Ephemeral. Shares the Pod's lifecycle. (See the example after this list.)
- ConfigMap: Object can be referenced in a volume.
- Secret: Stores sensitive info, such as passwords.
- downwardAPI: Makes data about Pods available to containers.
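For example, an emptyDir volume is declared and mounted as shown below (the names are illustrative); its contents are deleted when the Pod is removed:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}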
- Creating a Pod with an NFS Volume
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - mountPath: /mnt/vol
      name: nfs
  volumes:
  - name: nfs
    nfs:
      server: 10.1.2.3
      path: "/"
      readOnly: false
- Creating and using a compute engine persistent disk
NOTE: This is the old way of mounting persistent volumes. It is no longer a best practice to do the following. Showing here for completeness.
$ gcloud compute disks create \
    --size=100GB \
    --zone=us-west2-a demo-disk
[...]
spec:
  containers:
  - name: demo-container
    image: gcr.io/hello-app:1.0
    volumeMounts:
    - mountPath: /demo-pod
      name: pd-volume
  volumes:
  - name: pd-volume
    gcePersistentDisk:
      pdName: demo-disk   # <- must match the disk name created with gcloud
      fsType: ext4
A better way is to abstract the persistent volume (PV) from the Pod by separating the PV from a Persistent Volume Claim (PVC).
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pd-volume
spec:
  storageClassName: "standard"
  capacity:
    storage: 100G
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: demo-disk
    fsType: ext4
Note: The PVC storageClassName must match the PV storageClassName.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
In GKE, a PVC that does not define a storage class will use the above (default) storage class.
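You can confirm which StorageClass is the cluster default; it is marked "(default)" in the NAME column of the output:

$ kubectl get storageclass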
- Example using SSD:
kind: PersistentVolume
[...]
spec:
  storageClassName: "ssd"
---
kind: StorageClass
[...]
metadata:
  name: ssd
parameters:
  type: pd-ssd
- Volume Access Modes
Access Modes determine how the Volume will read or write. The types of access modes that are available depend on the volume type.
- ReadWriteOnce: mounts the volume as read/write to a single node.
- ReadOnlyMany: mounts a volume as read-only to many nodes.
- ReadWriteMany: mounts volumes as read/write to many nodes.
For most applications, persistent disks are mounted as ReadWriteOnce.
Note: GCP persistent disks do not support ReadWriteMany. However, NFS does.
- Example Persistent Volume Claim (PVC):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pd-volume-claim
spec:
  storageClassName: "standard"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100G
- Use the above PVC in a Pod (i.e., mount it):
kind: Pod
apiVersion: v1
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo-container
    image: gcr.io/hello-app:1.0
    volumeMounts:
    - mountPath: /demo-pod
      name: pd-volume
  volumes:
  - name: pd-volume
    persistentVolumeClaim:
      claimName: pd-volume-claim
The above method abstracts the underlying storage from the Pod: the Pod references only the PVC by name, and the PVC binds to a matching PV.
- An alternative option is dynamic provisioning: instead of pre-creating the PV, you let Kubernetes create it automatically when a PVC requests it. A minimal sketch follows.
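For example, a minimal sketch (the claim name is illustrative, not from the original text) of a PVC that dynamically provisions an SSD-backed volume via the ssd StorageClass defined above:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ssd-volume-claim          # illustrative name
spec:
  storageClassName: "ssd"         # refers to the StorageClass defined above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100G
When this claim is created, the kubernetes.io/gce-pd provisioner creates a matching pd-ssd disk and PV automatically; no PersistentVolume object has to be written by hand.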
- Retain the volume:
[...]
spec:
  persistentVolumeReclaimPolicy: Retain
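The reclaim policy can also be changed on an existing PV with kubectl patch; a minimal sketch, using the pd-volume PV defined earlier:
$ kubectl patch pv pd-volume \
    -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'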
- Regional persistent disks
Increases availability by replicating data between zones:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: regional-pd
  zones: us-west1-a, us-west1-b
In the above example, if there is an outage in one of the zones, GKE automatically fails over to the other (still available) zone.
You can also use persistent volumes with other controllers, such as Deployments and StatefulSets. Remember, a Deployment is simply a Pod template that runs and maintains a set of identical Pods, commonly known as replicas, and is typically used for stateless applications. Deployment replicas can share an existing persistent volume using the ReadOnlyMany or ReadWriteMany access modes; ReadWriteMany can only be used with storage types that support it, such as NFS.
The ReadWriteOnce access mode is not recommended for Deployments, because during an update the replicas need to detach and reattach the persistent volume: the new Pod cannot attach while the old Pod is still attached, but the old Pod is not removed until the new Pod is ready, so neither can make progress. StatefulSets resolve this deadlock. Whenever your application needs to maintain state in persistent volumes, managing it with a StatefulSet rather than a Deployment is the way to go.
Configuring Persistent Storage for Kubernetes Engine
Create PVs and PVCs
In this section, we will create a PVC, which triggers Kubernetes to automatically create a PV.
- Create and apply a manifest with a PVC
Most of the time, you do not need to directly configure PV objects or create Compute Engine persistent disks. Instead, you can create a PVC, and Kubernetes automatically provisions a persistent disk for you.
- Check that there are currently no PVCs defined in the cluster:
$ kubectl get persistentvolumeclaim
No resources found.
- Create a manifest that creates a 30 gigabyte PVC called hello-web-disk, which can be mounted as a read-write volume on a single node at a time:
$ cat << EOF > pvc-demo.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hello-web-disk
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
EOF
$ kubectl apply -f pvc-demo.yaml
$ kubectl get persistentvolumeclaim
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
hello-web-disk   Bound    pvc-b26e69ea-c38a-11e9-8f0d-42010a8001e8   30Gi       RWO            standard       4s
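As a quick check (not part of the original lab steps), you can confirm that a PV was provisioned automatically to satisfy the claim; its CLAIM column should reference hello-web-disk:
$ kubectl get persistentvolume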
Mount and verify GCP persistent disk PVCs in Pods
In this section, we will attach our persistent disk PVC to a Pod. You mount the PVC as a volume as part of the manifest for the Pod.
- Mount the PVC to a Pod
The following manifest deploys an Nginx container, attaches the pvc-demo-volume volume to the Pod, and mounts that volume at the path /var/www/html inside the Nginx container. Files saved to this directory inside the container are written to the persistent volume and persist even if the Pod and the container are shut down and recreated:
$ cat << EOF > pod-volume-demo.yaml
kind: Pod
apiVersion: v1
metadata:
  name: pvc-demo-pod
spec:
  containers:
  - name: frontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: pvc-demo-volume
  volumes:
  - name: pvc-demo-volume
    persistentVolumeClaim:
      claimName: hello-web-disk
EOF
$ kubectl apply -f pod-volume-demo.yaml
$ kubectl get pods
NAME           READY   STATUS              RESTARTS   AGE
pvc-demo-pod   0/1     ContainerCreating   0          13s
If you run this shortly after creating the Pod, you will see the status listed as "ContainerCreating" while the volume is being mounted, before the status changes to "Running".
- Verify the PVC is accessible within the Pod:
$ kubectl exec -it pvc-demo-pod -- sh
- Create a simple text message as a web page in the Pod:
# echo "Test webpage in a persistent volume!" > /var/www/html/index.html
# chmod +x /var/www/html/index.html
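Exit the Pod's shell when you are done. As an optional sanity check (not in the original steps), the same file can also be read from outside the Pod with a one-off exec:
# exit
$ kubectl exec pvc-demo-pod -- cat /var/www/html/index.html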
- Test the persistence of the PV
Let's delete the Pod from the cluster, confirm that the PV still exists, then redeploy the Pod and verify the contents of the PV remain intact.
- Delete the pvc-demo-pod:
$ kubectl delete pod pvc-demo-pod
- List the Pods in the cluster:
$ kubectl get pods
No resources found.
$ kubectl get persistentvolumeclaim
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
hello-web-disk   Bound    pvc-b26e69ea-c38a-11e9-8f0d-42010a8001e8   30Gi       RWO            standard       3m55s
Our PVC still exists, and was not deleted when the Pod was deleted.
- Redeploy the pvc-demo-pod:
$ kubectl apply -f pod-volume-demo.yaml
$ kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
pvc-demo-pod   1/1     Running   0          3m48s
The Pod will deploy and the status will change to "Running" faster this time because the PV already exists and does not need to be created.
- Verify the PVC is still accessible within the Pod:
$ kubectl exec -it pvc-demo-pod -- sh
# cat /var/www/html/index.html
Test webpage in a persistent volume!
The contents of the persistent volume were not removed, even though the Pod was deleted from the cluster and recreated.
Create StatefulSets with PVCs
In this section, we use our PVC in a StatefulSet. A StatefulSet is like a Deployment, except that the Pods are given unique identifiers.
- Release the PVC
- Before we can use the PVC with the StatefulSet, we must delete the Pod that is currently using it:
$ kubectl delete pod pvc-demo-pod
- Create a StatefulSet
- Create a StatefulSet that includes a LoadBalancer Service and three replicas of a Pod containing an Nginx container, plus a volumeClaimTemplate for 30 gigabyte PVCs named hello-web-disk. The Nginx containers mount the PVC called hello-web-disk at /var/www/html, as in the previous task:
$ cat << EOF > statefulset-demo.yaml
kind: Service
apiVersion: v1
metadata:
  name: statefulset-demo-service
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  type: LoadBalancer
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: statefulset-demo
spec:
  selector:
    matchLabels:
      app: MyApp
  serviceName: statefulset-demo-service
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: MyApp
    spec:
      containers:
      - name: stateful-set-container
        image: nginx
        ports:
        - containerPort: 80
          name: http
        volumeMounts:
        - name: hello-web-disk
          mountPath: "/var/www/html"
  volumeClaimTemplates:
  - metadata:
      name: hello-web-disk
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 30Gi
EOF
$ kubectl apply -f statefulset-demo.yaml
You now have a StatefulSet running behind a Service named statefulset-demo-service.
- Verify the connection of Pods in StatefulSets
- View the details of the StatefulSet:
$ kubectl describe statefulset statefulset-demo
Note the events at the end of the output; they show that the Service and StatefulSet were created successfully.
$ kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
statefulset-demo-0   1/1     Running   0          110s
statefulset-demo-1   1/1     Running   0          86s
statefulset-demo-2   1/1     Running   0          65s
- List the PVCs associated with the above StatefulSet:
$ kubectl get pvc
NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
hello-web-disk                      Bound    pvc-b26e69ea-c38a-11e9-8f0d-42010a8001e8   30Gi       RWO            standard       10m
hello-web-disk-statefulset-demo-0   Bound    pvc-d41e3ebd-c38b-11e9-8f0d-42010a8001e8   30Gi       RWO            standard       2m13s
hello-web-disk-statefulset-demo-1   Bound    pvc-e1fa6ed4-c38b-11e9-8f0d-42010a8001e8   30Gi       RWO            standard       109s
hello-web-disk-statefulset-demo-2   Bound    pvc-ee789c40-c38b-11e9-8f0d-42010a8001e8   30Gi       RWO            standard       88s
The original hello-web-disk PVC is still there, and you can now see the individual PVCs that were created for each Pod in the new StatefulSet.
- View the details of the first PVC in the StatefulSet:
$ kubectl describe pvc hello-web-disk-statefulset-demo-0
- Verify the persistence of Persistent Volume connections to Pods managed by StatefulSets
In this section, we will verify the connection of Pods in StatefulSets to particular PVs as the Pods are stopped and restarted.
- Verify that the PVC is accessible within the Pod:
$ kubectl exec -it statefulset-demo-0 -- sh
- Verify that there is no index.html text file in the /var/www/html directory:
# cat /var/www/html/index.html
cat: /var/www/html/index.html: No such file or directory
- Create a simple text message as a web page in the Pod:
# echo "Test webpage in a persistent volume!" > /var/www/html/index.html
# chmod +x /var/www/html/index.html
- Exit the Pod's shell, then delete the Pod where you updated the file on the PVC:
# exit
$ kubectl delete pod statefulset-demo-0
- List the Pods in the cluster:
$ kubectl get pods
NAME                 READY   STATUS              RESTARTS   AGE
statefulset-demo-0   0/1     ContainerCreating   0          11s
statefulset-demo-1   1/1     Running             0          6m1s
statefulset-demo-2   1/1     Running             0          5m40s
You will see that the StatefulSet automatically restarts the statefulset-demo-0 Pod. Wait until the Pod status shows that it is running again.
- Connect to the shell on the new statefulset-demo-0 Pod:
$ kubectl exec -it statefulset-demo-0 -- sh
# cat /var/www/html/index.html
Test webpage in a persistent volume!
The StatefulSet restarts the Pod and reconnects the existing dedicated PVC to the new Pod, ensuring that the data for that Pod is preserved.
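Optional cleanup (not part of the original lab text): when you are done, you can delete the StatefulSet, its Service, and the PVCs. Note that PVCs created from volumeClaimTemplates are intentionally left behind when the StatefulSet is deleted, so they must be removed explicitly:
$ kubectl delete statefulset statefulset-demo
$ kubectl delete service statefulset-demo-service
$ kubectl delete pvc hello-web-disk \
    hello-web-disk-statefulset-demo-0 \
    hello-web-disk-statefulset-demo-1 \
    hello-web-disk-statefulset-demo-2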