</pre>
The output should look like the example. Your output might not be an exact match but it will show that the current revision has rolled back to <code>nginx:1.7.9</code>.
<pre>

===Perform a canary deployment===
A canary deployment is a separate deployment used to test a new version of your application. A single service targets both the canary and the normal deployments, and it can direct a subset of users to the canary version to mitigate the risk of new releases. The manifest file <code>nginx-canary.yaml</code> that is provided for you deploys a single Pod running a newer version of Nginx than your main deployment. In this task, you create a canary deployment using this new deployment file.
<pre>
apiVersion: apps/v1
</pre>
The manifest for the Nginx Service you deployed in the previous task uses a label selector to target the Pods with the <code>app: nginx</code> label. Both the normal deployment and this new canary deployment have the <code>app: nginx</code> label. Inbound connections will be distributed by the service to both the normal and canary deployment Pods. The canary deployment has fewer replicas (Pods) than the normal deployment, and thus it is available to fewer users than the normal deployment.

* Create the canary deployment based on the configuration file.
</pre>
Switch back to the browser tab that is connected to the external LoadBalancer service IP and refresh the page. You should continue to see the standard "Welcome to nginx" page.

Switch back to the Cloud Shell and scale down the primary deployment to 0 replicas.
</pre>
Switch back to the browser tab that is connected to the external LoadBalancer service IP and refresh the page. You should continue to see the standard "Welcome to nginx" page showing that the Service is automatically balancing traffic to the canary deployment.

Note: Session affinity
The Service configuration used in the lab does not ensure that all requests from a single client will always connect to the same Pod. Each request is treated separately and can connect to either the normal Nginx deployment or to the nginx-canary deployment. This potential to switch between different versions may cause problems if there are significant changes in functionality in the canary release. To prevent this, you can set the <code>sessionAffinity</code> field to <code>ClientIP</code> in the specification of the service if you need a client's first request to determine which Pod will be used for all subsequent connections.

For example:
</pre>
<blockquote>
'''Note'''
CronJobs use the required schedule field, which accepts time in the Unix standard crontab format. All CronJob times are in UTC:
* The first value indicates the minute (between 0 and 59).
* The second value indicates the hour (between 0 and 23).
* The third value indicates the day of the month (between 1 and 31).
* The fourth value indicates the month (between 1 and 12).
* The fifth value indicates the day of the week (between 0 and 6).
The schedule field also accepts <code>*</code> and <code>?</code> as wildcard values. Combining <code>/</code> with ranges specifies that the task should repeat at a regular interval. In the example, <code>*/1 * * * *</code> indicates that the task should repeat every minute of every day of every month.
</blockquote>
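For illustration (these values are not from the lab files), a few schedule strings and what they mean:
<pre>
*/1 * * * *   every minute
0 * * * *     at minute 0 of every hour
30 2 * * 0    at 02:30 UTC every Sunday
0 0 1 * *     at midnight UTC on the first day of every month
</pre>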
To create a Job from this file, execute the following command:
</pre>
View the output of the Job by querying the logs for the Pod. Replace <code><pod-name></code> with the name of the Pod you recorded in the last step.
<pre>
$ kubectl logs <pod-name>
</pre>
All the Jobs were removed.

==Cluster scaling==
</pre>

; Inspect the ''Horizontal Pod Autoscaler'' object
The kubectl autoscale command you used in the previous task creates a HorizontalPodAutoscaler object that targets a specified resource, called the scale target, and scales it as needed. The autoscaler periodically adjusts the number of replicas of the scale target to match the average CPU utilization that you specify when creating the autoscaler.
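For reference, an autoscaler of this kind is created with a command of the following form (the deployment name and thresholds shown are illustrative; the earlier task defines the actual values used in the lab):
<pre>
$ kubectl autoscale deployment web --min 1 --max 4 --cpu-percent 1
</pre>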
* To get the list of ''Horizontal Pod Autoscaler'' resources, execute the following command:
<pre>
$ kubectl get hpa
</pre>

* To inspect the configuration of the ''Horizontal Pod Autoscaler'', execute the following command:
<pre>
$ kubectl describe horizontalpodautoscaler web
Name: web
Namespace: default
You need to create a heavy load on the web application to force it to scale out. You create a configuration file that defines a deployment of four containers that run an infinite loop of HTTP queries against the sample application web server.

You create the load on your web application by deploying the loadgen application using the <code>loadgen.yaml</code> file that has been provided for you.
<pre>
apiVersion: apps/v1
</pre>

* Inspect the ''Horizontal Pod Autoscaler'':
<pre>
$ kubectl get hpa
Once the loadgen Pod starts to generate traffic, the web deployment CPU utilization begins to increase. In the example output, the targets are now at 35% CPU utilization compared to the 1% CPU threshold.

* After a few minutes, inspect the ''Horizontal Pod Autoscaler'' again:
<pre>
$ kubectl get hpa
You should now have one deployment of the web application.

==Managing node pools==
+ | |||
In this section, we will create a new pool of nodes using preemptible instances, and then constrain the web deployment to run only on the preemptible nodes.
+ | |||
+ | ; Add a node pool | ||
+ | |||
* To deploy a new node pool with two preemptible VM instances, execute the following command:
+ | <pre> | ||
+ | $ gcloud container node-pools create "temp-pool-1" \ | ||
+ | --cluster=$my_cluster --zone=$my_zone \ | ||
+ | --num-nodes "2" --node-labels=temp=true --preemptible | ||
+ | </pre> | ||
+ | If you receive an error that no preemptible instances are available you can remove the <code>--preemptible</code> option to proceed with the lab. | ||
+ | |||
+ | * Get the list of nodes to verify that the new nodes are ready: | ||
+ | <pre> | ||
+ | $ kubectl get nodes | ||
+ | NAME STATUS ROLES AGE VERSION | ||
+ | gke-standard-cluster-1-default-pool-61fba731-01mc Ready <none> 21m v1.12.8-gke.10 | ||
+ | gke-standard-cluster-1-default-pool-61fba731-bvfx Ready <none> 21m v1.12.8-gke.10 | ||
+ | gke-standard-cluster-1-temp-pool-1-e8966c96-nccc Ready <none> 46s v1.12.8-gke.10 | ||
+ | gke-standard-cluster-1-temp-pool-1-e8966c96-pk21 Ready <none> 43s v1.12.8-gke.10 | ||
+ | </pre> | ||
+ | |||
You should now have 4 nodes. (Your node names will be different from the example output.)

All the nodes that you added have the <code>temp=true</code> label because you set that label when you created the node pool. This label makes it easier to locate and configure these nodes.
+ | |||
+ | * To list only the nodes with the temp=true label, execute the following command: | ||
+ | <pre> | ||
+ | $ kubectl get nodes -l temp=true | ||
+ | NAME STATUS ROLES AGE VERSION | ||
+ | gke-standard-cluster-1-temp-pool-1-e8966c96-nccc Ready <none> 2m1s v1.12.8-gke.10 | ||
+ | gke-standard-cluster-1-temp-pool-1-e8966c96-pk21 Ready <none> 118s v1.12.8-gke.10 | ||
+ | </pre> | ||
+ | |||
+ | ; Control scheduling with taints and tolerations | ||
+ | |||
To prevent the scheduler from running a Pod on the temporary nodes, you add a taint to each of the nodes in the temp pool. Taints are implemented as a key-value pair with an effect (such as NoExecute) that determines whether Pods can run on a certain node. Only Pods that are configured to tolerate the key-value of the taint are scheduled to run on these nodes.
+ | |||
+ | To add a taint to each of the newly created nodes, execute the following command. | ||
+ | You can use the <code>temp=true</code> label to apply this change across all the new nodes simultaneously. | ||
+ | <pre> | ||
+ | $ kubectl taint node -l temp=true nodetype=preemptible:NoExecute | ||
+ | node/gke-standard-cluster-1-temp-pool-1-e8966c96-nccc tainted | ||
+ | node/gke-standard-cluster-1-temp-pool-1-e8966c96-pk21 tainted | ||
+ | |||
+ | $ kubectl describe nodes | grep ^Taints | ||
+ | Taints: <none> | ||
+ | Taints: <none> | ||
+ | Taints: nodetype=preemptible:NoExecute | ||
+ | Taints: nodetype=preemptible:NoExecute | ||
+ | </pre> | ||
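If you later need to remove this taint (for example, before deleting the temporary pool), the same <code>kubectl taint</code> command with a trailing hyphen removes it; a sketch assuming the same label and taint key:
<pre>
$ kubectl taint node -l temp=true nodetype=preemptible:NoExecute-
</pre>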
+ | |||
+ | To allow application Pods to execute on these tainted nodes, you must add a tolerations key to the deployment configuration. | ||
+ | |||
+ | Edit the <code>web.yaml</code> file to add the following key in the template's <code>spec</code> section: | ||
+ | <pre> | ||
+ | tolerations: | ||
+ | - key: "nodetype" | ||
+ | operator: Equal | ||
+ | value: "preemptible" | ||
+ | </pre> | ||
+ | |||
+ | The <code>spec</code> section of the file should look like the following: | ||
+ | <pre> | ||
+ | ... | ||
+ | spec: | ||
+ | tolerations: | ||
+ | - key: "nodetype" | ||
+ | operator: Equal | ||
+ | value: "preemptible" | ||
+ | containers: | ||
+ | - image: gcr.io/google-samples/hello-app:1.0 | ||
+ | name: web | ||
+ | ports: | ||
+ | - containerPort: 8080 | ||
+ | protocol: TCP | ||
+ | </pre> | ||
+ | |||
To force the web deployment to use the new node pool, add a <code>nodeSelector</code> key in the template's <code>spec</code> section, at the same level as the <code>tolerations</code> key you just added.
<pre>
nodeSelector:
  temp: "true"
</pre>
Note: GKE adds a custom label to each node called <code>cloud.google.com/gke-nodepool</code>, which contains the name of the node pool that the node belongs to. This key can also be used as part of a <code>nodeSelector</code> to ensure Pods are only deployed to suitable nodes.
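For example, a <code>nodeSelector</code> keyed on that GKE label, using the <code>temp-pool-1</code> pool created earlier, would look like this sketch:
<pre>
nodeSelector:
  cloud.google.com/gke-nodepool: temp-pool-1
</pre>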
+ | |||
+ | The full <code>web.yaml</code> deployment should now look as follows: | ||
+ | <pre> | ||
+ | apiVersion: extensions/v1beta1 | ||
+ | kind: Deployment | ||
+ | metadata: | ||
+ | name: web | ||
+ | spec: | ||
+ | replicas: 1 | ||
+ | selector: | ||
+ | matchLabels: | ||
+ | run: web | ||
+ | template: | ||
+ | metadata: | ||
+ | labels: | ||
+ | run: web | ||
+ | spec: | ||
+ | tolerations: | ||
+ | - key: "nodetype" | ||
+ | operator: Equal | ||
+ | value: "preemptible" | ||
+ | nodeSelector: | ||
+ | temp: "true" | ||
+ | containers: | ||
+ | - image: gcr.io/google-samples/hello-app:1.0 | ||
+ | name: web | ||
+ | ports: | ||
+ | - containerPort: 8080 | ||
+ | protocol: TCP | ||
+ | </pre> | ||
+ | |||
+ | To apply this change, execute the following command: | ||
+ | <pre> | ||
+ | kubectl apply -f web.yaml | ||
+ | </pre> | ||
+ | |||
+ | If you have problems editing this file successfully, you can use the pre-prepared sample file called <code>web-tolerations.yaml</code> instead. | ||
+ | |||
+ | * Get the list of Pods: | ||
+ | <pre> | ||
+ | $ kubectl get pods | ||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | web-7cb566bccd-pkfst 1/1 Running 0 1m | ||
+ | </pre> | ||
+ | |||
+ | To confirm the change, inspect the running web Pod(s) using the following command: | ||
+ | <pre> | ||
+ | $ kubectl describe pods -l run=web | ||
+ | </pre> | ||
+ | |||
+ | A Tolerations section with <code>nodetype=preemptible</code> in the list should appear near the bottom of the (truncated) output. | ||
+ | <pre> | ||
+ | ... | ||
+ | Node-Selectors: <none> | ||
+ | Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s | ||
+ | node.kubernetes.io/unreachable:NoExecute for 300s | ||
+ | nodetype=preemptible | ||
+ | Events: | ||
+ | ... | ||
+ | </pre> | ||
+ | |||
+ | The output confirms that the Pods will tolerate the taint value on the new preemptible nodes, and thus that they can be scheduled to execute on those nodes. | ||
+ | |||
+ | To force the web application to scale out again, scale the loadgen deployment back to four replicas: | ||
+ | <pre> | ||
+ | $ kubectl scale deployment loadgen --replicas 4 | ||
+ | </pre> | ||
+ | |||
You could scale just the web application directly, but using the loadgen app will allow you to see how the different taint, toleration, and <code>nodeSelector</code> settings that apply to the web and loadgen applications affect which nodes they are scheduled on.
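If you did want to scale the web deployment directly, the command would take this form (a sketch; the replica count is illustrative):
<pre>
$ kubectl scale deployment web --replicas 4
</pre>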
+ | |||
+ | Get the list of Pods using the wide output format to show the nodes running the Pods: | ||
+ | <pre> | ||
+ | $ kubectl get pods -o wide | ||
+ | </pre> | ||
+ | |||
This shows that the loadgen app is running only on <code>default-pool</code> nodes, while the web app is running only on the preemptible nodes in <code>temp-pool-1</code>.
+ | |||
The taint setting prevents Pods without a matching toleration from running on the preemptible nodes, so the loadgen application only runs on the default pool. The toleration setting allows the web application to run on the preemptible nodes, and the nodeSelector forces the web application Pods to run on those nodes.
+ | <pre> | ||
+ | NAME READY STATUS [...] NODE | ||
+ | Loadgen-x0 1/1 Running [...] gke-xx-default-pool-y0 | ||
+ | loadgen-x1 1/1 Running [...] gke-xx-default-pool-y2 | ||
+ | loadgen-x3 1/1 Running [...] gke-xx-default-pool-y3 | ||
+ | loadgen-x4 1/1 Running [...] gke-xx-default-pool-y4 | ||
+ | web-x1 1/1 Running [...] gke-xx-temp-pool-1-z1 | ||
+ | web-x2 1/1 Running [...] gke-xx-temp-pool-1-z2 | ||
+ | web-x3 1/1 Running [...] gke-xx-temp-pool-1-z3 | ||
+ | web-x4 1/1 Running [...] gke-xx-temp-pool-1-z4 | ||
+ | </pre> | ||
+ | |||
+ | ==Deploying Kubernetes Engine via Helm Charts== | ||
+ | |||
+ | Ensure your user account has the cluster-admin role in your cluster. | ||
+ | <pre> | ||
+ | $ kubectl create clusterrolebinding user-admin-binding \ | ||
+ | --clusterrole=cluster-admin \ | ||
+ | --user=$(gcloud config get-value account) | ||
+ | </pre> | ||
+ | |||
* Create a Kubernetes service account that Tiller (the server side of Helm) can use for deploying charts:
+ | <pre> | ||
+ | $ kubectl create serviceaccount tiller --namespace kube-system | ||
+ | </pre> | ||
+ | |||
+ | * Grant the Tiller service account the cluster-admin role in your cluster: | ||
+ | <pre> | ||
+ | $ kubectl create clusterrolebinding tiller-admin-binding \ | ||
+ | --clusterrole=cluster-admin \ | ||
+ | --serviceaccount=kube-system:tiller | ||
+ | </pre> | ||
+ | |||
+ | * Execute the following commands to initialize Helm using the service account: | ||
+ | <pre> | ||
+ | $ helm init --service-account=tiller | ||
+ | |||
+ | $ kubectl -n kube-system get pods | grep ^tiller | ||
+ | tiller-deploy-8548d8bd7c-l548r 1/1 Running 0 18s | ||
+ | |||
+ | $ helm repo update | ||
+ | |||
+ | $ helm version | ||
+ | Client: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"} | ||
+ | Server: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"} | ||
+ | </pre> | ||
+ | |||
+ | Execute the following command to deploy a set of resources to create a Redis service on the active context cluster: | ||
+ | <pre> | ||
+ | $ helm install stable/redis | ||
+ | </pre> | ||
+ | |||
+ | A Helm chart is a package of resource configuration files, along with configurable parameters. This single command deployed a collection of resources. | ||
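Chart parameters can also be overridden at install time. A sketch using Helm v2 syntax (the release name and parameter shown are illustrative and not part of the lab):
<pre>
$ helm install --name demo-redis stable/redis --set cluster.slaveCount=3
</pre>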
+ | |||
+ | A Kubernetes Service defines a set of Pods and a stable endpoint by which network traffic can access them. In Cloud Shell, execute the following command to view Services that were deployed through the Helm chart: | ||
+ | <pre> | ||
+ | $ kubectl get services | ||
+ | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE | ||
+ | kubernetes ClusterIP 10.12.0.1 <none> 443/TCP 3m24s | ||
+ | opining-wolverine-redis-headless ClusterIP None <none> 6379/TCP 11s | ||
+ | opining-wolverine-redis-master ClusterIP 10.12.5.246 <none> 6379/TCP 11s | ||
+ | opining-wolverine-redis-slave ClusterIP 10.12.14.196 <none> 6379/TCP 11s | ||
+ | </pre> | ||
+ | |||
+ | A Kubernetes StatefulSet manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods. In Cloud Shell, execute the following commands to view a StatefulSet that was deployed through the Helm chart: | ||
+ | <pre> | ||
+ | $ kubectl get statefulsets | ||
+ | NAME DESIRED CURRENT AGE | ||
+ | opining-wolverine-redis-master 1 1 59s | ||
+ | opining-wolverine-redis-slave 2 2 59s | ||
+ | </pre> | ||
+ | |||
A Kubernetes ConfigMap lets you store and manage configuration artifacts, so that they are decoupled from container-image content. In Cloud Shell, execute the following commands to view ConfigMaps that were deployed through the Helm chart:
+ | <pre> | ||
+ | $ kubectl get configmaps | ||
+ | NAME DATA AGE | ||
+ | opining-wolverine-redis 3 95s | ||
+ | opining-wolverine-redis-health 6 95s | ||
+ | </pre> | ||
+ | |||
A Kubernetes Secret, like a ConfigMap, lets you store and manage configuration artifacts, but it's specially intended for sensitive information such as passwords and authorization keys. In Cloud Shell, execute the following commands to view the Secrets that were deployed through the Helm chart:
+ | <pre> | ||
+ | $ kubectl get secrets | ||
+ | NAME TYPE DATA AGE | ||
+ | opining-wolverine-redis Opaque 1 2m5s | ||
+ | </pre> | ||
+ | |||
+ | You can inspect the Helm chart directly using the following command: | ||
+ | <pre> | ||
+ | $ helm inspect stable/redis | ||
+ | </pre> | ||
+ | |||
+ | If you want to see the templates that the Helm chart deploys you can use the following command: | ||
+ | <pre> | ||
+ | $ helm install stable/redis --dry-run --debug | ||
+ | </pre> | ||
+ | |||
+ | ; Test Redis functionality | ||
+ | |||
In this task, you store and retrieve values in the new Redis deployment running in your Kubernetes Engine cluster.

Execute the following command to store the service IP address for the Redis cluster in an environment variable:
+ | <pre> | ||
+ | $ export REDIS_IP=$(kubectl get services -l app=redis -o json | jq -r '.items[].spec | select(.selector.role=="master")' | jq -r '.clusterIP') | ||
+ | </pre> | ||
+ | |||
+ | Retrieve the Redis password and store it in an environment variable: | ||
+ | <pre> | ||
+ | $ export REDIS_PW=$(kubectl get secret -l app=redis -o jsonpath="{.items[0].data.redis-password}" | base64 --decode) | ||
+ | </pre> | ||
+ | |||
+ | * Display the Redis cluster address and password: | ||
+ | <pre> | ||
+ | $ echo Redis Cluster Address : $REDIS_IP | ||
+ | $ echo Redis auth password : $REDIS_PW | ||
+ | </pre> | ||
+ | |||
+ | * Open an interactive shell to a temporary Pod, passing in the cluster address and password as environment variables: | ||
+ | <pre> | ||
+ | $ kubectl run redis-test --rm --tty -i --restart='Never' \ | ||
+ | --env REDIS_PW=$REDIS_PW \ | ||
+ | --env REDIS_IP=$REDIS_IP \ | ||
+ | --image docker.io/bitnami/redis:4.0.12 -- bash | ||
+ | </pre> | ||
+ | |||
+ | * Connect to the Redis cluster: | ||
+ | <pre> | ||
+ | # redis-cli -h $REDIS_IP -a $REDIS_PW | ||
+ | </pre> | ||
+ | |||
+ | * Set a key value: | ||
+ | <pre> | ||
+ | set mykey this_amazing_value | ||
+ | </pre> | ||
+ | This will display OK if successful. | ||
+ | |||
+ | * Retrieve the key value: | ||
+ | <pre> | ||
+ | get mykey | ||
+ | </pre> | ||
+ | |||
+ | This will return the value you stored indicating that the Redis cluster can successfully store and retrieve data. | ||
+ | |||
+ | ==Network security== | ||
+ | |||
+ | ===Network policy=== | ||
+ | |||
+ | A Pod-level firewall restricting access to other Pods and Services. (Disabled by default in GKE.) | ||
+ | |||
+ | Must be enabled: | ||
+ | * Requires at least 2 nodes of n1-standard-1 or higher (recommended minimum of 3 nodes) | ||
+ | * Requires nodes to be recreated | ||
+ | * Enable network policy for a new cluster: | ||
+ | <pre> | ||
+ | $ gcloud container clusters create <name> \ | ||
+ | --enable-network-policy | ||
+ | </pre> | ||
* Enable a network policy for an existing cluster (two steps: first enable the add-on, then enable it on the nodes):
<pre>
$ gcloud container clusters update <name> \
    --update-addons=NetworkPolicy=ENABLED
$ gcloud container clusters update <name> \
    --enable-network-policy
</pre>
* Disable a network policy for an existing cluster:
<pre>
$ gcloud container clusters update <name> \
    --no-enable-network-policy
</pre>
+ | |||
+ | ; Writing a network policy | ||
+ | <pre> | ||
+ | apiVersion: networking.k8s.io/v1 | ||
+ | kind: NetworkPolicy | ||
+ | metadata: | ||
+ | name: demo-network-policy | ||
+ | namespace: default | ||
+ | spec: | ||
+ | podSelector: | ||
+ | matchLabels: | ||
+ | role: demo-app | ||
+ | policyTypes: | ||
+ | - Ingress | ||
+ | - Egress | ||
+ | ingress: | ||
+ | - from: | ||
+ | - ipBlock: | ||
+ | cidr: 172.17.0.0/16 | ||
except:
- 172.17.1.0/24
+ | - namespaceSelector: | ||
+ | matchLabels: | ||
+ | project: myproject | ||
+ | - podSelector: | ||
+ | matchLabels: | ||
+ | role: frontend | ||
+ | ports: | ||
+ | - protocol: TCP | ||
+ | port: 6379 | ||
+ | |||
+ | egress: | ||
+ | - to: | ||
+ | - ipBlock: | ||
+ | cidr: 10.0.0.0/24 | ||
+ | ports: | ||
+ | - protocol: TCP | ||
+ | port: 5978 | ||
+ | </pre> | ||
+ | |||
+ | ; Network policy defaults | ||
+ | |||
+ | * Pros: | ||
+ | ** Limits "attack surface" of Pods in your cluster. | ||
+ | * Cons: | ||
+ | ** A lot of work to manage (use Istio instead) | ||
+ | |||
<pre>
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
</pre>

<pre>
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Egress
</pre>

<pre>
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
</pre>

<pre>
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - {}
</pre>

<pre>
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - {}
</pre>
+ | |||
+ | ===Setup a private GKE cluster=== | ||
+ | |||
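The <code>private-cluster</code> inspected below is assumed to have been created in an earlier step; a command along the following lines would create it (the flags and CIDR range shown are illustrative):
<pre>
$ gcloud beta container clusters create private-cluster \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.16/28 \
    --enable-ip-alias \
    --create-subnetwork ""
</pre>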
+ | In the Cloud Shell, enter the following command to review the details of your new cluster: | ||
+ | <pre> | ||
$ gcloud container clusters describe private-cluster --zone us-central1-a
+ | </pre> | ||
+ | |||
+ | * The following values appear only under the private cluster: | ||
+ | ;<code>privateEndpoint</code> : an internal IP address. Nodes use this internal IP address to communicate with the cluster master. | ||
+ | ;<code>publicEndpoint</code> : an external IP address. External services and administrators can use the external IP address to communicate with the cluster master. | ||
+ | |||
+ | * You have several options to lock down your cluster to varying degrees: | ||
+ | ** The whole cluster can have external access. | ||
+ | ** The whole cluster can be private. | ||
+ | ** The nodes can be private while the cluster master is public, and you can limit which external networks are authorized to access the cluster master. | ||
+ | |||
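For the last of these options, master authorized networks can be enabled with flags along these lines (the CIDR range is illustrative):
<pre>
$ gcloud container clusters update private-cluster \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/24
</pre>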
+ | Without public IP addresses, code running on the nodes cannot access the public Internet unless you configure a NAT gateway such as Cloud NAT. | ||
+ | |||
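A minimal Cloud NAT setup for such a cluster might look like this sketch (the router and NAT configuration names, network, and region are illustrative):
<pre>
$ gcloud compute routers create nat-router \
    --network default --region us-central1
$ gcloud compute routers nats create nat-config \
    --router nat-router --region us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
</pre>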
+ | You might use private clusters to provide services such as internal APIs that are meant only to be accessed by resources inside your network. For example, the resources might be private tools that only your company uses. Or they might be backend services accessed by your frontend services, and perhaps only those frontend services are accessed directly by external customers or users. In such cases, private clusters are a good way to reduce the surface area of attack for your application. | ||
+ | |||
+ | ===Restrict incoming traffic to Pods=== | ||
+ | |||
+ | First, we will create a GKE cluster to use for the demos below. | ||
+ | |||
+ | ; Create a GKE cluster | ||
+ | |||
+ | * In Cloud Shell, type the following command to set the environment variable for the zone and cluster name: | ||
+ | <pre> | ||
+ | export my_zone=us-central1-a | ||
+ | export my_cluster=standard-cluster-1 | ||
+ | </pre> | ||
+ | |||
+ | * Configure kubectl tab completion in Cloud Shell: | ||
+ | <pre> | ||
+ | source <(kubectl completion bash) | ||
+ | </pre> | ||
+ | |||
+ | * Create a Kubernetes cluster (note that this command adds the additional flag <code>--enable-network-policy</code>. This flag allows this cluster to use cluster network policies): | ||
+ | <pre> | ||
+ | $ gcloud container clusters create $my_cluster \ | ||
+ | --num-nodes 2 \ | ||
+ | --enable-ip-alias \ | ||
+ | --zone $my_zone \ | ||
+ | --enable-network-policy | ||
+ | </pre> | ||
+ | |||
+ | * Configure access to your cluster for the <code>kubectl</code> command-line tool: | ||
+ | <pre> | ||
+ | $ gcloud container clusters get-credentials $my_cluster --zone $my_zone | ||
+ | </pre> | ||
+ | |||
+ | Run a simple web server application with the label <code>app=hello</code>, and expose the web application internally in the cluster: | ||
+ | <pre> | ||
+ | $ kubectl run hello-web --labels app=hello \ | ||
+ | --image=gcr.io/google-samples/hello-app:1.0 --port 8080 --expose | ||
+ | </pre> | ||
+ | <!-- | ||
+ | In Cloud Shell enter the following command to clone the repository to the lab Cloud Shell. | ||
+ | |||
+ | $ git clone https://github.com/GoogleCloudPlatformTraining/training-data-analyst | ||
+ | |||
+ | Change to the directory that contains the sample files for this lab. | ||
+ | |||
+ | $ cd ~/training-data-analyst/courses/ak8s/09_GKE_Networks/ | ||
+ | --> | ||
+ | |||
+ | ; Restrict incoming traffic to Pods | ||
+ | |||
+ | * The following <code>NetworkPolicy</code> manifest file defines an ingress policy that allows access to Pods labeled <code>app: hello</code> from Pods labeled <code>app: foo</code>: | ||
+ | <pre> | ||
+ | $ cat << EOF > hello-allow-from-foo.yaml | ||
+ | kind: NetworkPolicy | ||
+ | apiVersion: networking.k8s.io/v1 | ||
+ | metadata: | ||
+ | name: hello-allow-from-foo | ||
+ | spec: | ||
+ | policyTypes: | ||
+ | - Ingress | ||
+ | podSelector: | ||
+ | matchLabels: | ||
+ | app: hello | ||
+ | ingress: | ||
+ | - from: | ||
+ | - podSelector: | ||
+ | matchLabels: | ||
+ | app: foo | ||
+ | EOF | ||
+ | |||
+ | $ kubectl apply -f hello-allow-from-foo.yaml | ||
+ | |||
+ | $ kubectl get networkpolicy | ||
+ | NAME POD-SELECTOR AGE | ||
+ | hello-allow-from-foo app=hello 7s | ||
+ | </pre> | ||
+ | |||
+ | ; Validate the ingress policy | ||
+ | |||
+ | * Run a temporary Pod called <code>test-1</code> with the label <code>app=foo</code> and get a shell in the Pod: | ||
+ | <pre> | ||
+ | $ kubectl run test-1 --labels app=foo --image=alpine --restart=Never --rm --stdin --tty | ||
+ | </pre> | ||
+ | |||
+ | The kubectl switches used here in conjunction with the run command are important to note: | ||
+ | ;<code>--stdin</code> (alternatively <code>-i</code>) : creates an interactive session attached to STDIN on the container. | ||
+ | ;<code>--tty</code> (alternatively <code>-t</code>) : allocates a TTY for each container in the pod. | ||
+ | ;<code>--rm</code> : instructs Kubernetes to treat this as a temporary Pod that will be removed as soon as it completes its startup task. As this is an interactive session it will be removed as soon as the user exits the session. | ||
;<code>--labels</code> (alternatively <code>-l</code>) : adds a set of labels to the Pod.
;<code>--restart</code> : defines the restart policy for the Pod.
+ | |||
+ | * Make a request to the <code>hello-web:8080</code> endpoint to verify that the incoming traffic is allowed: | ||
+ | <pre> | ||
+ | / # wget -qO- --timeout=2 http://hello-web:8080 | ||
+ | Hello, world! | ||
+ | Version: 1.0.0 | ||
+ | Hostname: hello-web-75f66f69d-qgzjb | ||
+ | / # | ||
+ | </pre> | ||
+ | |||
* Now, run a different Pod using the same Pod name but with a label, <code>app=other</code>, that does not match the <code>podSelector</code> in the active network policy. This Pod should ''not'' have the ability to access the hello-web application:
+ | <pre> | ||
+ | $ kubectl run test-1 --labels app=other --image=alpine --restart=Never --rm --stdin --tty | ||
+ | </pre> | ||
+ | |||
+ | * Make a request to the hello-web:8080 endpoint to verify that the incoming traffic is not allowed: | ||
+ | <pre> | ||
+ | / # wget -qO- --timeout=2 http://hello-web:8080 | ||
+ | wget: download timed out | ||
+ | / # | ||
+ | </pre> | ||
+ | The request times out. | ||
+ | |||
+ | ===Restrict outgoing traffic from the Pods=== | ||
+ | |||
+ | You can restrict outgoing (egress) traffic as you do incoming traffic. However, in order to query internal hostnames (such as <code>hello-web</code>) or external hostnames (such as <code>www.example.com</code>), you must allow DNS resolution in your egress network policies. DNS traffic occurs on port 53, using TCP and UDP protocols. | ||
+ | |||
+ | The following NetworkPolicy manifest file defines a policy that permits Pods with the label <code>app: foo</code> to communicate with Pods labeled <code>app: hello</code> on any port number, and allows the Pods labeled <code>app: foo</code> to communicate to any computer on UDP port 53, which is used for DNS resolution. Without the DNS port open, you will not be able to resolve the hostnames: | ||
+ | <pre> | ||
+ | $ cat << EOF > foo-allow-to-hello.yaml | ||
+ | kind: NetworkPolicy | ||
+ | apiVersion: networking.k8s.io/v1 | ||
+ | metadata: | ||
+ | name: foo-allow-to-hello | ||
+ | spec: | ||
+ | policyTypes: | ||
+ | - Egress | ||
+ | podSelector: | ||
+ | matchLabels: | ||
+ | app: foo | ||
+ | egress: | ||
+ | - to: | ||
+ | - podSelector: | ||
+ | matchLabels: | ||
+ | app: hello | ||
+ | - to: | ||
+ | ports: | ||
+ | - protocol: UDP | ||
+ | port: 53 | ||
+ | EOF | ||
+ | |||
+ | $ kubectl apply -f foo-allow-to-hello.yaml | ||
+ | |||
+ | $ kubectl get networkpolicy | ||
+ | NAME POD-SELECTOR AGE | ||
+ | foo-allow-to-hello app=foo 7s | ||
+ | hello-allow-from-foo app=hello 5m | ||
+ | </pre> | ||
+ | |||
+ | ; Validate the egress policy | ||
+ | |||
+ | * Deploy a new web application called <code>hello-web-2</code> and expose it internally in the cluster: | ||
+ | <pre> | ||
+ | $ kubectl run hello-web-2 --labels app=hello-2 \ | ||
+ | --image=gcr.io/google-samples/hello-app:1.0 --port 8080 --expose | ||
+ | </pre> | ||
+ | |||
+ | * Run a temporary Pod with the <code>app=foo</code> label and get a shell prompt inside the container: | ||
+ | <pre> | ||
+ | $ kubectl run test-3 --labels app=foo --image=alpine --restart=Never --rm --stdin --tty | ||
+ | </pre> | ||
+ | |||
+ | * Verify that the Pod can establish connections to <code>hello-web:8080</code>: | ||
+ | <pre> | ||
+ | / # wget -qO- --timeout=2 http://hello-web:8080 | ||
+ | Hello, world! | ||
+ | Version: 1.0.0 | ||
+ | Hostname: hello-web-75f66f69d-qgzjb | ||
+ | / # | ||
+ | </pre> | ||
+ | |||
+ | * Verify that the Pod cannot establish connections to <code>hello-web-2:8080</code> | ||
+ | <pre> | ||
+ | wget -qO- --timeout=2 http://hello-web-2:8080 | ||
+ | </pre> | ||
+ | This fails because none of the Network policies you have defined allow traffic to Pods labelled <code>app: hello-2</code>. | ||
+ | |||
+ | * Verify that the Pod cannot establish connections to external websites, such as <code>www.example.com</code>: | ||
+ | <pre> | ||
+ | wget -qO- --timeout=2 http://www.example.com | ||
+ | </pre> | ||
This fails because the network policies do not allow external HTTP traffic (TCP port 80).

<pre>
/ # ping -c3 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
</pre>
The ping also fails: the egress policy only allows traffic to Pods labelled <code>app: hello</code> and to UDP port 53, so ICMP traffic to external addresses is blocked as well.
+ | |||
+ | ==Creating Services and Ingress Resources== | ||
+ | |||
+ | ; Create Pods and services to test DNS resolution | ||
+ | |||
+ | * Create a service called <code>dns-demo</code> with two sample application Pods called <code>dns-demo-1</code> and <code>dns-demo-2</code>: | ||
+ | <pre> | ||
+ | $ cat << EOF > dns-demo.yaml | ||
+ | apiVersion: v1 | ||
+ | kind: Service | ||
+ | metadata: | ||
+ | name: dns-demo | ||
+ | spec: | ||
+ | selector: | ||
+ | name: dns-demo | ||
+ | clusterIP: None | ||
+ | ports: | ||
+ | - name: dns-demo | ||
+ | port: 1234 | ||
+ | targetPort: 1234 | ||
+ | --- | ||
+ | apiVersion: v1 | ||
+ | kind: Pod | ||
+ | metadata: | ||
+ | name: dns-demo-1 | ||
+ | labels: | ||
+ | name: dns-demo | ||
+ | spec: | ||
+ | hostname: dns-demo-1 | ||
+ | subdomain: dns-demo | ||
+ | containers: | ||
+ | - name: nginx | ||
+ | image: nginx | ||
+ | --- | ||
+ | apiVersion: v1 | ||
+ | kind: Pod | ||
+ | metadata: | ||
+ | name: dns-demo-2 | ||
+ | labels: | ||
+ | name: dns-demo | ||
+ | spec: | ||
+ | hostname: dns-demo-2 | ||
+ | subdomain: dns-demo | ||
+ | containers: | ||
+ | - name: nginx | ||
+ | image: nginx | ||
+ | EOF | ||
+ | |||
+ | $ kubectl apply -f dns-demo.yaml | ||
+ | |||
+ | $ kubectl get pods | ||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | dns-demo-1 1/1 Running 0 19s | ||
+ | dns-demo-2 1/1 Running 0 19s | ||
+ | </pre> | ||
+ | |||
+ | ; Access Pods and services by FQDN | ||
+ | |||
+ | * Test name resolution for pods and services from the Cloud Shell and from Pods running inside your cluster (note: you can find the IP address for <code>dns-demo-2</code> by displaying the details of the Pod): | ||
+ | <pre> | ||
+ | $ kubectl describe pods dns-demo-2 | ||
+ | </pre> | ||
+ | |||
You will see the IP address in the first section of the output, below the status and before the details of the individual containers:
+ | <pre> | ||
+ | kubectl describe pods dns-demo-2 | ||
+ | Name: dns-demo-2 | ||
+ | Namespace: default | ||
+ | Priority: 0 | ||
+ | PriorityClassName: <none> | ||
+ | Node: gke-standard-cluster-1-default-pool-a6c9108e-05m2/10.128.0.5 | ||
+ | Start Time: Mon, 19 Aug 2019 16:58:11 -0700 | ||
+ | Labels: name=dns-demo | ||
+ | Annotations: [...] | ||
+ | Status: Running | ||
+ | IP: 10.8.2.5 | ||
+ | Containers: | ||
+ | nginx: | ||
+ | </pre> | ||
+ | |||
In the example above, the Pod IP address was 10.8.2.5. You can query just the Pod IP address on its own by using the following <code>kubectl get pod</code> syntax:
+ | <pre> | ||
+ | $ echo $(kubectl get pod dns-demo-2 --template={{.status.podIP}}) | ||
+ | 10.8.2.5 | ||
+ | </pre> | ||
+ | |||
+ | The format of the FQDN of a Pod is <code>hostname.subdomain.namespace.svc.cluster.local</code>. The last three pieces (<code>svc.cluster.local</code>) stay constant in any cluster, however, the first three pieces are specific to the Pod that you are trying to access. In this case, the hostname is <code>dns-demo-2</code>, the subdomain is <code>dns-demo</code>, and the namespace is <code>default</code>, because we did not specify a non-default namespace. The FQDN of the <code>dns-demo-2</code> Pod is therefore <code>dns-demo-2.dns-demo.default.svc.cluster.local</code>. | ||
+ | |||
+ | * Ping <code>dns-demo-2</code> from your local machine (or from the Cloud Shell): | ||
+ | <pre> | ||
+ | $ ping dns-demo-2.dns-demo.default.svc.cluster.local | ||
+ | ping: dns-demo-2.dns-demo.default.svc.cluster.local: Name or service not known | ||
+ | </pre> | ||
+ | |||
+ | The ping fails because we are not inside the cluster itself. | ||
+ | |||
+ | To get inside the cluster, open an interactive session to Bash running from <code>dns-demo-1</code>. | ||
+ | <pre> | ||
+ | $ kubectl exec -it dns-demo-1 /bin/bash | ||
+ | </pre> | ||
+ | |||
+ | Now that we are inside a container in the cluster, our commands run from that context. However, we do not have a tool to ping in this container, so the ping command will not work. | ||
+ | |||
+ | * Update apt-get and install a ping tool (from within the container): | ||
+ | <pre> | ||
+ | root@dns-demo-1:/# apt-get update && apt-get install -y iputils-ping | ||
+ | </pre> | ||
+ | |||
+ | * Ping dns-demo-2: | ||
+ | <pre> | ||
+ | root@dns-demo-1:/# ping -c3 dns-demo-2.dns-demo.default.svc.cluster.local | ||
+ | PING dns-demo-2.dns-demo.default.svc.cluster.local (10.8.2.5) 56(84) bytes of data. | ||
+ | 64 bytes from dns-demo-2.dns-demo.default.svc.cluster.local (10.8.2.5): icmp_seq=1 ttl=62 time=1.46 ms | ||
+ | 64 bytes from dns-demo-2.dns-demo.default.svc.cluster.local (10.8.2.5): icmp_seq=2 ttl=62 time=0.397 ms | ||
+ | 64 bytes from dns-demo-2.dns-demo.default.svc.cluster.local (10.8.2.5): icmp_seq=3 ttl=62 time=0.387 ms | ||
+ | |||
+ | --- dns-demo-2.dns-demo.default.svc.cluster.local ping statistics --- | ||
+ | 3 packets transmitted, 3 received, 0% packet loss, time 16ms | ||
+ | rtt min/avg/max/mdev = 0.387/0.748/1.461/0.504 ms | ||
+ | </pre> | ||
+ | |||
+ | This ping should succeed and report that the target has the IP address you found earlier for the <code>dns-demo-2</code> Pod. | ||
+ | |||
+ | * Ping the <code>dns-demo</code> service's FQDN, instead of a specific Pod inside the service: | ||
+ | <pre> | ||
+ | ping dns-demo.default.svc.cluster.local | ||
+ | </pre> | ||
This ping should also succeed but it will return a response from the FQDN of one of the two <code>dns-demo</code> Pods. This Pod might be either <code>dns-demo-1</code> or <code>dns-demo-2</code>.
+ | |||
+ | When you deploy applications, your application code runs inside a container in the cluster, and thus your code can access other services by using the FQDNs of those services. This approach is better than using IP addresses or even Pod names because those are more likely to change. | ||
+ | |||
+ | ===Deploy a sample workload and a ClusterIP service=== | ||
+ | |||
+ | In this section, we will create a deployment for a set of Pods within the cluster and then expose them using a ClusterIP service. | ||
+ | |||
+ | ; Deploy a sample web application to your GKE cluster | ||
+ | |||
+ | * Deploy a sample web application container image that listens on an HTTP server on port 8080: | ||
+ | <pre> | ||
+ | $ cat << EOF > hello-v1.yaml | ||
+ | apiVersion: extensions/v1beta1 | ||
+ | kind: Deployment | ||
+ | metadata: | ||
+ | name: hello-v1 | ||
+ | spec: | ||
+ | replicas: 3 | ||
+ | selector: | ||
+ | matchLabels: | ||
+ | run: hello-v1 | ||
+ | template: | ||
+ | metadata: | ||
+ | labels: | ||
+ | run: hello-v1 | ||
+ | name: hello-v1 | ||
+ | spec: | ||
+ | containers: | ||
+ | - image: gcr.io/google-samples/hello-app:1.0 | ||
+ | name: hello-v1 | ||
+ | ports: | ||
+ | - containerPort: 8080 | ||
+ | protocol: TCP | ||
+ | EOF | ||
+ | |||
+ | $ kubectl create -f hello-v1.yaml | ||
+ | |||
+ | $ kubectl get deployments | ||
+ | NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE | ||
+ | hello-v1 3 3 3 3 10s | ||
+ | </pre> | ||
+ | |||
+ | ; Define service types in the manifest | ||
+ | |||
+ | * Deploy a Service using a ClusterIP: | ||
+ | <pre> | ||
+ | $ cat << EOF > hello-svc.yaml | ||
+ | apiVersion: v1 | ||
+ | kind: Service | ||
+ | metadata: | ||
+ | name: hello-svc | ||
+ | spec: | ||
+ | type: ClusterIP | ||
+ | selector: | ||
+ | name: hello-v1 | ||
+ | ports: | ||
+ | - protocol: TCP | ||
+ | port: 80 | ||
+ | targetPort: 8080 | ||
+ | EOF | ||
+ | |||
+ | $ kubectl apply -f ./hello-svc.yaml | ||
+ | </pre> | ||
+ | |||
+ | This manifest defines a ClusterIP service and applies it to Pods that correspond to the selector. In this case, the manifest is applied to the <code>hello-v1</code> Pods that we deployed. This service will automatically be applied to any other deployments with the <code>name: hello-v1</code> label. | ||
+ | |||
+ | * Verify that the Service was created and that a Cluster-IP was allocated: | ||
+ | <pre> | ||
+ | $ kubectl get service hello-svc | ||
+ | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE | ||
+ | hello-svc ClusterIP 10.12.1.159 <none> 80/TCP 29s | ||
+ | </pre> | ||
+ | |||
+ | No external IP is allocated for this service. Because the Kubernetes Cluster IP addresses are not externally accessible by default, creating this Service does not make your application accessible outside of the cluster. | ||
+ | |||
+ | ; Test your application | ||
+ | |||
+ | * Attempt to open an HTTP session to the new Service using the following command: | ||
+ | <pre> | ||
+ | $ curl hello-svc.default.svc.cluster.local | ||
+ | curl: (6) Could not resolve host: hello-svc.default.svc.cluster.local | ||
+ | </pre> | ||
+ | |||
+ | The connection should fail because that service is not exposed outside of the cluster. | ||
+ | |||
+ | Now, test the Service from ''inside'' the cluster using the interactive shell you have running on the <code>dns-demo-1</code> Pod. Return to your first Cloud Shell window, which is currently redirecting the STDIN and STDOUT of the <code>dns-demo-1</code> Pod. | ||
+ | |||
+ | * Install curl so you can make calls to web services from the command line: | ||
+ | <pre> | ||
+ | $ apt-get install -y curl | ||
+ | </pre> | ||
+ | |||
+ | * Use the following command to test the HTTP connection between the Pods: | ||
+ | <pre> | ||
+ | $ curl hello-svc.default.svc.cluster.local | ||
+ | Hello, world! | ||
+ | Version: 1.0.0 | ||
+ | Hostname: hello-v1-5574c4bff6-72wzc | ||
+ | </pre> | ||
This connection should succeed and provide a response similar to the output above. Your hostname might be different from the example output.
+ | |||
+ | ; Convert the service to use NodePort | ||
+ | |||
+ | In this section, we will convert our existing <code>ClusterIP</code> service to a <code>NodePort</code> service and then retest access to the service from inside and outside the cluster. | ||
+ | |||
+ | * Apply a modified version of our previous <code>hello-svc</code> Service manifest: | ||
+ | <pre> | ||
+ | $ cat << EOF > hello-nodeport-svc.yaml | ||
+ | apiVersion: v1 | ||
+ | kind: Service | ||
+ | metadata: | ||
+ | name: hello-svc | ||
+ | spec: | ||
+ | type: NodePort | ||
+ | selector: | ||
+ | name: hello-v1 | ||
+ | ports: | ||
+ | - protocol: TCP | ||
+ | port: 80 | ||
+ | targetPort: 8080 | ||
+ | nodePort: 30100 | ||
+ | EOF | ||
+ | |||
+ | $ kubectl apply -f ./hello-nodeport-svc.yaml | ||
+ | </pre> | ||
+ | |||
This manifest redefines <code>hello-svc</code> as a <code>NodePort</code> service and assigns port 30100 on each node of the cluster to that service.
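To reach the NodePort from outside the cluster you would also need a firewall rule for that port; a sketch (the rule name is illustrative, and <code>[node-external-IP]</code> is a placeholder for one of your nodes' external IP addresses):
<pre>
$ gcloud compute firewall-rules create allow-nodeport-30100 --allow tcp:30100
$ curl http://[node-external-IP]:30100
</pre>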
+ | |||
+ | * Verify that the service type has changed to <code>NodePort</code>: | ||
+ | <pre> | ||
+ | $ kubectl get service hello-svc | ||
+ | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE | ||
+ | hello-svc NodePort 10.12.1.159 <none> 80:30100/TCP 5m30s | ||
+ | </pre> | ||
+ | |||
+ | Note that there is still no external IP allocated for this service. | ||
+ | |||
+ | ; Test the application | ||
+ | |||
+ | * Attempt to open an HTTP session to the new service: | ||
+ | <pre> | ||
+ | $ curl hello-svc.default.svc.cluster.local | ||
+ | curl: (6) Could not resolve host: hello-svc.default.svc.cluster.local | ||
+ | </pre> | ||
+ | |||
+ | The connection should fail because that service is not exposed outside of the cluster. | ||
+ | |||
Return to your first Cloud Shell window, which is currently redirecting the STDIN and STDOUT of the <code>dns-demo-1</code> Pod.
+ | |||
+ | * Test the HTTP connection between the Pods: | ||
+ | <pre> | ||
+ | $ curl hello-svc.default.svc.cluster.local | ||
+ | |||
+ | Hello, world! | ||
+ | Version: 1.0.0 | ||
+ | Hostname: hello-v1-5574c4bff6-72wzc | ||
+ | </pre> | ||
+ | |||
+ | ; Deploy a new set of Pods and a LoadBalancer service | ||
+ | |||
+ | We will now deploy a new set of Pods running a different version of the application so that we can easily differentiate the two services. We will then expose the new Pods as a <code>LoadBalancer</code> Service and access the service from ''outside'' the cluster. | ||
+ | |||
+ | * Create a new deployment that runs version 2 of the sample "hello" application on port 8080: | ||
+ | <pre> | ||
+ | $ cat << EOF > hello-v2.yaml | ||
+ | apiVersion: extensions/v1beta1 | ||
+ | kind: Deployment | ||
+ | metadata: | ||
+ | name: hello-v2 | ||
+ | spec: | ||
+ | replicas: 3 | ||
+ | selector: | ||
+ | matchLabels: | ||
+ | run: hello-v2 | ||
+ | template: | ||
+ | metadata: | ||
+ | labels: | ||
+ | run: hello-v2 | ||
+ | name: hello-v2 | ||
+ | spec: | ||
+ | containers: | ||
+ | - image: gcr.io/google-samples/hello-app:2.0 | ||
+ | name: hello-v2 | ||
+ | ports: | ||
+ | - containerPort: 8080 | ||
+ | protocol: TCP | ||
+ | EOF | ||
+ | |||
+ | $ kubectl create -f hello-v2.yaml | ||
+ | |||
+ | $ kubectl get deployments | ||
+ | NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE | ||
+ | hello-v1 3 3 3 3 8m22s | ||
+ | hello-v2 3 3 3 3 6s | ||
+ | </pre> | ||
+ | |||
+ | ; Define service types in the manifest | ||
+ | |||
* Deploy a <code>LoadBalancer</code> Service by saving the following manifest as <code>hello-lb-svc.yaml</code>:
+ | <pre> | ||
+ | apiVersion: v1 | ||
+ | kind: Service | ||
+ | metadata: | ||
+ | name: hello-lb-svc | ||
+ | spec: | ||
+ | type: LoadBalancer | ||
+ | selector: | ||
+ | name: hello-v2 | ||
+ | ports: | ||
+ | - protocol: TCP | ||
+ | port: 80 | ||
+ | targetPort: 8080 | ||
+ | </pre> | ||
+ | |||
+ | This manifest defines a <code>LoadBalancer</code> Service, which deploys a GCP Network Load Balancer to provide external access to the service. This service is only applied to the Pods with the <code>name: hello-v2</code> selector. | ||
+ | |||
+ | <pre> | ||
+ | $ kubectl apply -f ./hello-lb-svc.yaml | ||
+ | $ kubectl get services | ||
+ | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE | ||
+ | dns-demo ClusterIP None <none> 1234/TCP 18m | ||
+ | hello-lb-svc LoadBalancer 10.12.3.30 35.193.235.140 80:30980/TCP 95s | ||
+ | hello-svc NodePort 10.12.1.159 <none> 80:30100/TCP 10m | ||
+ | kubernetes ClusterIP 10.12.0.1 <none> 443/TCP 21m | ||
+ | |||
+ | $ export LB_EXTERNAL_IP=35.193.235.140 | ||
+ | </pre> | ||
+ | |||
+ | Notice that the new <code>LoadBalancer</code> Service has an external IP. This is implemented using a GCP load balancer and will take a few minutes to create. This external IP address makes the service accessible from outside the cluster. Take note of this External IP address for use below. | ||
+ | |||
+ | ; Test your application | ||
+ | |||
+ | * Attempt to open an HTTP session to the new service: | ||
+ | <pre> | ||
+ | $ curl hello-lb-svc.default.svc.cluster.local | ||
+ | curl: (6) Could not resolve host: hello-lb-svc.default.svc.cluster.local | ||
+ | </pre> | ||
+ | |||
+ | The connection should fail because that service name is not exposed outside of the cluster. This occurs because the external IP address is not registered with this hostname. | ||
+ | |||
+ | * Try the connection again using the External IP address associated with the service: | ||
+ | <pre> | ||
+ | $ curl ${LB_EXTERNAL_IP} | ||
+ | Hello, world! | ||
+ | Version: 2.0.0 | ||
+ | Hostname: hello-v2-7db7758bf4-998gf | ||
+ | </pre> | ||
+ | |||
+ | This time the connection does ''not'' fail because the LoadBalancer's external IP address can be reached from outside GCP. | ||
+ | |||
+ | Return to your first Cloud Shell window, which is currently redirecting the STDIN and STDOUT of the <code>dns-demo-1</code> Pod. | ||
+ | |||
+ | * Use the following command to test the HTTP connection between the Pods. | ||
+ | <pre> | ||
+ | root@dns-demo-1:/# curl hello-lb-svc.default.svc.cluster.local | ||
+ | Hello, world! | ||
+ | Version: 2.0.0 | ||
+ | Hostname: hello-v2-7db7758bf4-qkb42 | ||
+ | </pre> | ||
+ | |||
+ | The internal DNS name works within the Pod, and you can see that you are accessing the same v2 version of the application as you were from outside of the cluster using the external IP address. | ||
+ | |||
+ | Try the connection again within the Pod using the External IP address associated with the service (replace the IP with the external IP of the service created above): | ||
+ | <pre> | ||
+ | root@dns-demo-1:/# curl 35.193.235.140 | ||
+ | Hello, world! | ||
+ | Version: 2.0.0 | ||
+ | Hostname: hello-v2-7db7758bf4-crxzf | ||
+ | </pre> | ||
+ | |||
+ | The external IP also works from inside Pods running in the cluster and returns a result from the same v2 version of the applications. | ||
+ | |||
+ | ===Deploy an Ingress resource=== | ||
+ | |||
+ | We have two services in our cluster for the "hello" application. One service is hosting version 1.0 via a NodePort service, while the other service is hosting version 2.0 via a LoadBalancer service. We will now deploy an Ingress resource that will direct traffic to both services based on the URL entered by the user. | ||
+ | |||
+ | ; Create an Ingress resource | ||
+ | |||
+ | ''Ingress'' is a Kubernetes resource that encapsulates a collection of rules and configuration for routing external HTTP(S) traffic to internal services. | ||
+ | |||
+ | On GKE, Ingress is implemented using Cloud Load Balancing. When you create an Ingress resource in your cluster, GKE creates an HTTP(S) load balancer and configures it to route traffic to your application. | ||
+ | |||
+ | * Define and deploy an Ingress resource that directs traffic to our web services based on the path entered: | ||
+ | <pre> | ||
$ cat << EOF > hello-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /v1
        backend:
          serviceName: hello-svc
          servicePort: 80
      - path: /v2
        backend:
          serviceName: hello-lb-svc
          servicePort: 80
EOF

$ kubectl apply -f hello-ingress.yaml
+ | </pre> | ||
+ | |||
+ | When we deploy this manifest, Kubernetes creates an ingress resource on your cluster. The ingress controller running in your cluster is responsible for creating an HTTP(S) load balancer to route all external HTTP traffic (on port 80) to the web NodePort service and the LoadBalancer service that we exposed. | ||
+ | |||
+ | ; Test your application | ||
+ | |||
+ | * Get the external IP address of the load balancer serving our application: | ||
+ | <pre> | ||
+ | $ kubectl describe ingress hello-ingress | ||
+ | |||
+ | Name: hello-ingress | ||
+ | Namespace: default | ||
+ | Address: 35.244.213.159 | ||
+ | Default backend: default-http-backend:80 (10.8.1.6:8080) | ||
+ | Rules: | ||
+ | Host Path Backends | ||
+ | ---- ---- -------- | ||
+ | * | ||
+ | /v1 hello-svc:80 (<none>) | ||
+ | /v2 hello-lb-svc:80 (<none>) | ||
+ | Annotations: | ||
+ | [...] | ||
+ | ingress.kubernetes.io/backends: {"k8s-be-30013--59854b80169ba7aa":"HEALTHY","k8s-be-30100--59854b80169ba7aa":"HEALTHY","k8s-be-30980--59854b80169ba7aa":"HEALTHY"} | ||
+ | [...] | ||
+ | Events: | ||
+ | Type Reason Age From Message | ||
+ | ---- ------ ---- ---- ------- | ||
+ | Normal ADD 6m34s loadbalancer-controller default/hello-ingress | ||
+ | Normal CREATE 5m16s loadbalancer-controller ip: 35.244.213.159 | ||
+ | </pre> | ||
+ | |||
+ | You may have to wait for a few minutes for the load balancer to become active, and for the health checks to succeed, before the external address will be displayed. Repeat the command every few minutes to check if the Ingress resource has finished initializing. | ||
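Once the address appears, you can capture it in a variable rather than copying it by hand; a sketch using a jsonpath query:
<pre>
$ export INGRESS_IP=$(kubectl get ingress hello-ingress \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo $INGRESS_IP
</pre>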
+ | |||
+ | Use the External IP address associated with the Ingress resource, and type the following command, substituting [external_IP] with the Ingress resource's external IP address. Be sure to include the <code>/v1</code> in the URL path: | ||
+ | <pre> | ||
+ | $ curl 35.244.213.159/v1 | ||
+ | Hello, world! | ||
+ | Version: 1.0.0 | ||
+ | Hostname: hello-v1-5574c4bff6-mbn5 | ||
+ | </pre> | ||
+ | |||
+ | The v1 URL is configured in <code>hello-ingress.yaml</code> to point to the <code>hello-svc</code> NodePort service that directs traffic to the v1 application Pods. | ||
+ | |||
+ | Note: GKE might take a few minutes to set up forwarding rules until the Global load balancer used for the Ingress resource is ready to serve your application. In the meantime, you might get errors such as HTTP 404 or HTTP 500 until the load balancer configuration is propagated across the globe. | ||
+ | |||
+ | * Now, test the v2 URL path from Cloud Shell. Use the External IP address associated with the Ingress resource, and type the following command, substituting <code>[external_IP]</code> with the Ingress resource's external IP address. Be sure to include the <code>/v2</code> in the URL path. | ||
+ | <pre> | ||
+ | $ curl [external_IP]/v2 | ||
+ | Hello, world! | ||
+ | Version: 2.0.0 | ||
+ | Hostname: hello-v2-7db7758bf4-998gf | ||
+ | </pre> | ||
+ | |||
+ | ; Inspect the changes to your networking resources in the GCP Console | ||
+ | |||
+ | There are two load balancers listed: | ||
+ | |||
+ | # One was created for the external IP of the <code>hello-lb-svc</code> service. This typically has a UID style name and is configured to load balance TCP port 80 traffic to the cluster nodes. | ||
+ | # The second was created for the Ingress object and is a full HTTP(S) load balancer that includes host and path rules that match the Ingress configuration. This will have hello-ingress in its name. | ||
+ | |||
+ | Click the load balancer with hello-ingress in the name. This will display the summary information about the protocols, ports, paths and backend services of the Ingress load balancer. | ||
+ | |||
+ | The v2 URL is configured in <code>hello-ingress.yaml</code> to point to the hello-lb-svc LoadBalancer service that directs traffic to the v2 application Pods. | ||
+ | |||
+ | ==Load balancing objects in GKE== | ||
+ | |||
+ | <div style="float:left; margin:0px 20px 20px 0px;"> | ||
+ | {| align="center" style="border: 1px solid #999; background-color:#FFFFFF" | ||
+ | |-align="center" bgcolor="#1188ee" | ||
+ | ! Kubernetes object | ||
+ | ! How implemented in GKE | ||
+ | ! Typical usage scenario | ||
+ | |- | ||
+ | | Service of type ClusterIP | ||
+ | | GKE networking | ||
+ | | Cluster-internal applications and microservices | ||
+ | |--bgcolor="#eeeeee" | ||
+ | | Service of type LoadBalancer | ||
+ | | GCP [https://cloud.google.com/load-balancing/docs/network/ Network Load Balancer] (regional) | ||
+ | | Application front ends | ||
+ | |- | ||
+ | | Ingress object, backed by a Service of type NodePort | ||
+ | | GCP [https://cloud.google.com/load-balancing/docs/https/ HTTP(S) Load Balancer] (global) | ||
+ | | Application front ends; gives access to advanced features like Cloud Armor, Identity-Aware Proxy (beta) | ||
+ | |} | ||
+ | </div> | ||
+ | <br clear="all"/> | ||
+ | |||
+ | ==Persistent Data and Storage== | ||
+ | |||
+ | * Volume types: | ||
+ | ** emptyDir: Ephemeral scratch space that shares the Pod's lifecycle (see the sketch after this list). | ||
+ | ** ConfigMap: Object can be referenced in a volume. | ||
+ | ** Secret: Stores sensitive info, such as passwords. | ||
+ | ** downwardAPI: Makes data about Pods available to containers. | ||
+ | |||
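+ | A minimal sketch of the first of these: a Pod that mounts an <code>emptyDir</code> volume as scratch space (the Pod and volume names here are illustrative only). The volume is created empty when the Pod is assigned to a node and is deleted together with the Pod: | ||
+ | <pre> | ||
+ | kind: Pod | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ |   name: scratch-demo | ||
+ | spec: | ||
+ |   containers: | ||
+ |   - name: app | ||
+ |     image: nginx | ||
+ |     volumeMounts: | ||
+ |     - mountPath: /scratch   # scratch space, lost when the Pod goes away | ||
+ |       name: scratch-volume | ||
+ |   volumes: | ||
+ |   - name: scratch-volume | ||
+ |     emptyDir: {} | ||
+ | </pre> | ||
+ | |||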
+ | ; Creating a Pod with an NFS Volume | ||
+ | <pre> | ||
+ | apiVersion: v1 | ||
+ | kind: Pod | ||
+ | metadata: | ||
+ | name: web | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: web | ||
+ | image: nginx | ||
+ | volumeMounts: | ||
+ | - mountPath: /mnt/vol | ||
+ | name: nfs | ||
+ | volumes: | ||
+ | - name: nfs | ||
+ |   nfs: | ||
+ |     server: 10.1.2.3 | ||
+ |     path: "/" | ||
+ |     readOnly: false | ||
+ | </pre> | ||
+ | |||
+ | ; Creating and using a compute engine persistent disk | ||
+ | |||
+ | NOTE: This is the old way of mounting persistent volumes. It is no longer a best practice to do the following. Showing here for completeness. | ||
+ | |||
+ | <pre> | ||
+ | $ gcloud compute disks create \ | ||
+ | --size=100GB \ | ||
+ | --zone=us-west2-a demo-disk | ||
+ | </pre> | ||
+ | |||
+ | <pre> | ||
+ | [...] | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: demo-container | ||
+ | image: gcr.io/hello-app:1.0 | ||
+ | volumeMounts: | ||
+ | - mountPath: /demo-pod | ||
+ | name: pd-volume | ||
+ | volumes: | ||
+ | - name: pd-volume | ||
+ | gcePersistentDisk: | ||
+ | pdName: demo-disk # <- must match gcloud | ||
+ | fsType: ext4 | ||
+ | </pre> | ||
+ | |||
+ | A better way is to abstract the persistent volume (PV) from the Pod by separating the PV from a Persistent Volume Claim (PVC). | ||
+ | |||
+ | <pre> | ||
+ | kind: PersistentVolume | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ | name: pd-volume | ||
+ | spec: | ||
+ | storageClassName: "standard" | ||
+ | capacity: | ||
+ | storage: 100G | ||
+ | accessModes: | ||
+ | - ReadWriteOnce | ||
+ | gcePersistentDisk: | ||
+ | pdName: demo-disk | ||
+ | fsType: ext4 | ||
+ | </pre> | ||
+ | |||
+ | Note: The PVC's <code>storageClassName</code> ''must'' match the PV's <code>storageClassName</code>. | ||
+ | |||
+ | <pre> | ||
+ | kind: StorageClass | ||
+ | apiVersion: storage.k8s.io/v1 | ||
+ | metadata: | ||
+ | name: standard | ||
+ | provisioner: kubernetes.io/gce-pd | ||
+ | parameters: | ||
+ | type: pd-standard | ||
+ | replication-type: none | ||
+ | </pre> | ||
+ | |||
+ | In GKE, a PVC with no storage class defined will use the above (default) storage class. | ||
+ | |||
+ | * Example using SSD: | ||
+ | <pre> | ||
+ | kind: PersistentVolume | ||
+ | [...] | ||
+ | spec: | ||
+ | storageClassName: "ssd" | ||
+ | --- | ||
+ | kind: StorageClass | ||
+ | [...] | ||
+ | metadata: | ||
+ | name: ssd | ||
+ | parameters: | ||
+ | type: pd-ssd | ||
+ | </pre> | ||
+ | |||
+ | ; Volume Access Modes | ||
+ | |||
+ | Access Modes determine how the Volume will read or write. The types of access modes that are available depend on the volume type. | ||
+ | |||
+ | * <code>ReadWriteOnce</code>: mounts the volume as read/write to a single node; | ||
+ | * <code>ReadOnlyMany</code>: mounts a volume as read-only to many nodes; and | ||
+ | * <code>ReadWriteMany</code>: mounts volumes as read/write to many nodes. | ||
+ | |||
+ | For most applications, persistent disks are mounted as <code>ReadWriteOnce</code>. | ||
+ | |||
+ | Note: GCP persistent disks do not support <code>ReadWriteMany</code>; NFS, however, does. | ||
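+ | |||
+ | As a sketch only (the server address and export path are placeholders, not part of the lab), a PersistentVolume backed by NFS could therefore expose <code>ReadWriteMany</code>: | ||
+ | <pre> | ||
+ | kind: PersistentVolume | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ |   name: nfs-volume | ||
+ | spec: | ||
+ |   capacity: | ||
+ |     storage: 10G | ||
+ |   accessModes: | ||
+ |   - ReadWriteMany          # supported by NFS, not by GCE persistent disks | ||
+ |   nfs: | ||
+ |     server: 10.1.2.3       # placeholder NFS server | ||
+ |     path: "/exports"       # placeholder export path | ||
+ | </pre> | ||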
+ | |||
+ | * Example Persistent Volume Claim (PVC): | ||
+ | <pre> | ||
+ | kind: PersistentVolumeClaim | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ | name: pd-volume-claim | ||
+ | spec: | ||
+ | storageClassName: "standard" | ||
+ | accessModes: | ||
+ | - ReadWriteOnce | ||
+ | resources: | ||
+ | requests: | ||
+ | storage: 100G | ||
+ | </pre> | ||
+ | |||
+ | * Use the above PVC in a Pod (i.e., mount it): | ||
+ | <pre> | ||
+ | kind: Pod | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ | name: demo-pod | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: demo-container | ||
+ | image: gcr.io/hello-app:1.0 | ||
+ | volumeMounts: | ||
+ | - mountPath: /demo-pod | ||
+ | name: pd-volume | ||
+ | volumes: | ||
+ | - name: pd-volume | ||
+ | persistentVolumeClaim: | ||
+ | claimName: pd-volume-claim | ||
+ | </pre> | ||
+ | |||
+ | The above method abstracts the storage implementation from the Pod: the Pod references the PVC by name and does not need to know which PV (or which underlying disk) satisfies the claim. | ||
+ | |||
+ | * An alternative option is "Dynamic Provisioning": instead of pre-creating a PV, you create only a PVC that names a StorageClass, and GKE provisions the disk and the PV on demand (see the sketch below). | ||
+ | |||
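+ | A minimal sketch of such a claim, assuming the <code>ssd</code> StorageClass shown above exists in the cluster (the claim name is illustrative): | ||
+ | <pre> | ||
+ | kind: PersistentVolumeClaim | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ |   name: dynamic-ssd-claim | ||
+ | spec: | ||
+ |   storageClassName: "ssd"   # the provisioner creates the disk and PV when this PVC is created | ||
+ |   accessModes: | ||
+ |   - ReadWriteOnce | ||
+ |   resources: | ||
+ |     requests: | ||
+ |       storage: 100G | ||
+ | </pre> | ||
+ | |||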
+ | * Retain the volume: | ||
+ | <pre> | ||
+ | [...] | ||
+ | spec: | ||
+ | persistentVolumeReclaimPolicy: Retain | ||
+ | </pre> | ||
+ | |||
+ | ; Regional persistent disks | ||
+ | |||
+ | Increases availability by replicating data between zones: | ||
+ | <pre> | ||
+ | kind: StorageClass | ||
+ | apiVersion: storage.k8s.io/v1 | ||
+ | metadata: | ||
+ | name: ssd | ||
+ | provisioner: kubernetes.io/gce-pd | ||
+ | parameters: | ||
+ | type: pd-ssd | ||
+ | replication-type: regional-pd | ||
+ | zones: us-west1-a, us-west1-b | ||
+ | </pre> | ||
+ | |||
+ | In the above example, if there is an outage in one of the zones, GKE automatically fails over to the other (still healthy) zone. | ||
+ | |||
+ | You can also use persistent volumes for other controllers, such as deployments and stateful sets. Remember, a deployment is simply a Pod template that runs and maintains a set of identical pods, commonly known as replicas. You can use these deployments for stateless applications. Deployment replicas can share an existing persistent volume using <code>ReadOnlyMany</code> or <code>ReadWriteMany</code> access mode. <code>ReadWriteMany</code> access mode can only be used for storage types that support it, such as NFS systems. | ||
+ | |||
+ | The <code>ReadWriteOnce</code> access mode is not recommended for Deployments, because the replicas need to attach to and detach from persistent volumes dynamically. During a rolling update, the new Pod cannot attach the volume until the old Pod releases it, but the old Pod is not removed until the new Pod is ready; neither Pod can make progress, which creates a deadlock. StatefulSets resolve this problem. Whenever your application needs to maintain state in persistent volumes, managing it with a StatefulSet rather than a Deployment is the way to go. | ||
+ | |||
+ | ==Configuring Persistent Storage for Kubernetes Engine== | ||
+ | |||
+ | ===Create PVs and PVCs=== | ||
+ | |||
+ | In this section, we will create a PVC, which triggers Kubernetes to automatically create a PV. | ||
+ | |||
+ | ; Create and apply a manifest with a PVC | ||
+ | |||
+ | Most of the time, you do not need to directly configure PV objects or create Compute Engine persistent disks. Instead, you can create a PVC, and Kubernetes automatically provisions a persistent disk for you. | ||
+ | |||
+ | * Check that there are currently no PVCs defined in our cluster: | ||
+ | <pre> | ||
+ | $ kubectl get persistentvolumeclaim | ||
+ | No resources found. | ||
+ | </pre> | ||
+ | |||
+ | * Create a manifest that creates a 30 gigabyte PVC called <code>hello-web-disk</code>, which can be mounted as read-write volume on a single node at a time: | ||
+ | <pre> | ||
+ | $ cat << EOF > pvc-demo.yaml | ||
+ | apiVersion: v1 | ||
+ | kind: PersistentVolumeClaim | ||
+ | metadata: | ||
+ | name: hello-web-disk | ||
+ | spec: | ||
+ | accessModes: | ||
+ | - ReadWriteOnce | ||
+ | resources: | ||
+ | requests: | ||
+ | storage: 30Gi | ||
+ | EOF | ||
+ | |||
+ | $ kubectl apply -f pvc-demo.yaml | ||
+ | |||
+ | $ kubectl get persistentvolumeclaim | ||
+ | NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE | ||
+ | hello-web-disk Bound pvc-b26e69ea-c38a-11e9-8f0d-42010a8001e8 30Gi RWO standard 4s | ||
+ | </pre> | ||
+ | |||
+ | ===Mount and verify GCP persistent disk PVCs in Pods=== | ||
+ | |||
+ | In this section, we will attach our persistent disk PVC to a Pod. You mount the PVC as a volume as part of the manifest for the Pod. | ||
+ | |||
+ | ; Mount the PVC to a Pod | ||
+ | |||
+ | The following manifest deploys an [[Nginx]] container, attaches the <code>pvc-demo-volume</code> to the Pod, and mounts that volume to the path <code>/var/www/html</code> inside the Nginx container. Files saved to this directory inside the container will be saved to the persistent volume and persist even if the Pod and the container are shut down and recreated: | ||
+ | <pre> | ||
+ | $ cat << EOF > pod-volume-demo.yaml | ||
+ | kind: Pod | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ | name: pvc-demo-pod | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: frontend | ||
+ | image: nginx | ||
+ | volumeMounts: | ||
+ | - mountPath: "/var/www/html" | ||
+ | name: pvc-demo-volume | ||
+ | volumes: | ||
+ | - name: pvc-demo-volume | ||
+ | persistentVolumeClaim: | ||
+ | claimName: hello-web-disk | ||
+ | EOF | ||
+ | |||
+ | $ kubectl apply -f pod-volume-demo.yaml | ||
+ | |||
+ | $ kubectl get pods | ||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | pvc-demo-pod 0/1 ContainerCreating 0 13s | ||
+ | </pre> | ||
+ | |||
+ | If you run this command quickly after creating the Pod, you will see the status listed as "ContainerCreating" while the volume is mounted; the status then changes to "Running". | ||
+ | |||
+ | * Verify the PVC is accessible within the Pod: | ||
+ | <pre> | ||
+ | $ kubectl exec -it pvc-demo-pod -- sh | ||
+ | </pre> | ||
+ | |||
+ | * Create a simple text message as a web page in the Pod: | ||
+ | <pre> | ||
+ | # echo "Test webpage in a persistent volume!" > /var/www/html/index.html | ||
+ | # chmod +x /var/www/html/index.html | ||
+ | </pre> | ||
+ | |||
+ | ; Test the persistence of the PV | ||
+ | |||
+ | Let's delete the Pod from the cluster, confirm that the PV still exists, then redeploy the Pod and verify the contents of the PV remain intact. | ||
+ | |||
+ | * Delete the <code>pvc-demo-pod</code>: | ||
+ | <pre> | ||
+ | $ kubectl delete pod pvc-demo-pod | ||
+ | </pre> | ||
+ | |||
+ | * List the Pods in the cluster: | ||
+ | <pre> | ||
+ | $ kubectl get pods | ||
+ | No resources found. | ||
+ | </pre> | ||
+ | |||
+ | <pre> | ||
+ | $ kubectl get persistentvolumeclaim | ||
+ | NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE | ||
+ | hello-web-disk Bound pvc-b26e69ea-c38a-11e9-8f0d-42010a8001e8 30Gi RWO standard 3m55s | ||
+ | </pre> | ||
+ | Our PVC still exists, and was not deleted when the Pod was deleted. | ||
+ | |||
+ | * Redeploy the <code>pvc-demo-pod</code>: | ||
+ | <pre> | ||
+ | $ kubectl apply -f pod-volume-demo.yaml | ||
+ | $ kubectl get pods | ||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | pvc-demo-pod 1/1 Running 0 3m48s | ||
+ | </pre> | ||
+ | |||
+ | The Pod will deploy and the status will change to "Running" faster this time because the PV already exists and does not need to be created. | ||
+ | |||
+ | * Verify that the PVC is still accessible within the Pod: | ||
+ | <pre> | ||
+ | $ kubectl exec -it pvc-demo-pod -- sh | ||
+ | # cat /var/www/html/index.html | ||
+ | Test webpage in a persistent volume! | ||
+ | </pre> | ||
+ | |||
+ | The contents of the persistent volume were not removed, even though the Pod was deleted from the cluster and recreated. | ||
+ | |||
+ | ===Create StatefulSets with PVCs=== | ||
+ | |||
+ | In this section, we use our PVC in a StatefulSet. A StatefulSet is like a Deployment, except that the Pods are given unique identifiers. | ||
+ | |||
+ | ; Release the PVC | ||
+ | |||
+ | * Before we can use the PVC with the StatefulSet, we must delete the Pod that is currently using it: | ||
+ | <pre> | ||
+ | $ kubectl delete pod pvc-demo-pod | ||
+ | </pre> | ||
+ | |||
+ | ; Create a StatefulSet | ||
+ | |||
+ | * Create a StatefulSet that includes a LoadBalancer service and three replicas of a Pod containing an Nginx container and a volumeClaimTemplate for 30 gigabyte PVCs with the name <code>hello-web-disk</code>. The Nginx containers mount the PVC called <code>hello-web-disk</code> at <code>/var/www/html</code> as in the previous task: | ||
+ | <pre> | ||
+ | $ cat << EOF > statefulset-demo.yaml | ||
+ | kind: Service | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ | name: statefulset-demo-service | ||
+ | spec: | ||
+ | ports: | ||
+ | - protocol: TCP | ||
+ | port: 80 | ||
+ | targetPort: 9376 | ||
+ | type: LoadBalancer | ||
+ | --- | ||
+ | |||
+ | kind: StatefulSet | ||
+ | apiVersion: apps/v1 | ||
+ | metadata: | ||
+ | name: statefulset-demo | ||
+ | spec: | ||
+ | selector: | ||
+ | matchLabels: | ||
+ | app: MyApp | ||
+ | serviceName: statefulset-demo-service | ||
+ | replicas: 3 | ||
+ | updateStrategy: | ||
+ | type: RollingUpdate | ||
+ | template: | ||
+ | metadata: | ||
+ | labels: | ||
+ | app: MyApp | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: stateful-set-container | ||
+ | image: nginx | ||
+ | ports: | ||
+ | - containerPort: 80 | ||
+ | name: http | ||
+ | volumeMounts: | ||
+ | - name: hello-web-disk | ||
+ | mountPath: "/var/www/html" | ||
+ | volumeClaimTemplates: | ||
+ | - metadata: | ||
+ | name: hello-web-disk | ||
+ | spec: | ||
+ | accessModes: [ "ReadWriteOnce" ] | ||
+ | resources: | ||
+ | requests: | ||
+ | storage: 30Gi | ||
+ | EOF | ||
+ | |||
+ | $ kubectl apply -f statefulset-demo.yaml | ||
+ | </pre> | ||
+ | |||
+ | You now have a StatefulSet running behind a service named <code>statefulset-demo-service</code>. | ||
+ | |||
+ | ; Verify the connection of Pods in StatefulSets | ||
+ | |||
+ | * View the details of the StatefulSet: | ||
+ | <pre> | ||
+ | $ kubectl describe statefulset statefulset-demo | ||
+ | </pre> | ||
+ | |||
+ | Note the event status at the end of the output. The Service and StatefulSet were created successfully. | ||
+ | <pre> | ||
+ | $ kubectl get pods | ||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | statefulset-demo-0 1/1 Running 0 110s | ||
+ | statefulset-demo-1 1/1 Running 0 86s | ||
+ | statefulset-demo-2 1/1 Running 0 65s | ||
+ | </pre> | ||
+ | |||
+ | * List the PVCs associated with the above StatefulSet: | ||
+ | <pre> | ||
+ | $ kubectl get pvc | ||
+ | NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE | ||
+ | hello-web-disk Bound pvc-b26e69ea-c38a-11e9-8f0d-42010a8001e8 30Gi RWO standard 10m | ||
+ | hello-web-disk-statefulset-demo-0 Bound pvc-d41e3ebd-c38b-11e9-8f0d-42010a8001e8 30Gi RWO standard 2m13s | ||
+ | hello-web-disk-statefulset-demo-1 Bound pvc-e1fa6ed4-c38b-11e9-8f0d-42010a8001e8 30Gi RWO standard 109s | ||
+ | hello-web-disk-statefulset-demo-2 Bound pvc-ee789c40-c38b-11e9-8f0d-42010a8001e8 30Gi RWO standard 88s | ||
+ | </pre> | ||
+ | |||
+ | The original <code>hello-web-disk</code> PVC is still there, and you can now see the individual PVCs that were created for each Pod in the new StatefulSet. | ||
+ | |||
+ | * View the details of the first PVC in the StatefulSet: | ||
+ | <pre> | ||
+ | $ kubectl describe pvc hello-web-disk-statefulset-demo-0 | ||
+ | </pre> | ||
+ | |||
+ | ; Verify the persistence of Persistent Volume connections to Pods managed by StatefulSets | ||
+ | |||
+ | In this section, we will verify the connection of Pods in StatefulSets to particular PVs as the Pods are stopped and restarted. | ||
+ | |||
+ | * Verify that the PVC is accessible within the Pod: | ||
+ | <pre> | ||
+ | $ kubectl exec -it statefulset-demo-0 -- sh | ||
+ | </pre> | ||
+ | |||
+ | * Verify that there is no <code>index.html</code> text file in the <code>/var/www/html</code> directory: | ||
+ | <pre> | ||
+ | # cat /var/www/html/index.html | ||
+ | cat: /var/www/html/index.html: No such file or directory | ||
+ | </pre> | ||
+ | |||
+ | * Create a simple text message as a web page in the Pod: | ||
+ | <pre> | ||
+ | $ echo "Test webpage in a persistent volume!" > /var/www/html/index.html | ||
+ | $ chmod +x /var/www/html/index.html | ||
+ | </pre> | ||
+ | |||
+ | * Delete the Pod where you updated the file on the PVC: | ||
+ | <pre> | ||
+ | $ kubectl delete pod statefulset-demo-0 | ||
+ | </pre> | ||
+ | |||
+ | * List the Pods in the cluster: | ||
+ | <pre> | ||
+ | $ kubectl get pods | ||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | statefulset-demo-0 0/1 ContainerCreating 0 11s | ||
+ | statefulset-demo-1 1/1 Running 0 6m1s | ||
+ | statefulset-demo-2 1/1 Running 0 5m40s | ||
+ | </pre> | ||
+ | |||
+ | You will see that the StatefulSet is automatically restarting the <code>statefulset-demo-0</code> Pod. Wait until the Pod status shows that it is running again. | ||
+ | |||
+ | * Connect to the shell on the new <code>statefulset-demo-0</code> Pod: | ||
+ | <pre> | ||
+ | $ kubectl exec -it statefulset-demo-0 -- sh | ||
+ | # cat /var/www/html/index.html | ||
+ | Test webpage in a persistent volume! | ||
+ | </pre> | ||
+ | |||
+ | The StatefulSet restarts the Pod and reconnects the existing dedicated PVC to the new Pod ensuring that the data for that Pod is preserved. | ||
+ | |||
+ | ==StatefulSets== | ||
+ | |||
+ | StatefulSets are useful for stateful applications. Like Deployments, StatefulSets run and maintain a set of Pods: the StatefulSet object defines the desired state and its controller achieves it. Unlike Deployments, however, a StatefulSet maintains a persistent identity for each Pod: each Pod has an ordinal index that is reflected in its name, a stable hostname, and stably identified persistent storage linked to that ordinal index. | ||
+ | |||
+ | An ordinal index is simply a unique sequential number assigned to each Pod in the StatefulSet; it defines the Pod's position in the set's sequence of Pods. Deployment, scaling, and updates are ordered using the ordinal index of the Pods within a StatefulSet. For example, if a StatefulSet named demo launches three replicas, it launches Pods named demo-0, demo-1, and demo-2 sequentially, and all of a Pod's predecessors must be running and ready before an action is taken on a newer Pod. If demo-0 is not running and ready, demo-1 will not be launched. If demo-0 fails after demo-1 is running and ready, but before the creation of demo-2, demo-2 will not be launched until demo-0 is relaunched and becomes running and ready. Scaling and rolling updates happen in reverse order, which means demo-2 would be changed first. This behavior depends on the Pod management policy being set to the default, OrderedReady. If you want to launch Pods in parallel, without waiting for earlier Pods to be running and ready, change the Pod management policy to Parallel (see the snippet below). For stable storage, StatefulSets use a unique PersistentVolumeClaim for each Pod, so that each Pod can maintain its own individual state in reliable long-term storage to which no other Pod writes. These PersistentVolumeClaims use the ReadWriteOnce access mode. | ||
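+ | |||
+ | A sketch of the parallel policy (only <code>podManagementPolicy</code> is relevant here; the rest of the spec is elided): | ||
+ | <pre> | ||
+ | kind: StatefulSet | ||
+ | apiVersion: apps/v1 | ||
+ | metadata: | ||
+ |   name: demo-statefulset | ||
+ | spec: | ||
+ |   podManagementPolicy: Parallel   # default is OrderedReady | ||
+ |   replicas: 3 | ||
+ | [...] | ||
+ | </pre> | ||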
+ | |||
+ | * StatefulSet Example (with associated Service): | ||
+ | <pre> | ||
+ | kind: Service | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ | name: demo-service | ||
+ | labels: | ||
+ | app: demo | ||
+ | spec: | ||
+ | ports: | ||
+ | - port: 80 | ||
+ | name: web | ||
+ | clusterIP: None | ||
+ | selector: | ||
+ | app: demo | ||
+ | --- | ||
+ | kind: StatefulSet | ||
+ | apiVersion: apps/v1 | ||
+ | metadata: | ||
+ | name: demo-statefulset | ||
+ | spec: | ||
+ | selector: | ||
+ | matchLabels: | ||
+ | app: demo | ||
+ | serviceName: demo-service | ||
+ | replicas: 3 | ||
+ | updateStrategy: | ||
+ | type: RollingUpdate | ||
+ | template: | ||
+ | metadata: | ||
+ | labels: | ||
+ | app: demo | ||
+ | [...] | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: demo-container | ||
+ | image: k8s.gcr.io/demo:0.1 | ||
+ | ports: | ||
+ | - containerPort: 80 | ||
+ | name: web | ||
+ | volumeMounts: | ||
+ | - name: demo-pvc | ||
+ | mountPath: /usr/share/web | ||
+ | volumeClaimTemplates: | ||
+ | - metadata: | ||
+ | name: demo-pvc | ||
+ | spec: | ||
+ | accessModes: ["ReadWriteOnce"] | ||
+ | resources: | ||
+ | requests: | ||
+ | storage: 1Gi | ||
+ | </pre> | ||
+ | |||
+ | In the above example, we are defining a "headless service" by specifying "None" for the <code>clusterIP</code>. | ||
+ | |||
+ | ==ConfigMaps and Secrets== | ||
+ | |||
+ | ===ConfigMaps=== | ||
+ | <pre> | ||
+ | $ mkdir -p demo/ | ||
+ | $ wget https://example.com/color.properties -O demo/color.properties | ||
+ | $ wget https://example.com/ui.properties -O demo/ui.properties | ||
+ | $ kubectl create configmap demo --from-file=demo/ | ||
+ | </pre> | ||
+ | |||
+ | <pre> | ||
+ | kind: ConfigMap | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ | name: demo | ||
+ | data: | ||
+ | color.properties: |- | ||
+ | color.good=green | ||
+ | color.bad=red | ||
+ | ui.properties: |- | ||
+ | resolution=high | ||
+ | </pre> | ||
+ | |||
+ | * Using a ConfigMap in Pod commands: | ||
+ | <pre> | ||
+ | kind: Pod | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ | name: demo-pod | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: demo-container | ||
+ | image: k8s.gcr.io/busybox | ||
+ | command: ["/bin/sh", "-c", "echo $(VARIABLE_DEMO)"] | ||
+ | env: | ||
+ | - name: VARIABLE_DEMO | ||
+ | valueFrom: | ||
+ | configMapKeyRef: | ||
+ | name: demo | ||
+ | key: my.key | ||
+ | </pre> | ||
+ | |||
+ | * Using a ConfigMap by creating a Volume: | ||
+ | <pre> | ||
+ | kind: Pod | ||
+ | [...] | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: demo-container | ||
+ | image: k8s.gcr.io/busybox | ||
+ | volumeMounts: | ||
+ | - name: config-volume | ||
+ | mountPath: /etc/config | ||
+ | volumes: | ||
+ | - name: config-volume | ||
+ | configMap: | ||
+ | name: demo | ||
+ | </pre> | ||
+ | |||
+ | ===Secrets=== | ||
+ | |||
+ | ; Types of Secrets | ||
+ | |||
+ | * Generic: used when creating Secrets from files, directories, or literal values. | ||
+ | * TLS: uses an existing public-private encryption key pair. To create one of these, you must give k8s the public key certificate encoded in PEM format, and you must also supply the private key of that certificate. | ||
+ | * Docker registry: used to pass credentials for an image registry to kubelet so it can pull a private image from the Docker registry on behalf of your Pod. | ||
+ | |||
+ | In GKE, the Google Container Registry (GCR) integrates with Cloud Identity and Access Management, so you may not need to use the "Docker registry" Secret type. | ||
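+ | |||
+ | For reference (these commands are not used later in this section, and the file paths and registry values are placeholders), the other two Secret types are created like this: | ||
+ | <pre> | ||
+ | $ kubectl create secret tls demo-tls \ | ||
+ |     --cert=./tls.crt \ | ||
+ |     --key=./tls.key | ||
+ | |||
+ | $ kubectl create secret docker-registry demo-registry \ | ||
+ |     --docker-server=https://registry.example.com \ | ||
+ |     --docker-username=demo-user \ | ||
+ |     --docker-password=demo-password \ | ||
+ |     --docker-email=demo@example.com | ||
+ | </pre> | ||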
+ | |||
+ | ; Creating a generic Secret | ||
+ | |||
+ | * Create a Secret using literal values: | ||
+ | <pre> | ||
+ | $ kubectl create secret generic demo \ | ||
+ | --from-literal user=admin \ | ||
+ | --from-literal password=1234 | ||
+ | </pre> | ||
+ | |||
+ | * Create a Secret using files: | ||
+ | <pre> | ||
+ | $ kubectl create secret generic demo \ | ||
+ | --from-file=./username.txt \ | ||
+ | --from-file=./password.txt | ||
+ | </pre> | ||
+ | |||
+ | * Create a Secret using naming keys: | ||
+ | <pre> | ||
+ | $ kubectl create secret generic demo \ | ||
+ | --from-file=User=./username.txt \ | ||
+ | --from-file=Password=./password.txt | ||
+ | </pre> | ||
+ | |||
+ | ; Using a Secret | ||
+ | |||
+ | * Secret environment variable: | ||
+ | <pre> | ||
+ | [...] | ||
+ | kind: Pod | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: mycontainer | ||
+ | image: redis | ||
+ | env: | ||
+ | - name: SECRET_USERNAME | ||
+ | valueFrom: | ||
+ | secretKeyRef: | ||
+ | name: demo-secret | ||
+ | key: username | ||
+ | - name: SECRET_PASSWORD | ||
+ | valueFrom: | ||
+ | secretKeyRef: | ||
+ | name: demo-secret | ||
+ | key: password | ||
+ | </pre> | ||
+ | |||
+ | * Secret volume: | ||
+ | <pre> | ||
+ | [...] | ||
+ | kind: Pod | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: mycontainer | ||
+ | image: redis | ||
+ | volumeMounts: | ||
+ | - name: storagesecrets | ||
+ | mountPath: "/etc/secrets" | ||
+ | readOnly: true | ||
+ | volumes: | ||
+ | - name: storagesecrets | ||
+ | secret: | ||
+ | secretName: demo-secret | ||
+ | </pre> | ||
+ | |||
+ | ===Working with Kubernetes Engine Secrets and ConfigMaps=== | ||
+ | |||
+ | ; Set up Cloud Pub/Sub and deploy an application to read from the topic | ||
+ | |||
+ | * Set the environment variables for the pub/sub components. | ||
+ | <pre> | ||
+ | $ export my_pubsub_topic=echo | ||
+ | $ export my_pubsub_subscription=echo-read | ||
+ | </pre> | ||
+ | |||
+ | * Create a Cloud Pub/Sub topic named "echo" and a subscription named "echo-read" that is associated with that topic: | ||
+ | <pre> | ||
+ | $ gcloud pubsub topics create $my_pubsub_topic | ||
+ | $ gcloud pubsub subscriptions create $my_pubsub_subscription \ | ||
+ | --topic=$my_pubsub_topic | ||
+ | </pre> | ||
+ | |||
+ | ; Deploy an application to read from Cloud Pub/Sub topics | ||
+ | |||
+ | First, create a deployment with a container that can read from Cloud Pub/Sub topics. Since specific permissions are required to subscribe to and read from Cloud Pub/Sub topics, this container needs to be provided with credentials in order to successfully connect to Cloud Pub/Sub. | ||
+ | |||
+ | * Create a Deployment for use with our Cloud Pub/Sub topic: | ||
+ | <pre> | ||
+ | $ cat << EOF > pubsub.yaml | ||
+ | apiVersion: apps/v1 | ||
+ | kind: Deployment | ||
+ | metadata: | ||
+ | name: pubsub | ||
+ | spec: | ||
+ | selector: | ||
+ | matchLabels: | ||
+ | app: pubsub | ||
+ | template: | ||
+ | metadata: | ||
+ | labels: | ||
+ | app: pubsub | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: subscriber | ||
+ | image: gcr.io/google-samples/pubsub-sample:v1 | ||
+ | EOF | ||
+ | |||
+ | $ kubectl apply -f pubsub.yaml | ||
+ | $ kubectl get pods -l app=pubsub | ||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | pubsub-65dbdb56f5-5xjp4 0/1 Error 2 36s | ||
+ | </pre> | ||
+ | |||
+ | Notice the status of the Pod. It has an error and has restarted several times. | ||
+ | |||
+ | * Inspect the logs for the Pod: | ||
+ | <pre> | ||
+ | $ kubectl logs -l app=pubsub | ||
+ | StatusCode.PERMISSION_DENIED, User not authorized to perform this action. | ||
+ | </pre> | ||
+ | |||
+ | The error message displayed at the end of the log indicates that the application does not have permissions to query the Cloud Pub/Sub service. | ||
+ | |||
+ | ; Create service account credentials | ||
+ | |||
+ | To fix the above permission issue, create a new service account and grant it access to the pub/sub subscription that the test application is attempting to use. Instead of changing the service account of the GKE cluster nodes, generate a JSON key for the service account, and then securely pass the JSON key to the Pod via Kubernetes Secrets. | ||
+ | |||
+ | * In the GCP Console, on the Navigation menu, click '''IAM & admin > Service Accounts'''. | ||
+ | * Click '''+ Create Service Account'''. | ||
+ | * In the Service Account Name text box, enter <code>pubsub-app</code> and then click '''Create'''. | ||
+ | * In the Role drop-down list, select '''Pub/Sub > Pub/Sub Subscriber'''. | ||
+ | * Confirm the role is listed, and then click '''Continue'''. | ||
+ | * Click '''+ Create Key'''. | ||
+ | * Select '''JSON''' as the key type, and then click '''Create'''. | ||
+ | |||
+ | A JSON key file containing the credentials of the service account will download to your computer. You can see the file in the download bar at the bottom of your screen. We will use this key file to configure the sample application to authenticate to Cloud Pub/Sub API. | ||
+ | |||
+ | * Click '''Close''' and then click '''Done'''. | ||
+ | |||
+ | On your hard drive, locate the JSON key that you just downloaded and rename the file to <code>credentials.json</code>. | ||
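+ | |||
+ | If you prefer the command line, roughly the same service account and key can be created from Cloud Shell instead (a sketch, not part of the lab steps; it assumes <code>$GOOGLE_CLOUD_PROJECT</code> holds your project ID): | ||
+ | <pre> | ||
+ | $ gcloud iam service-accounts create pubsub-app | ||
+ | $ gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \ | ||
+ |     --member="serviceAccount:pubsub-app@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \ | ||
+ |     --role="roles/pubsub.subscriber" | ||
+ | $ gcloud iam service-accounts keys create $HOME/credentials.json \ | ||
+ |     --iam-account="pubsub-app@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" | ||
+ | </pre> | ||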
+ | |||
+ | * Create a Kubernetes Secret named <code>pubsub-key</code> using the downloaded credentials (JSON file): | ||
+ | <pre> | ||
+ | $ kubectl create secret generic pubsub-key \ | ||
+ | --from-file=key.json=$HOME/credentials.json | ||
+ | </pre> | ||
+ | |||
+ | This command creates a Secret named <code>pubsub-key</code> that has a <code>key.json</code> value containing the contents of the private key that you downloaded from the GCP Console. | ||
+ | |||
+ | ; Configure the application with the secret | ||
+ | |||
+ | Update the deployment to include the following changes: | ||
+ | * Add a volume to the Pod specification. This volume contains the secret. | ||
+ | * The secrets volume is mounted in the application container. | ||
+ | * The <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable is set to point to the key file in the secret volume mount. | ||
+ | * The <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable is automatically recognized by Cloud Client Libraries, in this case, the Cloud Pub/Sub client for Python. | ||
+ | |||
+ | * Update the previous Deployment: | ||
+ | <pre> | ||
+ | $ cat << EOF > pubsub-secret.yaml | ||
+ | apiVersion: apps/v1 | ||
+ | kind: Deployment | ||
+ | metadata: | ||
+ | name: pubsub | ||
+ | spec: | ||
+ | selector: | ||
+ | matchLabels: | ||
+ | app: pubsub | ||
+ | template: | ||
+ | metadata: | ||
+ | labels: | ||
+ | app: pubsub | ||
+ | spec: | ||
+ | volumes: | ||
+ | - name: google-cloud-key | ||
+ | secret: | ||
+ | secretName: pubsub-key | ||
+ | containers: | ||
+ | - name: subscriber | ||
+ | image: gcr.io/google-samples/pubsub-sample:v1 | ||
+ | volumeMounts: | ||
+ | - name: google-cloud-key | ||
+ | mountPath: /var/secrets/google | ||
+ | env: | ||
+ | - name: GOOGLE_APPLICATION_CREDENTIALS | ||
+ | value: /var/secrets/google/key.json | ||
+ | EOF | ||
+ | |||
+ | $ kubectl apply -f pubsub-secret.yaml | ||
+ | |||
+ | $ kubectl get pods -l app=pubsub | ||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | pubsub-687959fd65-kwhb5 1/1 Running 0 40s | ||
+ | </pre> | ||
+ | |||
+ | ; Test receiving Cloud Pub/Sub messages | ||
+ | |||
+ | Now that we configured the application, we can publish a message to the Cloud Pub/Sub topic we created earlier in the lab: | ||
+ | <pre> | ||
+ | $ gcloud pubsub topics publish $my_pubsub_topic --message="Hello, world!" | ||
+ | |||
+ | messageIds: | ||
+ | - '697037622972840' | ||
+ | </pre> | ||
+ | |||
+ | Within a few seconds, the message should be picked up by the application and printed to the output stream. | ||
+ | |||
+ | * Inspect the logs from the deployed Pod: | ||
+ | <pre> | ||
+ | $ kubectl logs -l app=pubsub | ||
+ | Pulling messages from Pub/Sub subscription... | ||
+ | [2019-08-20 21:46:18.395126] Received message: ID=697037622972840 Data=b'Hello, world!' | ||
+ | [2019-08-20 21:46:18.395205] Processing: 697037622972840 | ||
+ | [2019-08-20 21:46:21.398350] Processed: 697037622972840 | ||
+ | </pre> | ||
+ | |||
+ | ===Working with ConfigMaps=== | ||
+ | |||
+ | ConfigMaps bind configuration files, command-line arguments, environment variables, port numbers, and other configuration artifacts to your Pods' containers and system components at runtime. ConfigMaps enable you to separate your configurations from your Pods and components. However, ConfigMaps are not encrypted, making them inappropriate for credentials. This is the difference between Secrets and ConfigMaps: secrets are better suited for confidential or sensitive information, such as credentials. ConfigMaps are better suited for general configuration information, such as port numbers. | ||
+ | |||
+ | ; Use the kubectl command to create ConfigMaps | ||
+ | |||
+ | You use kubectl to create ConfigMaps by following the pattern <code>kubectl create configmap [NAME] [DATA]</code> and adding a flag for a file (<code>--from-file</code>) or a literal value (<code>--from-literal</code>). | ||
+ | |||
+ | * Start with a simple literal in the following kubectl command: | ||
+ | <pre> | ||
+ | $ kubectl create configmap sample --from-literal=message=hello | ||
+ | </pre> | ||
+ | |||
+ | * See how Kubernetes ingested the ConfigMap: | ||
+ | <pre> | ||
+ | $ kubectl describe configmaps sample | ||
+ | |||
+ | Name: sample | ||
+ | Namespace: default | ||
+ | Labels: <none> | ||
+ | Annotations: <none> | ||
+ | |||
+ | Data | ||
+ | ==== | ||
+ | message: | ||
+ | ---- | ||
+ | hello | ||
+ | Events: <none> | ||
+ | </pre> | ||
+ | |||
+ | * Create a ConfigMap from a file: | ||
+ | <pre> | ||
+ | $ cat << EOF >sample2.properties | ||
+ | message2=world | ||
+ | foo=bar | ||
+ | meaningOfLife=42 | ||
+ | EOF | ||
+ | |||
+ | $ kubectl create configmap sample2 --from-file=sample2.properties | ||
+ | |||
+ | $ kubectl describe configmaps sample2 | ||
+ | |||
+ | Name: sample2 | ||
+ | Namespace: default | ||
+ | Labels: <none> | ||
+ | Annotations: <none> | ||
+ | |||
+ | Data | ||
+ | ==== | ||
+ | sample2.properties: | ||
+ | ---- | ||
+ | message2=world | ||
+ | foo=bar | ||
+ | meaningOfLife=42 | ||
+ | |||
+ | Events: <none> | ||
+ | </pre> | ||
+ | |||
+ | ; Use manifest files to create ConfigMaps | ||
+ | |||
+ | You can also use a YAML configuration file to create a ConfigMap. | ||
+ | |||
+ | * Create a ConfigMap definition called <code>sample3</code> (we will use this ConfigMap later to demonstrate two different ways to expose the data inside a container): | ||
+ | <pre> | ||
+ | $ cat << EOF > config-map-3.yaml | ||
+ | apiVersion: v1 | ||
+ | data: | ||
+ | airspeed: africanOrEuropean | ||
+ | meme: testAllTheThings | ||
+ | kind: ConfigMap | ||
+ | metadata: | ||
+ | name: sample3 | ||
+ | namespace: default | ||
+ | selfLink: /api/v1/namespaces/default/configmaps/sample3 | ||
+ | EOF | ||
+ | |||
+ | $ kubectl apply -f config-map-3.yaml | ||
+ | $ kubectl describe configmaps sample3 | ||
+ | Name: sample3 | ||
+ | Namespace: default | ||
+ | Labels: <none> | ||
+ | Annotations: kubectl.kubernetes.io/last-applied-configuration: | ||
+ | {"apiVersion":"v1","data":{"airspeed":"africanOrEuropean","meme":"testAllTheThings"},"kind":"ConfigMap","metadata":{"annotations":{},"name... | ||
+ | |||
+ | Data | ||
+ | ==== | ||
+ | airspeed: | ||
+ | ---- | ||
+ | africanOrEuropean | ||
+ | meme: | ||
+ | ---- | ||
+ | testAllTheThings | ||
+ | Events: <none> | ||
+ | </pre> | ||
+ | |||
+ | Now we have some non-secret, unencrypted, configuration information properly separated from our application and available to our cluster. We have done this using ConfigMaps in three different ways to demonstrate the various options, however, in practice, you typically pick one method, most likely the YAML configuration file approach. Configuration files provide a record of the values that you have stored so that you can easily repeat the process in the future. | ||
+ | |||
+ | Next, let's access this information from within our application. | ||
+ | |||
+ | ; Use environment variables to consume ConfigMaps in containers | ||
+ | |||
+ | In order to access ConfigMaps from inside Containers using environment variables, the Pod definition must be updated to include one or more configMapKeyRefs. | ||
+ | |||
+ | Below is an updated version of the Cloud Pub/Sub demo Deployment that includes an additional <code>env:</code> setting at the end of the file to import environmental variables from the ConfigMap into the container: | ||
+ | <pre> | ||
+ | - name: INSIGHTS | ||
+ | valueFrom: | ||
+ | configMapKeyRef: | ||
+ | name: sample3 | ||
+ | key: meme | ||
+ | </pre> | ||
+ | |||
+ | * Reapply the updated configuration file: | ||
+ | <pre> | ||
+ | kubectl apply -f pubsub-configmap.yaml | ||
+ | </pre> | ||
+ | |||
+ | Now our application has access to an environment variable called <code>INSIGHTS</code>, which has a value of <code>testAllTheThings</code>. | ||
+ | |||
+ | * Verify that the environment variable has the correct value: | ||
+ | <pre> | ||
+ | $ kubectl get pods | ||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | pubsub-6549d6dffc-w7lbd 1/1 Running 0 35s | ||
+ | |||
+ | $ kubectl exec -it pubsub-6549d6dffc-w7lbd -- sh | ||
+ | # printenv | grep ^INSIGHTS | ||
+ | INSIGHTS=testAllTheThings | ||
+ | </pre> | ||
+ | |||
+ | ; Use mounted volumes to consume ConfigMaps in containers | ||
+ | |||
+ | You can populate a volume with the ConfigMap data instead of (or in addition to) storing it in an environment variable. | ||
+ | |||
+ | In this Deployment, the ConfigMap named <code>sample3</code> that we created earlier is also added as a volume called <code>config-3</code> in the Pod spec. The <code>config-3</code> volume is then mounted inside the container on the path <code>/etc/config</code>. The original method of using environment variables to import the ConfigMap is also retained. | ||
+ | |||
+ | * Update the Deployment: | ||
+ | <pre> | ||
+ | $ cat << EOF > pubsub-configmap2.yaml | ||
+ | apiVersion: apps/v1 | ||
+ | kind: Deployment | ||
+ | metadata: | ||
+ | name: pubsub | ||
+ | spec: | ||
+ | selector: | ||
+ | matchLabels: | ||
+ | app: pubsub | ||
+ | template: | ||
+ | metadata: | ||
+ | labels: | ||
+ | app: pubsub | ||
+ | spec: | ||
+ | volumes: | ||
+ | - name: google-cloud-key | ||
+ | secret: | ||
+ | secretName: pubsub-key | ||
+ | - name: config-3 | ||
+ | configMap: | ||
+ | name: sample3 | ||
+ | containers: | ||
+ | - name: subscriber | ||
+ | image: gcr.io/google-samples/pubsub-sample:v1 | ||
+ | volumeMounts: | ||
+ | - name: google-cloud-key | ||
+ | mountPath: /var/secrets/google | ||
+ | - name: config-3 | ||
+ | mountPath: /etc/config | ||
+ | env: | ||
+ | - name: GOOGLE_APPLICATION_CREDENTIALS | ||
+ | value: /var/secrets/google/key.json | ||
+ | - name: INSIGHTS | ||
+ | valueFrom: | ||
+ | configMapKeyRef: | ||
+ | name: sample3 | ||
+ | key: meme | ||
+ | EOF | ||
+ | |||
+ | $ kubectl apply -f pubsub-configmap2.yaml | ||
+ | </pre> | ||
+ | |||
+ | * Reconnect to the container's shell session to see if the value in the ConfigMap is accessible (note: the Pod names will have changed): | ||
+ | <pre> | ||
+ | $ kubectl get pods | ||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | pubsub-5fcc8df7b6-p5d9x 1/1 Running 0 5s | ||
+ | pubsub-6549d6dffc-w7lbd 1/1 Terminating 0 3m | ||
+ | |||
+ | $ kubectl exec -it pubsub-5fcc8df7b6-p5d9x -- sh | ||
+ | # cd /etc/config | ||
+ | # ls | ||
+ | airspeed meme | ||
+ | # cat airspeed | ||
+ | africanOrEuropean | ||
+ | </pre> | ||
+ | |||
+ | ==Access Control and Security in Kubernetes and Google Kubernetes Engine (GKE)== | ||
+ | |||
+ | There are two main ways to authorize in GKE (and you really need both): | ||
+ | # Cloud IAM: Project and cluster level access | ||
+ | # RBAC: Cluster and namespace level access | ||
+ | |||
+ | The API server authenticates in different ways: | ||
+ | * OpenID connect tokens [recommended] | ||
+ | * x509 client certificates [suggest disabling] | ||
+ | * Static passwords [suggest disabling] | ||
+ | |||
+ | In GKE, x509 and static passwords are disabled by default (in k8s v1.12+). OpenID Connect is enabled by default. | ||
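+ | |||
+ | On clusters where they are not disabled by default, both can be turned off explicitly at creation time (a sketch; the cluster name is a placeholder): | ||
+ | <pre> | ||
+ | $ gcloud container clusters create demo-cluster \ | ||
+ |     --no-enable-basic-auth \ | ||
+ |     --no-issue-client-certificate | ||
+ | </pre> | ||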
+ | |||
+ | ; Cloud IAM | ||
+ | |||
+ | Three elements are defined in Cloud IAM access control: | ||
+ | # Who? - Identity of the person making the request | ||
+ | # What? - Set of permissions that are granted | ||
+ | # Which? - Which resources this policy applies to | ||
+ | |||
+ | ; GKE predefined Cloud IAM roles | ||
+ | |||
+ | These predefined roles provide granular access to Kubernetes resources (an example IAM binding follows the list). | ||
+ | |||
+ | * GKE Viewer: Read-only permissions to cluster and k8s resources | ||
+ | * GKE Developer: Full access to Kubernetes resources within the cluster | ||
+ | * GKE Admin: Full access to clusters and their k8s resources | ||
+ | * GKE Cluster Admin: Create/delete/update/view clusters. No access to k8s resources. | ||
+ | * GKE Host Service Agent User: Only for service accounts; manages network resources in a shared VPC. | ||
+ | |||
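+ | These roles are granted with ordinary Cloud IAM bindings. For example (a sketch; the project ID and user are placeholders), granting a user the GKE Viewer role: | ||
+ | <pre> | ||
+ | $ gcloud projects add-iam-policy-binding my-project-id \ | ||
+ |     --member="user:bob@example.com" \ | ||
+ |     --role="roles/container.viewer" | ||
+ | </pre> | ||
+ | |||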
+ | ===RBAC=== | ||
+ | |||
+ | Three k8s RBAC concepts: | ||
+ | # Subjects (Who?) | ||
+ | # Resources (Which?) | ||
+ | # Verbs (What?) | ||
+ | |||
+ | Roles connect ''Resources'' to ''Verbs''. Role Bindings connect roles to subjects. | ||
+ | |||
+ | * A role contains rules that represent a set of permissions. For example: | ||
+ | <pre> | ||
+ | kind: Role | ||
+ | apiVersion: rbac.authorization.k8s.io/v1 | ||
+ | metadata: | ||
+ | namespace: default | ||
+ | name: demo-role | ||
+ | rules: | ||
+ | - apiGroups: [""] | ||
+ | resource: ["pods"] | ||
+ | verbs: ["get", "list", "watch"] | ||
+ | </pre> | ||
+ | |||
+ | Note: Only one namespace per role is allowed. | ||
+ | |||
+ | * A Cluster Role grants permissions at the cluster level. For example: | ||
+ | <pre> | ||
+ | kind: ClusterRole | ||
+ | apiVersion: rbac.authorization.k8s.io/v1 | ||
+ | metadata: | ||
+ | name: demo-clusterrole | ||
+ | rules: | ||
+ | - apiGroups: [""] | ||
+ | resource: ["storageclasses"] | ||
+ | verbs: ["get", "list", "watch"] | ||
+ | </pre> | ||
+ | |||
+ | Note: No need to define namespace in Cluster Role, since it applies at the cluster-level. | ||
+ | |||
+ | * More examples: | ||
+ | <pre> | ||
+ | rules: | ||
+ | - apiGroups: [""] | ||
+ | resources: ["pods"] | ||
+ | verbs: ["get", "list", "watch"] | ||
+ | --- | ||
+ | rules: | ||
+ | - apiGroups: [""] | ||
+ | resources: ["pods", "pods/log"] | ||
+ | verbs: ["get", "list", "watch"] | ||
+ | --- | ||
+ | rules: | ||
+ | - apiGroups: [""] | ||
+ | resources: ["pods"] | ||
+ | resourceNames: ["demo-pod"] | ||
+ | verbs: ["patch", "update"] | ||
+ | --- | ||
+ | rules: | ||
+ | - nonResourceURLs: ["/metrics", "/metrics/*"] | ||
+ | verbs: ["get", "post"] | ||
+ | </pre> | ||
+ | |||
+ | Note: The last example ("<code>nonResourceURLs</code>") is a rule unique to ClusterRoles. | ||
+ | |||
+ | ; Attach Roles to Role Bindings | ||
+ | |||
+ | <pre> | ||
+ | kind: RoleBinding | ||
+ | apiVersion: rbac.authorization.k8s.io/v1 | ||
+ | metadata: | ||
+ | namespace: default | ||
+ | name: demo-rolebinding | ||
+ | subjects: | ||
+ | - kind: User | ||
+ | name: "bob@example.com" | ||
+ | apiGroup: rbac.authorization.k8s.io | ||
+ | roleRef: | ||
+ | kind: Role | ||
+ | name: demo-role | ||
+ | apiGroup: rbac.authorization.k8s.io | ||
+ | </pre> | ||
+ | |||
+ | * Example Cluster Role Binding: | ||
+ | <pre> | ||
+ | kind: ClusterRoleBinding | ||
+ | apiVersion: rbac.authorization.k8s.io/v1 | ||
+ | metadata: | ||
+ | name: demo-clusterrolebinding | ||
+ | subjects: | ||
+ | - kind: User | ||
+ | name: "admin@example.com" | ||
+ | apiGroup: rbac.authorization.k8s.io | ||
+ | roleRef: | ||
+ | kind: ClusterRole | ||
+ | name: demo-clusterrole | ||
+ | apiGroup: rbac.authorization.k8s.io | ||
+ | </pre> | ||
+ | |||
+ | * Example of how to refer to different subject types: | ||
+ | <pre> | ||
+ | subjects: | ||
+ | - kind: User | ||
+ | name: "bob@example.com" | ||
+ | apiGroup: rbac.authorization.k8s.io | ||
+ | --- | ||
+ | subjects: | ||
+ | - kind: Group | ||
+ | name: "Developers" | ||
+ | apiGroup: rbac.authorization.k8s.io | ||
+ | --- | ||
+ | subjects: | ||
+ | - kind: ServiceAccount | ||
+ | name: default | ||
+ | namespace: kube-system | ||
+ | --- | ||
+ | subjects: | ||
+ | - kind: Group | ||
+ | name: system:serviceaccounts | ||
+ | apiGroup: rbac.authorization.k8s.io | ||
+ | --- | ||
+ | subjects: | ||
+ | - kind: Group | ||
+ | name: system:serviceaccounts:qa # <- all service accounts in the "qa" namespace | ||
+ | apiGroup: rbac.authorization.k8s.io | ||
+ | --- | ||
+ | subjects: | ||
+ | - kind: Group | ||
+ | name: system:authenticated | ||
+ | apiGroup: rbac.authorization.k8s.io | ||
+ | </pre> | ||
+ | |||
+ | Note: Not all resources are namespaced: | ||
+ | <pre> | ||
+ | $ kubectl api-resources --namespaced=true --output=name | head | ||
+ | bindings | ||
+ | configmaps | ||
+ | endpoints | ||
+ | events | ||
+ | |||
+ | $ kubectl api-resources --namespaced=false --output=name | head -4 | ||
+ | componentstatuses | ||
+ | namespaces | ||
+ | nodes | ||
+ | persistentvolumes | ||
+ | </pre> | ||
+ | |||
+ | ===Kubernetes Control Plane Security=== | ||
+ | |||
+ | * Initiate credential rotation: | ||
+ | <pre> | ||
+ | $ gcloud container clusters update <name> \ | ||
+ | --start-credential-rotation | ||
+ | </pre> | ||
+ | |||
+ | * Complete credential rotation: | ||
+ | <pre> | ||
+ | $ gcloud container clusters update <name> \ | ||
+ | --complete-credential-rotation | ||
+ | </pre> | ||
+ | |||
+ | * Initiate IP rotation: | ||
+ | <pre> | ||
+ | $ gcloud container clusters update <name> \ | ||
+ | --start-ip-rotation | ||
+ | </pre> | ||
+ | |||
+ | * Complete IP rotation: | ||
+ | <pre> | ||
+ | $ gcloud container clusters update <name> \ | ||
+ | --complete-ip-rotation | ||
+ | </pre> | ||
+ | |||
+ | ; Protect your metadata | ||
+ | |||
+ | * Restrict <code>compute.instances.get</code> permission for nodes. | ||
+ | * Disable legacy Compute Engine API endpoints (see the sketch after this list). (Note: the Compute Engine metadata endpoints at versions 0.1 and v1beta1 allow querying of instance metadata.) | ||
+ | *: The v1 API restricts the retrieval of instance metadata. Starting from GKE version 1.12, the legacy Compute Engine metadata endpoints are disabled by default; with earlier versions, they can only be disabled by creating a new cluster or adding a new node pool to an existing cluster. | ||
+ | * Enable metadata concealment (temporary). | ||
+ | *: This is basically a firewall that prevents Pods from accessing a node's metadata. It does this by restricting access to kube-env (which contains kubelet credentials) and the VM's instance identity token. Note that this is a temporary solution that will be deprecated as better security improvements are developed. | ||
+ | |||
+ | SEE: "Protecting Cluster Metadata" | ||
+ | |||
+ | ===Pod Security=== | ||
+ | |||
+ | Use security context to limit privileges to containers. | ||
+ | <pre> | ||
+ | kind: Pod | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ | name: security-context-demo | ||
+ | spec: | ||
+ | securityContext: | ||
+ | runAsUser: 1000 | ||
+ | fsGroup: 2000 | ||
+ | ... | ||
+ | </pre> | ||
+ | |||
+ | Use a Pod security policy to apply security contexts: | ||
+ | * A policy is a set of restrictions, requirements, and defaults. | ||
+ | * All of the policy's conditions must be fulfilled for a Pod to be admitted to the cluster, that is, created or updated. (Note: rules are ''only'' applied when the Pod is being created or updated.) | ||
+ | * PodSecurityPolicy controller is an admission controller. | ||
+ | * The controller validates and modifies requests against one or more PodSecurityPolicies. | ||
+ | |||
+ | There is also an extra step called "admission control". A validating (non-mutating) admission controller just validates requests. A mutating admission controller can modify requests if necessary, and can also validate them. A request can be passed through multiple controllers, and if the request fails at any point, the entire request is rejected immediately and the end user receives an error. The PodSecurityPolicy admission controller acts on the creation and modification of Pods and determines whether a Pod should be admitted based on the requested security context and the available Pod security policies. Note that these policies are enforced during the creation or update of a Pod, but a security context is enforced by the container runtime. | ||
+ | |||
+ | * Pod security policy example: | ||
+ | <pre> | ||
+ | kind: PodSecurityPolicy | ||
+ | apiVersion: policy/v1beta1 | ||
+ | metadata: | ||
+ | name: demo-psp | ||
+ | spec: | ||
+ | privileged: false | ||
+ | allowPrivilegeEscalation: false | ||
+ | volumes: | ||
+ | - 'configMap' | ||
+ | - 'emptyDir' | ||
+ | - 'projected' | ||
+ | - 'secret' | ||
+ | - 'persistentVolumeClaim' | ||
+ | hostNetwork: false | ||
+ | hostIPC: false | ||
+ | runAsUser: | ||
+ | rule: 'MustRunAsNonRoot' | ||
+ | seLinux: | ||
+ | rule: 'RunAsAny' | ||
+ | readOnlyRootFilesystem: false | ||
+ | </pre> | ||
+ | |||
+ | * Authorize (the above) Pod security policy: | ||
+ | <pre> | ||
+ | kind: ClusterRole | ||
+ | apiVersion: rbac.authorization.k8s.io/v1 | ||
+ | metadata: | ||
+ | name: psp-clusterrole | ||
+ | rules: | ||
+ | - apiGroups: | ||
+ | - policy | ||
+ | resources: | ||
+ | - podsecuritypolicies | ||
+ | resourceNames: | ||
+ | - demo-psp | ||
+ | verbs: | ||
+ | - use | ||
+ | </pre> | ||
+ | |||
+ | * Now, define a Role Binding: | ||
+ | <pre> | ||
+ | kind: RoleBinding | ||
+ | apiVersion: rbac.authorization.k8s.io/v1 | ||
+ | metadata: | ||
+ | name: psp-rolebinding | ||
+ | namespace: demo | ||
+ | roleRef: | ||
+ | apiGroup: rbac.authorization.k8s.io | ||
+ | kind: ClusterRole | ||
+ | name: psp-clusterrole | ||
+ | subjects: | ||
+ | - apiGroup: rbac.authorization.k8s.io | ||
+ | kind: Group | ||
+ | name: system:serviceaccounts | ||
+ | - kind: ServiceAccount | ||
+ | name: service@example.com | ||
+ | namespace: demo | ||
+ | </pre> | ||
+ | |||
+ | A Pod Security Policy controller must be enabled on a GKE cluster: | ||
+ | <pre> | ||
+ | $ gcloud container clusters update <name> \ | ||
+ | --enable-pod-security-policy | ||
+ | </pre> | ||
+ | |||
+ | '''WARNING:''' Careful, the order here matters. If you enable the pod security policy controller before defining any policies, you have just commanded that nothing is allowed to be deployed. | ||
+ | |||
+ | ; GKE recommended best practices | ||
+ | |||
+ | * Use container-optimized OS (COS) | ||
+ | * Enable automatic node upgrades (to run the latest available version of k8s) | ||
+ | * Use private clusters and master authorized networks, i.e., nodes do not have external IP addresses (see the sketch after this list) | ||
+ | * Use encrypted Secrets for sensitive info | ||
+ | * Assign roles to groups, not users. | ||
+ | * Do '''not''' enable Kubernetes Dashboard | ||
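+ | |||
+ | A sketch of a cluster-creation command that applies several of these recommendations (the cluster name and CIDR ranges are placeholders; private clusters also require VPC-native/IP-alias networking): | ||
+ | <pre> | ||
+ | $ gcloud container clusters create hardened-cluster \ | ||
+ |     --image-type=COS \ | ||
+ |     --enable-autoupgrade \ | ||
+ |     --enable-ip-alias \ | ||
+ |     --enable-private-nodes \ | ||
+ |     --master-ipv4-cidr=172.16.0.0/28 \ | ||
+ |     --enable-master-authorized-networks \ | ||
+ |     --master-authorized-networks=203.0.113.0/24 | ||
+ | </pre> | ||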
+ | |||
+ | ==Implementing Role-Based Access Control With Kubernetes Engine== | ||
+ | <!-- | ||
+ | In Cloud Shell enter the following command to clone the lab repository to the lab Cloud Shell. | ||
+ | |||
+ | $ git clone https://github.com/GoogleCloudPlatformTraining/training-data-analyst | ||
+ | $ cd ~/training-data-analyst/courses/ak8s/15_RBAC/ | ||
+ | --> | ||
+ | |||
+ | * List the current namespaces in the cluster: | ||
+ | <pre> | ||
+ | $ kubectl get namespaces | ||
+ | NAME STATUS AGE | ||
+ | default Active 77s | ||
+ | kube-public Active 77s | ||
+ | kube-system Active 77s | ||
+ | </pre> | ||
+ | |||
+ | * Create a Namespace called "production": | ||
+ | <pre> | ||
+ | $ cat << EOF > my-namespace.yaml | ||
+ | kind: Namespace | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ | name: production | ||
+ | EOF | ||
+ | |||
+ | $ kubectl create -f ./my-namespace.yaml | ||
+ | |||
+ | $ kubectl get namespaces | ||
+ | NAME STATUS AGE | ||
+ | default Active 2m16s | ||
+ | kube-public Active 2m16s | ||
+ | kube-system Active 2m16s | ||
+ | production Active 7s | ||
+ | |||
+ | $ kubectl describe namespaces production | ||
+ | Name: production | ||
+ | Labels: <none> | ||
+ | Annotations: <none> | ||
+ | Status: Active | ||
+ | |||
+ | No resource quota. | ||
+ | |||
+ | No resource limits. | ||
+ | </pre> | ||
+ | |||
+ | ; Create a Resource in a Namespace | ||
+ | |||
+ | If you do not specify the namespace of a Pod it will use the namespace <code>default</code>. | ||
+ | |||
+ | * Create a Pod that contains an Nginx container and specify which namespace to deploy it to: | ||
+ | <pre> | ||
+ | $ cat << EOF > my-pod.yaml | ||
+ | kind: Pod | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ | name: nginx | ||
+ | labels: | ||
+ | name: nginx | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: nginx | ||
+ | image: nginx | ||
+ | ports: | ||
+ | - containerPort: 80 | ||
+ | EOF | ||
+ | |||
+ | $ kubectl apply -f ./my-pod.yaml --namespace=production | ||
+ | </pre> | ||
+ | |||
+ | Alternatively, we could have specified the namespace in the YAML file: | ||
+ | <pre> | ||
+ | kind: Pod | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ | name: nginx | ||
+ | labels: | ||
+ | name: nginx | ||
+ | namespace: production | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: nginx | ||
+ | image: nginx | ||
+ | ports: | ||
+ | - containerPort: 80 | ||
+ | </pre> | ||
+ | |||
+ | * Try using the following command to view your Pod: | ||
+ | <pre> | ||
+ | $ kubectl get pods | ||
+ | No resources found. | ||
+ | </pre> | ||
+ | |||
+ | You will not see your Pod because kubectl checked the <code>default</code> namespace (by default) instead of our new namespace. | ||
+ | |||
+ | * Run the command again, but this time specify the new namespace: | ||
+ | <pre> | ||
+ | $ kubectl get pods --namespace=production | ||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | nginx 1/1 Running 0 54s | ||
+ | </pre> | ||
+ | |||
+ | Now you should see your newly created Pod. | ||
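+ | |||
+ | Optionally (not part of the lab), you can make <code>production</code> the default namespace for your current kubectl context, so that <code>--namespace</code> no longer has to be passed on every command: | ||
+ | <pre> | ||
+ | $ kubectl config set-context --current --namespace=production | ||
+ | $ kubectl get pods    # now defaults to the production namespace | ||
+ | </pre> | ||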
+ | |||
+ | ===About Roles and RoleBindings=== | ||
+ | |||
+ | In this section, we will create a sample custom role, and then create a RoleBinding that grants Username 2 the editor role in the production namespace. | ||
+ | |||
+ | * Define a role called <code>pod-reader</code> that provides create, get, list, and watch permissions for Pod objects in the production namespace. Note that this role cannot delete Pods: | ||
+ | <pre> | ||
+ | $ cat << EOF > pod-reader-role.yaml | ||
+ | kind: Role | ||
+ | apiVersion: rbac.authorization.k8s.io/v1 | ||
+ | metadata: | ||
+ | namespace: production | ||
+ | name: pod-reader | ||
+ | rules: | ||
+ | - apiGroups: [""] | ||
+ | resources: ["pods"] | ||
+ | verbs: ["create", "get", "list", "watch"] | ||
+ | </pre> | ||
+ | |||
+ | ; Create a custom Role | ||
+ | |||
+ | Before you can create a Role, your account must have the permissions granted in the role being assigned. For cluster administrators, this can be easily accomplished by creating the following RoleBinding to grant your own user account the cluster-admin role. | ||
+ | |||
+ | To grant the Username 1 account cluster-admin privileges, run the following command, replacing <code>[USERNAME_1_EMAIL]</code> with the email address of the Username 1 account: | ||
+ | <pre> | ||
+ | $ kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user [USERNAME_1_EMAIL] | ||
+ | </pre> | ||
+ | |||
+ | Now, create the role (defined above): | ||
+ | <pre> | ||
+ | $ kubectl apply -f pod-reader-role.yaml | ||
+ | |||
+ | $ kubectl get roles --namespace production | ||
+ | NAME AGE | ||
+ | pod-reader 8s | ||
+ | </pre> | ||
+ | |||
+ | ; Create a RoleBinding | ||
+ | |||
+ | The role is used to assign privileges, but by itself it does nothing. The role must be bound to a user and an object, which is done in the RoleBinding. | ||
+ | |||
+ | * Create a RoleBinding called <code>username2-editor</code> that binds the second lab user to the pod-reader role we created earlier. That role can create and view Pods but cannot delete them: | ||
+ | <pre> | ||
+ | $ cat << EOF > username2-editor-binding.yaml | ||
+ | kind: RoleBinding | ||
+ | apiVersion: rbac.authorization.k8s.io/v1 | ||
+ | metadata: | ||
+ | name: username2-editor | ||
+ | namespace: production | ||
+ | subjects: | ||
+ | - kind: User | ||
+ | name: [USERNAME_2_EMAIL] | ||
+ | apiGroup: rbac.authorization.k8s.io | ||
+ | roleRef: | ||
+ | kind: Role | ||
+ | name: pod-reader | ||
+ | apiGroup: rbac.authorization.k8s.io | ||
+ | EOF | ||
+ | </pre> | ||
+ | This file contains a placeholder, <code>[USERNAME_2_EMAIL]</code>, that we must replace with the email address of Username 2 before we apply it. | ||
+ | |||
+ | * Use [[sed]] to replace the placeholder in the file with the value of the <code>USER2</code> environment variable (which holds the email address of Username 2): | ||
+ | <pre> | ||
+ | $ sed -i "s/\[USERNAME_2_EMAIL\]/${USER2}/" username2-editor-binding.yaml | ||
+ | </pre> | ||
+ | |||
+ | * Confirm that the correct change has been made: | ||
+ | <pre> | ||
+ | $ cat username2-editor-binding.yaml | ||
+ | subjects: | ||
+ | - kind: User | ||
+ | name: gcpstaginguser68_student@qwiklabs.net | ||
+ | apiGroup: rbac.authorization.k8s.io | ||
+ | </pre> | ||
+ | |||
+ | We will apply this RoleBinding later. | ||
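+ | |||
+ | As an aside, an equivalent RoleBinding could also be generated imperatively instead of editing a YAML file with sed. This is only a sketch, not a required lab step; it assumes the <code>USER2</code> environment variable holds the Username 2 email address and a recent kubectl that supports <code>--dry-run=client</code>: | ||
+ | <pre> | ||
+ | # Print (but do not apply) a RoleBinding equivalent to username2-editor-binding.yaml | ||
+ | $ kubectl create rolebinding username2-editor \ | ||
+ |     --role=pod-reader --user=${USER2} \ | ||
+ |     --namespace=production --dry-run=client -o yaml | ||
+ | </pre> | ||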
+ | |||
+ | ; Test Access | ||
+ | |||
+ | Now we will test whether Username 2 can create a Pod in the <code>production</code> namespace. This manifest deploys a simple Pod with a single Nginx container: | ||
+ | <pre> | ||
+ | $ cat << EOF > production-pod.yaml | ||
+ | kind: Pod | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ | name: production-pod | ||
+ | labels: | ||
+ | name: production-pod | ||
+ | namespace: production | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: production-pod | ||
+ | image: nginx | ||
+ | ports: | ||
+ | - containerPort: 8080 | ||
+ | EOF | ||
+ | </pre> | ||
+ | |||
+ | Switch back to the Username 2 GCP Console tab. Make sure you are on the Username 2 GCP Console tab. | ||
+ | |||
+ | In Cloud Shell for Username 2, type the following commands to set the environment variables for the zone and cluster name and to configure kubectl access to the cluster: | ||
+ | <pre> | ||
+ | $ export my_zone=us-central1-a | ||
+ | $ export my_cluster=standard-cluster-1 | ||
+ | $ source <(kubectl completion bash) | ||
+ | $ gcloud container clusters get-credentials $my_cluster --zone $my_zone | ||
+ | </pre> | ||
+ | |||
+ | Check if Username 2 can see the production namespace: | ||
+ | <pre> | ||
+ | $ kubectl get namespaces | ||
+ | NAME STATUS AGE | ||
+ | default Active 11m | ||
+ | kube-public Active 11m | ||
+ | kube-system Active 11m | ||
+ | production Active 9m8s | ||
+ | </pre> | ||
+ | |||
+ | The production namespace appears at the bottom of the list, so we can continue. | ||
+ | |||
+ | * Create the resource in the namespace called production: | ||
+ | <pre> | ||
+ | $ kubectl apply -f ./production-pod.yaml | ||
+ | Error from server (Forbidden): error when creating "./production-pod.yaml": pods is forbidden: User "student-c2126354c28c@qwiklabs.net" cannot create resource "pods" in API group "" in the namespace "production" | ||
+ | </pre> | ||
+ | |||
+ | The above command fails, indicating that Username 2 does not have the correct permission to create Pods. Username 2 only has the viewer permissions it started the lab with at this point because you have not bound any other role to that account yet. You will now change that. | ||
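+ | |||
+ | You can also check this kind of permission directly with <code>kubectl auth can-i</code>. This is an optional sketch; before the RoleBinding is applied you would expect the answer to be "no": | ||
+ | <pre> | ||
+ | $ kubectl auth can-i create pods --namespace production | ||
+ | no | ||
+ | </pre> | ||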
+ | |||
+ | Switch back to the Username 1 GCP Console tab. | ||
+ | Make sure you are on the Username 1 GCP Console tab. | ||
+ | |||
+ | In the Cloud Shell for Username 1, execute the following command to create the RoleBinding that grants Username 2 the pod-reader role that includes the permission to create Pods in the production namespace: | ||
+ | <pre> | ||
+ | $ kubectl apply -f username2-editor-binding.yaml | ||
+ | </pre> | ||
+ | |||
+ | In the Cloud Shell for Username 1, execute the following command to look for the new role binding: | ||
+ | <pre> | ||
+ | $ kubectl get rolebinding | ||
+ | No resources found. | ||
+ | </pre> | ||
+ | |||
+ | The rolebinding does not appear because kubectl is showing the default namespace. | ||
+ | |||
+ | In the Cloud Shell for Username 1, execute the following command with the production namespace specified: | ||
+ | <pre> | ||
+ | $ kubectl get rolebinding --namespace production | ||
+ | NAME AGE | ||
+ | username2-editor 49s | ||
+ | </pre> | ||
+ | |||
+ | Switch back to the Username 2 GCP Console tab. | ||
+ | Make sure you are on the Username 2 GCP Console tab. | ||
+ | |||
+ | In the Cloud Shell for Username 2, execute the following command to create the resource in the namespace called production: | ||
+ | <pre> | ||
+ | $ kubectl apply -f ./production-pod.yaml | ||
+ | pod/production-pod created | ||
+ | </pre> | ||
+ | |||
+ | This now succeeds because Username 2 has the create permission for Pods in the production namespace. | ||
+ | |||
+ | * Verify the Pod deployed properly in the production namespace: | ||
+ | <pre> | ||
+ | $ kubectl get pods --namespace production | ||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | nginx 1/1 Running 0 11m | ||
+ | production-pod 1/1 Running 0 26s | ||
+ | </pre> | ||
+ | |||
+ | Verify that only the specific RBAC permissions granted by the <code>pod-reader</code> role are in effect for Username 2 by attempting to delete the <code>production-pod</code>: | ||
+ | <pre> | ||
+ | $ kubectl delete pod production-pod --namespace production | ||
+ | Error from server (Forbidden): pods "production-pod" is forbidden: User "student-c2126354c28c@qwiklabs.net" cannot delete resource "pods" in API group "" in the namespace "production" | ||
+ | </pre> | ||
+ | |||
+ | This fails because Username 2 does not have the delete permission for Pods. | ||
+ | |||
+ | ==Pod security policies== | ||
+ | |||
+ | ===Creating a Pod Security Policy=== | ||
+ | |||
+ | <!-- | ||
+ | In Cloud Shell enter the following command to clone the repository to the lab Cloud Shell. | ||
+ | $ git clone https://github.com/GoogleCloudPlatformTraining/training-data-analyst | ||
+ | $ cd ~/training-data-analyst/courses/ak8s/14_IAM/ | ||
+ | --> | ||
+ | |||
+ | In this section, we will create a Pod Security Policy. This policy does not allow privileged Pods and restricts <code>runAsUser</code> to non-root accounts only, preventing the user of the Pod from escalating to root: | ||
+ | <pre> | ||
+ | $ cat << EOF > restricted-psp.yaml | ||
+ | kind: PodSecurityPolicy | ||
+ | apiVersion: policy/v1beta1 | ||
+ | metadata: | ||
+ | name: restricted-psp | ||
+ | spec: | ||
+ | privileged: false # Don't allow privileged pods! | ||
+ | seLinux: | ||
+ | rule: RunAsAny | ||
+ | supplementalGroups: | ||
+ | rule: RunAsAny | ||
+ | runAsUser: | ||
+ | rule: MustRunAsNonRoot | ||
+ | fsGroup: | ||
+ | rule: RunAsAny | ||
+ | volumes: | ||
+ | - '*' | ||
+ | EOF | ||
+ | |||
+ | $ kubectl apply -f restricted-psp.yaml | ||
+ | |||
+ | $ kubectl get podsecuritypolicy restricted-psp | ||
+ | NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES | ||
+ | restricted-psp false RunAsAny MustRunAsNonRoot RunAsAny RunAsAny false * | ||
+ | </pre> | ||
+ | NOTE: This policy has no effect until a cluster role is created and bound to a user or service account with the permission to "use" the policy. | ||
+ | |||
+ | ; Create a ClusterRole for the Pod Security Policy | ||
+ | |||
+ | * Create a ClusterRole that includes the resource we created in the last section (<code>restricted-psp</code>), and grant the subject the ability to use the <code>restricted-psp</code> resource. The subject is the user or service account that is bound to this role. We will bind an account to this role later to enable the use of the policy: | ||
+ | <pre> | ||
+ | $ cat << EOF > psp-cluster-role.yaml | ||
+ | kind: ClusterRole | ||
+ | apiVersion: rbac.authorization.k8s.io/v1 | ||
+ | metadata: | ||
+ | name: restricted-pods-role | ||
+ | rules: | ||
+ | - apiGroups: | ||
+ | - extensions | ||
+ | resources: | ||
+ | - podsecuritypolicies | ||
+ | resourceNames: | ||
+ | - restricted-psp | ||
+ | verbs: | ||
+ | - use | ||
+ | EOF | ||
+ | </pre> | ||
+ | |||
+ | However, before we can create a Role, the account we use to create the role must already have the permissions granted in the role being assigned. For cluster administrators, this can be easily accomplished by creating the necessary RoleBinding to grant your own user account the cluster-admin role. | ||
+ | |||
+ | * To grant your user account cluster-admin privileges, run the following command, replacing [USERNAME_1_EMAIL] with the email address of the Username 1 account: | ||
+ | <pre> | ||
+ | $ kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user [USERNAME_1_EMAIL] | ||
+ | </pre> | ||
+ | |||
+ | * Create the ClusterRole with access to the security policy: | ||
+ | <pre> | ||
+ | $ kubectl apply -f psp-cluster-role.yaml | ||
+ | |||
+ | $ kubectl get clusterrole restricted-pods-role | ||
+ | NAME AGE | ||
+ | restricted-pods-role 7s | ||
+ | </pre> | ||
+ | |||
+ | The ClusterRole is ready, but it is not yet bound to a subject, and therefore is not yet active. | ||
+ | |||
+ | ; Create a RoleBinding for the Pod Security Policy | ||
+ | |||
+ | The next step in the process involves binding the ClusterRole to a subject (a user or service account) that will be responsible for creating Pods in the target namespace. Typically these policies are assigned to service accounts, because Pods are usually created by controllers such as Deployments (via their ReplicaSets) rather than as one-off executions by a human user. | ||
+ | |||
+ | * Bind the <code>restricted-pods-role</code> (created in the last section) to the <code>system:serviceaccounts</code> group in the <code>default</code> Namespace: | ||
+ | <pre> | ||
+ | $ cat << EOF > psp-cluster-role-binding.yaml | ||
+ | kind: RoleBinding | ||
+ | apiVersion: rbac.authorization.k8s.io/v1 | ||
+ | metadata: | ||
+ | name: restricted-pod-rolebinding | ||
+ | namespace: default | ||
+ | roleRef: | ||
+ | apiGroup: rbac.authorization.k8s.io | ||
+ | kind: ClusterRole | ||
+ | name: restricted-pods-role | ||
+ | subjects: | ||
+ | # Example: All service accounts in default namespace | ||
+ | - apiGroup: rbac.authorization.k8s.io | ||
+ | kind: Group | ||
+ | name: system:serviceaccounts | ||
+ | EOF | ||
+ | |||
+ | $ kubectl apply -f psp-cluster-role-binding.yaml | ||
+ | </pre> | ||
+ | |||
+ | ; Activate Security Policy | ||
+ | |||
+ | The PodSecurityPolicy controller must be enabled to affect the admission control of new Pods in the cluster. | ||
+ | |||
+ | '''Caution!''' If you do not define and authorize policies prior to enabling the PodSecurityPolicy controller, no Pods will be permitted to execute on the cluster. | ||
+ | |||
+ | * Enable the PodSecurityPolicy controller: | ||
+ | <pre> | ||
+ | $ gcloud beta container clusters update $my_cluster --zone $my_zone --enable-pod-security-policy | ||
+ | </pre> | ||
+ | This process takes several minutes to complete. | ||
+ | |||
+ | Note: The PodSecurityPolicy controller can be disabled by running this command: | ||
+ | <pre> | ||
+ | $ gcloud beta container clusters update [CLUSTER_NAME] --no-enable-pod-security-policy | ||
+ | </pre> | ||
+ | |||
+ | ; Test the Pod Security Policy | ||
+ | |||
+ | The final step in the process involves testing to see if the Policy is active. This Pod attempts to start an nginx container in a privileged context: | ||
+ | <pre> | ||
+ | $ cat << EOF > privileged-pod.yaml | ||
+ | kind: Pod | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ | name: privileged-pod | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: privileged-pod | ||
+ | image: nginx | ||
+ | securityContext: | ||
+ | privileged: true | ||
+ | EOF | ||
+ | |||
+ | $ kubectl apply -f privileged-pod.yaml | ||
+ | |||
+ | Error from server (Forbidden): error when creating "privileged-pod.yaml": pods "privileged-pod-1" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed] | ||
+ | </pre> | ||
+ | You should not be able to deploy the privileged Pod. | ||
+ | |||
+ | Edit the <code>privileged-pod.yaml</code> manifest and remove the two lines at the bottom that invoke the privileged container security context. The file should now look as follows: | ||
+ | <pre> | ||
+ | kind: Pod | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ | name: privileged-pod | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: privileged-pod | ||
+ | image: nginx | ||
+ | </pre> | ||
+ | |||
+ | * Re-deploy the privileged Pod: | ||
+ | <pre> | ||
+ | $ kubectl apply -f privileged-pod.yaml | ||
+ | </pre> | ||
+ | |||
+ | The command now succeeds because the container no longer requires a privileged security context. | ||
+ | |||
+ | ===Rotate IP Address and Credentials=== | ||
+ | |||
+ | In this section, we will perform IP and credential rotation on our cluster. It is a security best practice to do so regularly to reduce credential lifetimes. While there are separate commands to rotate the serving IP and credentials, rotating credentials additionally rotates the IP as well. | ||
+ | |||
+ | * Update the GKE cluster to start the credential rotation process: | ||
+ | <pre> | ||
+ | $ gcloud container clusters update $my_cluster --zone $my_zone --start-credential-rotation | ||
+ | </pre> | ||
+ | |||
+ | After the command completes, the cluster will initiate the process to update each of the nodes. That process can take up to 15 minutes for your cluster. The process also automatically updates the kubeconfig entry for the current user. | ||
+ | |||
+ | The cluster master now temporarily serves the new IP address in addition to the original address. | ||
+ | |||
+ | Note: You must update the kubeconfig file on any other system that uses kubectl or the API to access the master before completing the rotation process, to avoid losing access. | ||
+ | |||
+ | * Complete the credential and IP rotation process: | ||
+ | <pre> | ||
+ | $ gcloud container clusters update $my_cluster --zone $my_zone --complete-credential-rotation | ||
+ | </pre> | ||
+ | |||
+ | This finalizes the rotation processes and removes the original cluster IP address. | ||
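+ | |||
+ | Any other workstation that uses kubectl against this cluster should refresh its kubeconfig entry during the rotation window. A minimal sketch, assuming the same cluster name and zone variables are set on that machine: | ||
+ | <pre> | ||
+ | # Re-fetch cluster credentials so kubectl points at the new master endpoint | ||
+ | $ gcloud container clusters get-credentials $my_cluster --zone $my_zone | ||
+ | # Confirm that kubectl can still reach the control plane | ||
+ | $ kubectl cluster-info | ||
+ | </pre> | ||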
+ | |||
+ | ==Stackdriver== | ||
+ | |||
+ | ; Metrics vs. Events | ||
+ | |||
+ | * Metrics: Represent system performance (e.g., CPU or disk usage). These can be values that change up or down over time (gauges) or values that only increase over time (counters). | ||
+ | *: Returns numerical values | ||
+ | * Events: Represent actions, such as Pod restarts or scale-in/scale-out activity. | ||
+ | *: Returns "success", "warning", or "failure". | ||
+ | |||
+ | ===Logging=== | ||
+ | |||
+ | Logging is often viewed as a passive form of systems monitoring. | ||
+ | |||
+ | Stackdriver stores logs for 30 days (default) and up to 50GB is free. | ||
+ | |||
+ | After 30 days, Stackdriver purges your logs. If you wish to keep these logs, export them to BigQuery or Cloud Storage for long-term storage (longer than 30 days). | ||
+ | |||
+ | Node log files (stored in <code>/var/log</code> on each node) that are older than 1 day or that reach 100 MB are compressed and rotated (using the standard Linux logrotate). Only the 5 most recent log files are kept on each node. However, all logs are streamed to Stackdriver (in JSON format) and stored there for 30 days. | ||
+ | |||
+ | GKE installs a logging agent on every node in a cluster. This streams the logs of every container/pod into Stackdriver, using FluentD (running as a DaemonSet). The configuration of FluentD is managed via ConfigMaps. | ||
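+ | |||
+ | On clusters that use the legacy Stackdriver logging agent, you can see these components in the <code>kube-system</code> namespace. This is only an illustrative check; the exact resource names vary by GKE version: | ||
+ | <pre> | ||
+ | # The Fluentd collector runs as a DaemonSet; its configuration lives in ConfigMaps | ||
+ | $ kubectl get daemonsets --namespace kube-system | ||
+ | $ kubectl get configmaps --namespace kube-system | ||
+ | </pre> | ||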
+ | |||
+ | ===Monitoring=== | ||
+ | |||
+ | In GKE, monitoring is divided into 2 domains: | ||
+ | # Cluster-level: | ||
+ | #* Master nodes (api-server, etcd, scheduler, controller-manager, cloud-controller-manager) | ||
+ | #* Worker nodes | ||
+ | #* Number of nodes, node utilization, pods/deployments running, errors and failures. | ||
+ | # Pods: | ||
+ | #* container metrics | ||
+ | #* application metrics | ||
+ | #* system metrics | ||
+ | |||
+ | ===Probes=== | ||
+ | |||
+ | The best practice is to apply additional health checks to your (microservices) Pods: | ||
+ | * Liveness probes: | ||
+ | ** Is the container running? | ||
+ | ** If not, restart the container (if RestartPolicy is set to <code>Always</code> or <code>OnFailure</code>) | ||
+ | * Readiness probes: | ||
+ | ** Is the container ready to accept requests? | ||
+ | ** If not, remove the Pod's IP address from all Service endpoints (by the endpoint controller) | ||
+ | |||
+ | These probes can be defined using three types of handlers: | ||
+ | # command; | ||
+ | # HTTP; and | ||
+ | # TCP | ||
+ | |||
+ | * Example of a command probe handler: | ||
+ | <pre> | ||
+ | kind: Pod | ||
+ | apiVersion: v1 | ||
+ | metadata: | ||
+ | name: demo-pod | ||
+ | namespace: default | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: liveness | ||
+ | livenessProbe: | ||
+ | exec: | ||
+ | command: | ||
+ | - cat | ||
+ | - /tmp/ready | ||
+ | </pre> | ||
+ | |||
+ | If <code>cat /tmp/ready</code> returns an exit code of <code>0</code>, the liveness probe reports that the container is successful. | ||
+ | |||
+ | * Example of an HTTP probe handler: | ||
+ | <pre> | ||
+ | [...] | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: liveness | ||
+ | livenessProbe: | ||
+ | httpGet: | ||
+ | path: /healthz | ||
+ | port: 8080 | ||
+ | </pre> | ||
+ | |||
+ | If the handler returns an HTTP status code between 200 and 399, the probe is considered successful; otherwise, the kubelet kills and restarts the container. | ||
+ | |||
+ | * Example of a TCP probe handler: | ||
+ | <pre> | ||
+ | [...] | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: liveness | ||
+ | livenessProbe: | ||
+ | tcpSocket: | ||
+ | port: 8080 | ||
+ | # optional: | ||
+ | initialDelaySeconds: 15 | ||
+ | periodSeconds: 10 | ||
+ | timeoutSeconds: 1 | ||
+ | successThreshold: 1 | ||
+ | failureThreshold: 3 | ||
+ | </pre> | ||
+ | |||
+ | If the connection is established, the container is considered healthy. | ||
+ | |||
+ | ===Using Prometheus monitoring with Stackdriver=== | ||
+ | |||
+ | ; Set up Prometheus monitoring with GKE and Stackdriver | ||
+ | |||
+ | When you configure Stackdriver Kubernetes Monitoring with Prometheus support, metrics from services that expose data in the Prometheus data model can be exported from the cluster and made visible as external metrics in Stackdriver. | ||
+ | |||
+ | In this task, you create the Prometheus service-account and a cluster role called prometheus and then use those when you deploy the container for the Prometheus service to provide the permissions that Prometheus requires. | ||
+ | |||
+ | The file rbac-setup.yml that is included in the [https://github.com/GoogleCloudPlatformTraining/training-data-analyst source repository] is a Kubernetes manifest file that creates the Kubernetes service account and cluster role for you. | ||
+ | |||
+ | In the Cloud Shell, execute the following commands to set up the Kubernetes service account and cluster role (both are named "prometheus") for the collector: | ||
+ | <pre> | ||
+ | $ git clone https://github.com/GoogleCloudPlatformTraining/training-data-analyst | ||
+ | $ cd ~/training-data-analyst/courses/ak8s/16_Logging/ | ||
+ | $ kubectl apply -f rbac-setup.yml --as=admin --as-group=system:masters | ||
+ | </pre> | ||
+ | |||
+ | A basic Prometheus configuration file called <code>prometheus-service.yml</code> has also been provided for you. It creates a Kubernetes Namespace called <code>stackdriver</code>, a Deployment that creates a single replica of the Stackdriver Prometheus container, and a ConfigMap that defines the configuration of the Prometheus collector. You modify values in the ConfigMap section of <code>prometheus-service.yml</code> so that it will monitor the GKE cluster you created for this lab. | ||
+ | |||
+ | * Replace the placeholder variable in the prometheus-service.yml file with your current project ID: | ||
+ | <pre> | ||
+ | sed -i 's/prometheus-to-sd/'"${GOOGLE_CLOUD_PROJECT}"'/g' \ | ||
+ | prometheus-service.yml | ||
+ | </pre> | ||
+ | |||
+ | * Replace the placeholder variable in the prometheus-service.yml file with your current cluster name: | ||
+ | <pre> | ||
+ | sed -i 's/prom-test-cluster-2/'"${my_cluster}"'/g' \ | ||
+ | prometheus-service.yml | ||
+ | </pre> | ||
+ | |||
+ | * Replace the placeholder variable in the prometheus-service.yml file with the GCP zone for the cluster: | ||
+ | <pre> | ||
+ | sed -i 's/us-central1-a/'"${my_zone}"'/g' prometheus-service.yml | ||
+ | </pre> | ||
+ | |||
+ | * Start the prometheus server using your modified configuration: | ||
+ | <pre> | ||
+ | $ kubectl apply -f prometheus-service.yml | ||
+ | </pre> | ||
+ | |||
+ | After configuring Prometheus, run the following command to validate the installation: | ||
+ | <pre> | ||
+ | $ kubectl get deployment,service -n stackdriver | ||
+ | NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE | ||
+ | deployment.extensions/prometheus 1 1 1 1 8s | ||
+ | |||
+ | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE | ||
+ | service/prometheus ClusterIP 10.12.2.179 <none> 9090/TCP 8s | ||
+ | </pre> | ||
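+ | |||
+ | If you want to inspect the collector itself, you can optionally port-forward to the Prometheus Service and browse its UI on localhost (for example via the Cloud Shell web preview). This is a sketch, not a lab step: | ||
+ | <pre> | ||
+ | # Forward local port 9090 to the Prometheus service in the stackdriver namespace | ||
+ | $ kubectl port-forward svc/prometheus 9090:9090 -n stackdriver | ||
+ | </pre> | ||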
+ | |||
+ | ===Using Liveness and Readiness probes for GKE Pods=== | ||
+ | |||
+ | In this section, we will deploy a liveness probe to detect applications that have transitioned from a running state to a broken state. Sometimes, applications are temporarily unable to serve traffic. For example, an application might need to load large data or configuration files during startup. In such cases, you do not want to kill the application, but you do not want to send it requests either. Kubernetes provides readiness probes to detect and mitigate these situations. A Pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services. | ||
+ | |||
+ | Readiness probes are configured similarly to liveness probes. The only difference is that you use the <code>readinessProbe</code> field instead of the <code>livenessProbe</code> field. | ||
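+ | |||
+ | For example, a readiness probe that checks an HTTP health endpoint might look like the following sketch (the <code>/healthz</code> path and port 8080 are assumptions for illustration, not values used elsewhere in this lab): | ||
+ | <pre> | ||
+ | [...] | ||
+ | spec: | ||
+ |   containers: | ||
+ |   - name: web | ||
+ |     readinessProbe: | ||
+ |       httpGet: | ||
+ |         path: /healthz | ||
+ |         port: 8080 | ||
+ |       initialDelaySeconds: 5 | ||
+ |       periodSeconds: 10 | ||
+ | </pre> | ||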
+ | |||
+ | * Define and deploy a simple container called liveness running Busybox, with a liveness probe that runs the <code>cat</code> command against the file <code>/tmp/healthy</code> inside the container to test for liveness every 5 seconds. The startup command for the liveness container creates the <code>/tmp/healthy</code> file on startup and then deletes it 30 seconds later to simulate an outage that the liveness probe can detect: | ||
+ | <pre> | ||
+ | $ cat << EOF > exec-liveness.yaml | ||
+ | apiVersion: v1 | ||
+ | kind: Pod | ||
+ | metadata: | ||
+ | labels: | ||
+ | test: liveness | ||
+ | name: liveness-exec | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: liveness | ||
+ | image: k8s.gcr.io/busybox | ||
+ | args: | ||
+ | - /bin/sh | ||
+ | - -c | ||
+ | - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600 | ||
+ | livenessProbe: | ||
+ | exec: | ||
+ | command: | ||
+ | - cat | ||
+ | - /tmp/healthy | ||
+ | initialDelaySeconds: 5 | ||
+ | periodSeconds: 5 | ||
+ | EOF | ||
+ | |||
+ | $ kubectl create -f exec-liveness.yaml | ||
+ | </pre> | ||
+ | |||
+ | * Within 30 seconds, view the Pod events: | ||
+ | <pre> | ||
+ | $ kubectl describe pod liveness-exec | ||
+ | |||
+ | Type: Secret (a volume populated by a Secret) | ||
+ | SecretName: default-token-wq52t | ||
+ | Optional: false | ||
+ | QoS Class: Burstable | ||
+ | Node-Selectors: <none> | ||
+ | Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s | ||
+ | node.kubernetes.io/unreachable:NoExecute for 300s | ||
+ | |||
+ | Events: | ||
+ | Type Reason Age ... Message | ||
+ | ---- ------ ---- ... ------- | ||
+ | Normal Scheduled 11s ... Successfully assigned liveness-e ... | ||
+ | Normal Su...ntVolume 10s ... MountVolume.SetUp succeeded for ... | ||
+ | Normal Pulling 10s ... pulling image "k8s.gcr.io/busybox" | ||
+ | Normal Pulled 9s ... Successfully pulled image "k8s.g ... | ||
+ | Normal Created 9s ... Created container | ||
+ | Normal Started 9s ... Started container | ||
+ | </pre> | ||
+ | The output indicates that no liveness probes have failed yet. | ||
+ | |||
+ | After 35 seconds, view the Pod events again: | ||
+ | <pre> | ||
+ | $ kubectl describe pod liveness-exec | ||
+ | </pre> | ||
+ | |||
+ | At the bottom of the output, there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated: | ||
+ | <pre> | ||
+ | Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory | ||
+ | ... | ||
+ | Killing container with id docker://liveness:Container failed liveness probe.. Container will be killed and recreated. | ||
+ | </pre> | ||
+ | |||
+ | <pre> | ||
+ | $ kubectl get pod liveness-exec | ||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | liveness-exec 1/1 Running 2 2m15s | ||
+ | </pre> | ||
+ | |||
+ | ===Use Stackdriver Logging with GKE=== | ||
+ | |||
+ | In this section, we will deploy a GKE cluster and demo application using Terraform that creates sample Stackdriver logging events. You view the logs for GKE resources in Logging and then create and monitor a custom monitoring metric created using a Stackdriver log filter. | ||
+ | |||
+ | <!-- | ||
+ | Install Terraform | ||
+ | You download and unzip Terraform and add the directory containing the Terraform executable to the search path. | ||
+ | |||
+ | In Cloud Shell, change to your home directory and download the ZIP-formatted Terraform distribution: | ||
+ | <pre> | ||
+ | cd | ||
+ | wget https://releases.hashicorp.com/terraform/0.12.3/terraform_0.12.3_linux_amd64.zip | ||
+ | unzip terraform_0.12.3_linux_amd64.zip | ||
+ | export PATH=$PATH:$PWD | ||
+ | </pre> | ||
+ | --> | ||
+ | |||
+ | ; Download Sample Logging Tool | ||
+ | |||
+ | We will download a Terraform configuration that creates a GKE cluster and then deploy a sample web application to that cluster to generate Logging events. | ||
+ | |||
+ | * Setup: | ||
+ | <pre> | ||
+ | $ mkdir ~/terraform-demo | ||
+ | $ cd ~/terraform-demo | ||
+ | $ git clone https://github.com/GoogleCloudPlatformTraining/gke-logging-sinks-demo | ||
+ | $ cd ~/terraform-demo/gke-logging-sinks-demo/ | ||
+ | </pre> | ||
+ | |||
+ | ; Deploy The Sample Logging Tool | ||
+ | |||
+ | We will now deploy the GKE Stackdriver Logging demo using Terraform. | ||
+ | |||
+ | * Set your zone and region: | ||
+ | <pre> | ||
+ | $ gcloud config set compute/region us-central1 | ||
+ | $ gcloud config set compute/zone us-central1-a | ||
+ | </pre> | ||
+ | |||
+ | * Instruct Terraform to run the sample logging tool: | ||
+ | <pre> | ||
+ | $ make create | ||
+ | </pre> | ||
+ | |||
+ | This process takes 2-3 minutes to complete. When complete you will see the message: | ||
+ | |||
+ | <pre> | ||
+ | Apply complete! Resources: 8 added, 0 changed, 0 destroyed. | ||
+ | </pre> | ||
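+ | |||
+ | As an optional check, you can confirm that Terraform created the demo cluster by listing the clusters in your project: | ||
+ | <pre> | ||
+ | $ gcloud container clusters list | ||
+ | </pre> | ||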
+ | |||
+ | ==Using Cloud SQL with Kubernetes Engine== | ||
+ | |||
+ | CloudSQL Proxy is set up as a sidecar container running alongside your app container in your Pod. | ||
+ | |||
+ | ===Overview=== | ||
+ | |||
+ | In this section, we will set up a Kubernetes Deployment of WordPress connected to Cloud SQL via the SQL Proxy. The SQL Proxy lets you interact with a Cloud SQL instance as if it were installed locally (<code>localhost:3306</code>), and even though you are on an unsecured port locally, the SQL Proxy makes sure you are secure over the wire to your Cloud SQL Instance. | ||
+ | |||
+ | To complete this section, we will create several components: | ||
+ | * Create a GKE cluster; | ||
+ | * Create a Cloud SQL Instance to connect to, and a Service Account to provide permission for our Pods to access the Cloud SQL Instance; and, finally | ||
+ | * Deploy WordPress on your GKE cluster, with the SQL Proxy as a Sidecar, connected to our Cloud SQL Instance. | ||
+ | |||
+ | ===Objectives=== | ||
+ | |||
+ | In this section, we will perform the following tasks: | ||
+ | * Create a Cloud SQL instance and database for Wordpress | ||
+ | * Create credentials and Kubernetes Secrets for application authentication | ||
+ | * Configure a Deployment with a Wordpress image to use SQL Proxy | ||
+ | * Install SQL Proxy as a sidecar container and use it to provide SSL access to a CloudSQL instance external to the GKE Cluster | ||
+ | |||
+ | ; Create a GKE cluster | ||
+ | |||
+ | * Setup: | ||
+ | <pre> | ||
+ | $ export my_zone=us-central1-a | ||
+ | $ export my_cluster=standard-cluster-1 | ||
+ | $ source <(kubectl completion bash) | ||
+ | </pre> | ||
+ | |||
+ | * Create a VPC-native Kubernetes cluster: | ||
+ | <pre> | ||
+ | $ gcloud container clusters create $my_cluster \ | ||
+ | --num-nodes 3 --enable-ip-alias --zone $my_zone | ||
+ | </pre> | ||
+ | |||
+ | * Configure access to the cluster for kubectl: | ||
+ | <pre> | ||
+ | $ gcloud container clusters get-credentials $my_cluster --zone $my_zone | ||
+ | </pre> | ||
+ | |||
+ | * Get the repository: | ||
+ | <pre> | ||
+ | $ git clone https://github.com/GoogleCloudPlatformTraining/training-data-analyst | ||
+ | $ cd ~/training-data-analyst/courses/ak8s/18_Cloud_SQL/ | ||
+ | </pre> | ||
+ | |||
+ | ; Create a Cloud SQL Instance | ||
+ | |||
+ | * Create the SQL instance: | ||
+ | <pre> | ||
+ | $ gcloud sql instances create sql-instance --tier=db-n1-standard-2 --region=us-central1 | ||
+ | </pre> | ||
+ | |||
+ | * In the GCP Console, navigate to '''SQL'''. | ||
+ | * You should see <code>sql-instance</code> listed, click on the name, and then click on the '''Users''' tab. | ||
+ | *: You will have to wait a few minutes for the Cloud SQL instance to be provisioned. When you see the existing <code>mysql.sys</code> and <code>root</code> users listed you can proceed to the next step. | ||
+ | * Click '''Create User Account''' and create an account, using sqluser as the username and sqlpassword as the password. | ||
+ | * Leave the Hostname option set to '''Allow any host (%)''' and click '''Create'''. | ||
+ | * Go back to '''Overview''' tab, still in your instance (sql-instance), and copy your Instance connection name. | ||
+ | *: You will probably need to scroll down a bit to see it. | ||
+ | |||
+ | Create an environment variable to hold your Cloud SQL instance name, substituting the placeholder with the name you copied in the previous step. | ||
+ | <pre> | ||
+ | export SQL_NAME=[Cloud SQL Instance Name] | ||
+ | </pre> | ||
+ | Your command should look similar to the following: | ||
+ | <pre> | ||
+ | export SQL_NAME=xtof-gcp-gcpd-e506927dfe49:us-central1:sql-instance | ||
+ | </pre> | ||
+ | |||
+ | * Connect to your Cloud SQL instance. | ||
+ | <pre> | ||
+ | $ gcloud sql connect sql-instance | ||
+ | </pre> | ||
+ | |||
+ | When prompted to enter the root password press enter. The root SQL user password is blank by default. | ||
+ | The <code>MySQL [(none)]></code> prompt appears, indicating that you are now connected to the Cloud SQL instance using the MySQL client. | ||
+ | |||
+ | * Create the database required for Wordpress (this is called wordpress by default): | ||
+ | <pre> | ||
+ | MySQL [(none)]> create database wordpress; | ||
+ | MySQL [(none)]> use wordpress; | ||
+ | MySQL [wordpress]> show tables; # <- This will report Empty set as you have not created any tables yet. | ||
+ | MySQL [wordpress]> exit; | ||
+ | </pre> | ||
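+ | |||
+ | As an optional check from Cloud Shell (without the MySQL client), you can list the databases on the instance: | ||
+ | <pre> | ||
+ | $ gcloud sql databases list --instance=sql-instance | ||
+ | </pre> | ||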
+ | |||
+ | ; Prepare a Service Account with Permission to Access Cloud SQL | ||
+ | |||
+ | * To create a Service Account, in the GCP Console navigate to '''IAM & admin > Service''' accounts. | ||
+ | * Click '''+ Create Service Account'''. | ||
+ | * Specify a Service account name called <code>sql-access</code> then click '''Create'''. | ||
+ | * Click '''Select a role'''. | ||
+ | * Search for '''Cloud SQL''', select '''Cloud SQL Client''' and click '''Continue'''. | ||
+ | * Click '''+Create Key''', and make sure '''JSON''' key type is selected and click '''Create'''. | ||
+ | *: This will create a public/private key pair, and download the private key file automatically to your computer. You will need this JSON file later. | ||
+ | * Click '''Close''' to close the notification dialogue. | ||
+ | * Locate the JSON credential file you downloaded and rename it to <code>credentials.json</code>. | ||
+ | * Click '''Done'''. | ||
+ | |||
+ | ; Create Secrets | ||
+ | |||
+ | We will create two Kubernetes Secrets: one to provide the MySQL credentials and one to provide the Google credentials (the service account). | ||
+ | |||
+ | * Create a Secret for your MySQL credentials: | ||
+ | <pre> | ||
+ | $ kubectl create secret generic sql-credentials \ | ||
+ | --from-literal=username=sqluser \ | ||
+ | --from-literal=password=sqlpassword | ||
+ | </pre> | ||
+ | |||
+ | If you used a different username and password when creating the Cloud SQL user account, substitute those values here. | ||
+ | |||
+ | * Create a Secret for your GCP Service Account credentials: | ||
+ | <pre> | ||
+ | $ kubectl create secret generic google-credentials \ | ||
+ | --from-file=key.json=credentials.json | ||
+ | </pre> | ||
+ | |||
+ | Note that the file is uploaded to the Secret using the name <code>key.json</code>. That is the file name that a container will see when this Secret is attached as a Secret Volume. | ||
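+ | |||
+ | To verify that both Secrets were created without printing their contents, you can describe them (an optional check): | ||
+ | <pre> | ||
+ | $ kubectl get secrets | ||
+ | # Shows the data keys and sizes, but not the secret values themselves | ||
+ | $ kubectl describe secret google-credentials | ||
+ | </pre> | ||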
+ | |||
+ | ; Deploy the SQL Proxy agent as a sidecar container | ||
+ | |||
+ | A sample deployment manifest file called <code>sql-proxy.yaml</code> has been provided for you that deploys a demo Wordpress application container with the SQL Proxy agent as a sidecar container. | ||
+ | |||
+ | In the Wordpress container environment settings the <code>WORDPRESS_DB_HOST</code> is specified using the localhost IP address. The <code>cloudsql-proxy</code> sidecar container is configured to point to the Cloud SQL instance you created in the previous task. The database username and password are passed to the Wordpress container as secret keys, and the JSON credentials file is passed to the container using a Secret volume. A Service is also created to allow you to connect to the Wordpress instance from the internet. | ||
+ | <pre> | ||
+ | kind: Deployment | ||
+ | apiVersion: apps/v1 | ||
+ | metadata: | ||
+ | name: wordpress | ||
+ | labels: | ||
+ | app: wordpress | ||
+ | spec: | ||
+ | selector: | ||
+ | matchLabels: | ||
+ | app: wordpress | ||
+ | template: | ||
+ | metadata: | ||
+ | labels: | ||
+ | app: wordpress | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: web | ||
+ | image: gcr.io/cloud-marketplace/google/wordpress | ||
+ | ports: | ||
+ | - containerPort: 80 | ||
+ | env: | ||
+ | - name: WORDPRESS_DB_HOST | ||
+ | value: 127.0.0.1:3306 | ||
+ | # These secrets are required to start the pod. | ||
+ | # [START cloudsql_secrets] | ||
+ | - name: WORDPRESS_DB_USER | ||
+ | valueFrom: | ||
+ | secretKeyRef: | ||
+ | name: sql-credentials | ||
+ | key: username | ||
+ | - name: WORDPRESS_DB_PASSWORD | ||
+ | valueFrom: | ||
+ | secretKeyRef: | ||
+ | name: sql-credentials | ||
+ | key: password | ||
+ | # [END cloudsql_secrets] | ||
+ | # Change <INSTANCE_CONNECTION_NAME> here to include your GCP | ||
+ | # project, the region of your Cloud SQL instance and the name | ||
+ | # of your Cloud SQL instance. The format is | ||
+ | # $PROJECT:$REGION:$INSTANCE | ||
+ | # [START proxy_container] | ||
+ | - name: cloudsql-proxy | ||
+ | image: gcr.io/cloudsql-docker/gce-proxy:1.11 | ||
+ | command: ["/cloud_sql_proxy", | ||
+ | "-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306", | ||
+ | "-credential_file=/secrets/cloudsql/key.json"] | ||
+ | # [START cloudsql_security_context] | ||
+ | securityContext: | ||
+ | runAsUser: 2 # non-root user | ||
+ | allowPrivilegeEscalation: false | ||
+ | # [END cloudsql_security_context] | ||
+ | volumeMounts: | ||
+ | - name: cloudsql-instance-credentials | ||
+ | mountPath: /secrets/cloudsql | ||
+ | readOnly: true | ||
+ | # [END proxy_container] | ||
+ | # [START volumes] | ||
+ | volumes: | ||
+ | - name: cloudsql-instance-credentials | ||
+ | secret: | ||
+ | secretName: google-credentials | ||
+ | # [END volumes] | ||
+ | --- | ||
+ | apiVersion: "v1" | ||
+ | kind: "Service" | ||
+ | metadata: | ||
+ | name: "wordpress-service" | ||
+ | namespace: "default" | ||
+ | labels: | ||
+ | app: "wordpress" | ||
+ | spec: | ||
+ | ports: | ||
+ | - protocol: "TCP" | ||
+ | port: 80 | ||
+ | selector: | ||
+ | app: "wordpress" | ||
+ | type: "LoadBalancer" | ||
+ | loadBalancerIP: "" | ||
+ | </pre> | ||
+ | |||
+ | The important sections to note in this manifest are: | ||
+ | |||
+ | * In the Wordpress <code>env</code> section, the variable <code>WORDPRESS_DB_HOST</code> is set to <code>127.0.0.1:3306</code>. This will connect to a container in the same Pod listening on port 3306. This is the port that the SQL-Proxy listens on by default. | ||
+ | * In the Wordpress <code>env</code> section, the variables <code>WORDPRESS_DB_USER</code> and <code>WORDPRESS_DB_PASSWORD</code> are set using values stored in the <code>sql-credentials</code> Secret we created in the last section. | ||
+ | * In the <code>cloudsql-proxy</code> container section, the command switch that defines the SQL Connection name, "<code>-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306</code>", contains a placeholder variable that is not configured using a ConfigMap or Secret and so must be updated directly in this example manifest to point to your Cloud SQL instance. | ||
+ | * In the <code>cloudsql-proxy</code> container section, the JSON credential file is mounted using the Secret volume in the directory <code>/secrets/cloudsql/</code>. The command switch "<code>-credential_file=/secrets/cloudsql/key.json</code>" points to the filename in that directory that we specified when creating the <code>google-credentials</code> Secret. | ||
+ | * The Service section at the end creates an external LoadBalancer called "wordpress-service" that allows the application to be accessed from external internet addresses. | ||
+ | |||
+ | Use sed to update the placeholder variable for the SQL Connection name to the instance name of your Cloud SQL instance. | ||
+ | <pre> | ||
+ | sed -i 's/<INSTANCE_CONNECTION_NAME>/'"${SQL_NAME}"'/g' \ | ||
+ | sql-proxy.yaml | ||
+ | </pre> | ||
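+ | |||
+ | You can optionally confirm that the placeholder was replaced before deploying: | ||
+ | <pre> | ||
+ | # Should show your instance connection name instead of <INSTANCE_CONNECTION_NAME> | ||
+ | $ grep "instances=" sql-proxy.yaml | ||
+ | </pre> | ||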
+ | |||
+ | * Deploy the application: | ||
+ | <pre> | ||
+ | $ kubectl apply -f sql-proxy.yaml | ||
+ | |||
+ | $ kubectl get deployment wordpress | ||
+ | NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE | ||
+ | wordpress 1 1 1 1 30s | ||
+ | </pre> | ||
+ | |||
+ | Repeat the above command until you see that one instance is available. | ||
+ | |||
+ | * List the services in your GKE cluster: | ||
+ | <pre> | ||
+ | $ kubectl get services | ||
+ | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE | ||
+ | kubernetes ClusterIP 10.12.0.1 <none> 443/TCP 20m | ||
+ | wordpress-service LoadBalancer 10.12.7.147 <pending> 80:30239/TCP 55s | ||
+ | </pre> | ||
+ | |||
+ | The external LoadBalancer IP address for the wordpress-service is the address you use to connect to your Wordpress blog. Repeat the command until you get an external address. | ||
+ | |||
+ | ; Connect to your Wordpress instance | ||
+ | |||
+ | * Open a new browser tab and connect to your Wordpress site using the external LoadBalancer IP address. This will start the initial Wordpress installation wizard. | ||
+ | * Select English (United States) and click Continue. | ||
+ | * Enter a sample name for the Site Title. | ||
+ | * Enter a Username and Password to administer the site. | ||
+ | * Enter an email address. | ||
+ | |||
+ | None of these values are particularly important, you will not need to use them. | ||
+ | |||
+ | * Click Install Wordpress. | ||
+ | After a few seconds you will see the Success! Notification. You can log in if you wish to explore the Wordpress admin interface but it is not required for the lab. | ||
+ | |||
+ | The initialization process has created new database tables and data in the wordpress database on your Cloud SQL instance. You will now validate that these new database tables have been created using the SQL proxy container. | ||
+ | |||
+ | * Connect to your Cloud SQL instance: | ||
+ | <pre> | ||
+ | $ gcloud sql connect sql-instance | ||
+ | </pre> | ||
+ | |||
+ | When prompted to enter the root password press enter. The root SQL user password is blank by default. | ||
+ | The MySQL [(none)]> prompt appears indicating that you are now connected to the Cloud SQL instance using the MySQL client. | ||
+ | <pre> | ||
+ | MySQL [(none)]> use wordpress; | ||
+ | MySQL [wordpress]> show tables; | ||
+ | </pre> | ||
+ | This will now show a number of new database tables that were created when Wordpress was initialized demonstrating that the sidecar SQL Proxy container was configured correctly. | ||
+ | |||
+ | <pre> | ||
+ | MySQL [wordpress]> show tables; | ||
+ | +-----------------------+ | ||
+ | | Tables_in_wordpress | | ||
+ | +-----------------------+ | ||
+ | | wp_commentmeta | | ||
+ | | wp_comments | | ||
+ | | wp_links | | ||
+ | | wp_options | | ||
+ | | wp_postmeta | | ||
+ | | wp_posts | | ||
+ | | wp_term_relationships | | ||
+ | | wp_term_taxonomy | | ||
+ | | wp_termmeta | | ||
+ | | wp_terms | | ||
+ | | wp_usermeta | | ||
+ | | wp_users | | ||
+ | +-----------------------+ | ||
+ | 12 rows in set (0.04 sec) | ||
+ | </pre> | ||
+ | |||
+ | * List all of the Wordpress user table entries: | ||
+ | <pre> | ||
+ | MySQL [wordpress]> select * from wp_users; | ||
+ | </pre> | ||
+ | |||
+ | This will list the database record for the Wordpress admin account showing the email you chose when initializing Wordpress. | ||
+ | |||
+ | * Exit the MySQL client: | ||
+ | <pre> | ||
+ | MySQL [wordpress]> exit; | ||
+ | </pre> | ||
==External links== | ==External links== |
Latest revision as of 06:57, 9 September 2021
Google Kubernetes Engine (GKE) is a managed, production-ready environment for deploying containerized applications in Kubernetes.
Contents
- 1 Deployments
- 2 Jobs and CronJobs
- 3 Cluster scaling
- 4 Configuring Pod Autoscaling and NodePools
- 5 Managing node pools
- 6 Deploying Kubernetes Engine via Helm Charts
- 7 Network security
- 8 Creating Services and Ingress Resources
- 9 Load balancing objects in GKE
- 10 Persistent Data and Storage
- 11 Configuring Persistent Storage for Kubernetes Engine
- 12 StatefulSets
- 13 ConfigMaps and Secrets
- 14 Access Control and Security in Kubernetes and Google Kubernetes Engine (GKE)
- 15 Implementing Role-Based Access Control With Kubernetes Engine
- 16 Pod security policies
- 17 Stackdriver
- 18 Using Cloud SQL with Kubernetes Engine
- 19 External links
Deployments
A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, .spec.template
) is changed, for example, if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.
- Trigger a deployment rollout
- To update the version of nginx in the deployment, execute the following command:
$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record $ kubectl rollout status deployment.v1.apps/nginx-deployment $ kubectl rollout history deployment nginx-deployment
- Trigger a deployment rollback
To roll back an object's rollout, you can use the kubectl rollout undo
command.
To roll back to the previous version of the nginx deployment, execute the following command:
$ kubectl rollout undo deployments nginx-deployment
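You can also roll back to a specific revision from the rollout history rather than just the previous one; for example (revision 2 here is only an illustration):
$ kubectl rollout undo deployment nginx-deployment --to-revision=2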
- View the updated rollout history of the deployment.
$ kubectl rollout history deployment nginx-deployment deployments "nginx-deployment" REVISION CHANGE-CAUSE 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true 3 <none>
- View the details of the latest deployment revision:
$ kubectl rollout history deployment/nginx-deployment --revision=3
The output should look like the example. Your output might not be an exact match but it will show that the current revision has rolled back to nginx:1.7.9
.
deployments "nginx-deployment" with revision #3 Pod Template: Labels: app=nginx pod-template-hash=3123191453 Containers: nginx: Image: nginx:1.7.9 Port: 80/TCP Host Port: 0/TCP Environment: <none> Mounts: <none> Volumes: <none>
Perform a canary deployment
A canary deployment is a separate deployment used to test a new version of your application. A single service targets both the canary and the normal deployments. And it can direct a subset of users to the canary version to mitigate the risk of new releases. The manifest file nginx-canary.yaml
that is provided for you deploys a single pod running a newer version of Nginx than your main deployment. In this task, you create a canary deployment using this new deployment file.
apiVersion: apps/v1 kind: Deployment metadata: name: nginx-canary labels: app: nginx spec: replicas: 1 selector: matchLabels: app: nginx template: metadata: labels: app: nginx track: canary Version: 1.9.1 spec: containers: - name: nginx image: nginx:1.9.1 ports: - containerPort: 80
The manifest for the Nginx Service you deployed in the previous task uses a label selector to target the Pods with the app: nginx
label. Both the normal deployment and this new canary deployment have the app: nginx
label. Inbound connections will be distributed by the service to both the normal and canary deployment Pods. The canary deployment has fewer replicas (Pods) than the normal deployment, and thus it is available to fewer users than the normal deployment.
- Create the canary deployment based on the configuration file.
$ kubectl apply -f nginx-canary.yaml
When the deployment is complete, verify that both the nginx and the nginx-canary deployments are present.
$ kubectl get deployments
Switch back to the browser tab that is connected to the external LoadBalancer service IP and refresh the page. You should continue to see the standard "Welcome to nginx" page.
Switch back to the Cloud Shell and scale down the primary deployment to 0 replicas.
$ kubectl scale --replicas=0 deployment nginx-deployment
Verify that the only running replica is now the Canary deployment:
$ kubectl get deployments
Switch back to the browser tab that is connected to the external LoadBalancer service IP and refresh the page. You should continue to see the standard "Welcome to nginx" page showing that the Service is automatically balancing traffic to the canary deployment.
Note: Session affinity
The Service configuration used in the lab does not ensure that all requests from a single client will always connect to the same Pod. Each request is treated separately and can connect to either the normal Nginx deployment or to the nginx-canary deployment. This potential to switch between different versions may cause problems if there are significant changes in functionality in the canary release. To prevent this you can set the sessionAffinity
field to ClientIP
in the specification of the service if you need a client's first request to determine which Pod will be used for all subsequent connections.
For example:
apiVersion: v1 kind: Service metadata: name: nginx spec: type: LoadBalancer sessionAffinity: ClientIP selector: app: nginx ports: - protocol: TCP port: 60000 targetPort: 80
Jobs and CronJobs
- Simple example:
$ kubectl run pi --image perl --restart Never -- perl -Mbignum=bpi -wle 'print bpi(2000)'
- Parallel Job with fixed completion count
$ cat << EOF > my-app-job.yaml apiVersion: batch/v1 kind: Job metadata: name: my-app-job spec: completions: 3 parallelism: 2 template: spec: [...] EOF
Two useful fields for bounding a Job's execution are backoffLimit (the number of retries before the Job is marked as failed) and activeDeadlineSeconds (the maximum time the Job may run):
spec: backoffLimit: 4 activeDeadlineSeconds: 300
- Example#1
- Create and run a Job
You will create a Job using a sample manifest called example-job.yaml that has been provided for you. This Job computes the value of Pi to 2,000 places and then prints the result.
apiVersion: batch/v1 kind: Job metadata: # Unique key of the Job instance name: example-job spec: template: metadata: name: example-job spec: containers: - name: pi image: perl command: ["perl"] args: ["-Mbignum=bpi", "-wle", "print bpi(2000)"] # Do not restart containers after they exit restartPolicy: Never
To create a Job from this file, execute the following command:
$ kubectl apply -f example-job.yaml $ kubectl describe job Host Port: <none> Command: perl Args: -Mbignum=bpi -wle print bpi(2000) Environment: <none> Mounts: <none> Volumes: <none> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 17s job-controller Created pod: example-job-gtf7w $ kubectl get pods NAME READY STATUS RESTARTS AGE example-job-gtf7w 0/1 Completed 0 43s
- Clean up and delete the Job
When a Job completes, the Job stops creating Pods. The Job API object is not removed when it completes, which allows you to view its status. Pods created by the Job are not deleted, but they are terminated. Retention of the Pods allows you to view their logs and to interact with them.
To get a list of the Jobs in the cluster, execute the following command:
$ kubectl get jobs NAME DESIRED SUCCESSFUL AGE example-job 1 1 2m
To retrieve the log file from the Pod that ran the Job, execute the following command. You must replace [POD-NAME] with the Pod name you recorded in the last task:
$ kubectl logs [POD-NAME] 3.141592653589793238...
The output will show that the job wrote the first two thousand digits of pi to the Pod log.
To delete the Job, execute the following command:
$ kubectl delete job example-job
If you try to query the logs again the command will fail as the Pod can no longer be found.
Define and deploy a CronJob manifest
You can create CronJobs to perform finite, time-related tasks that run once or repeatedly at a time that you specify.
In this section, we will create and run a CronJob, and then clean up and delete the Job.
- Create and run a CronJob
The CronJob manifest file example-cronjob.yaml has been provided for you. This CronJob deploys a new container every minute that prints the time, date and "Hello, World!".
apiVersion: batch/v1beta1 kind: CronJob metadata: name: hello spec: schedule: "*/1 * * * *" jobTemplate: spec: template: spec: containers: - name: hello image: busybox args: - /bin/sh - -c - date; echo "Hello, World!" restartPolicy: OnFailure
Note
CronJobs use the required schedule field, which accepts time in the Unix standard crontab format. All CronJob times are in UTC:
- The first value indicates the minute (between 0 and 59).
- The second value indicates the hour (between 0 and 23).
- The third value indicates the day of the month (between 1 and 31).
- The fourth value indicates the month (between 1 and 12).
- The fifth value indicates the day of the week (between 0 and 6).
The schedule field also accepts
*
and?
as wildcard values. Combining/
with ranges specifies that the task should repeat at a regular interval. In the example,*/1 * * * *
indicates that the task should repeat every minute of every day of every month.
To create a Job from this file, execute the following command:
$ kubectl apply -f example-cronjob.yaml
To check the status of this Job, execute the following command, where [job_name] is the name of your job:
$ kubectl describe job [job_name] Image: busybox Port: <none> Host Port: <none> Args: /bin/sh -c date; echo "Hello, World!" Environment: <none> Mounts: <none> Volumes: <none> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 35s job-controller Created pod: hello-1565824980-sgdnn
View the output of the Job by querying the logs for the Pod. Replace <pod-name>
with the name of the Pod you recorded in the last step.
$ kubectl logs <pod-name> Wed Aug 14 23:23:03 UTC 2019 Hello, World!
To view all job resources in your cluster, including all of the Pods created by the CronJob which have completed, execute the following command:
$ kubectl get jobs NAME COMPLETIONS DURATION AGE hello-1565824980 1/1 2s 2m29s hello-1565825040 1/1 2s 89s hello-1565825100 1/1 2s 29s
Your job names might be different from the example output. By default, Kubernetes sets the Job history limits so that only the last three successful Jobs and the last failed Job are retained, so this list will only contain the most recent three or four Jobs.
- Clean up and delete the Job
In order to stop the CronJob and clean up the Jobs associated with it you must delete the CronJob.
To delete all these jobs, execute the following command:
$ kubectl delete cronjob hello
To verify that the jobs were deleted, execute the following command:
$ kubectl get jobs No resources found.
All the Jobs were removed.
Cluster scaling
Think of cluster scaling as a coarse-grained operation that should happen infrequently, and Pod scaling with Deployments as a fine-grained operation that should happen frequently.
- Pod conditions that prevent node deletion
- Not run by a controller
- e.g., Pods that are not set in a Deployment, ReplicaSet, Job, etc.
- Has local storage
- Restricted by constraint rules
- Pods that have
cluster-autoscaler.kubernetes.io/safe-to-evict
annotation set to False - Pods that have the
RestrictivePodDisruptionBudget
- At the node-level, if the
kubernetes.io/scale-down-disabled
annotation is set to True
- gcloud
- Create a cluster with autoscaling enabled:
$ gcloud container clusters create <cluster-name> \ --num-nodes 30 \ --enable-autoscaling \ --min-nodes 15 \ --max-nodes 50 \ [--zone <compute-zone>]
- Add a node pool with autoscaling enabled:
$ gcloud container node-pools create <pool-name> \ --cluster <cluster-name> \ --enable-autoscaling \ --min-nodes 15 \ --max-nodes 50 \ [--zone <compute-zone>]
- Enable autoscaling for an existing node pool:
$ gcloud container clusters update \ <cluster-name> \ --enable-autoscaling \ --min-nodes 1 \ --max-nodes 10 \ --zone <compute-zone> \ --node-pool <pool-name>
- Disable autoscaling for an existing node pool:
$ gcloud container clusters update \ <cluster-name> \ --no-enable-autoscaling \ --node-pool <pool-name> \ [--zone <compute-zone> --project <project-id>]
Configuring Pod Autoscaling and NodePools
Create a GKE cluster
In Cloud Shell, type the following command to create environment variables for the GCP zone and cluster name that will be used to create the cluster for this lab.
export my_zone=us-central1-a export my_cluster=standard-cluster-1
- Configure tab completion for the kubectl command-line tool.
source <(kubectl completion bash)
- Create a VPC-native Kubernetes cluster:
$ gcloud container clusters create $my_cluster \ --num-nodes 2 --enable-ip-alias --zone $my_zone
- Configure access to your cluster for kubectl:
$ gcloud container clusters get-credentials $my_cluster --zone $my_zone
- Deploy a sample web application to your GKE cluster
Deploy a sample application to your cluster using the web.yaml deployment file that has been created for you:
apiVersion: extensions/v1beta1 kind: Deployment metadata: name: web spec: replicas: 1 selector: matchLabels: run: web template: metadata: labels: run: web spec: containers: - image: gcr.io/google-samples/hello-app:1.0 name: web ports: - containerPort: 8080 protocol: TCP
This manifest creates a deployment using a sample web application container image that listens on an HTTP server on port 8080.
- To create a deployment from this file, execute the following command:
$ kubectl create -f web.yaml --save-config 
- Create a service resource of type NodePort on port 8080 for the web deployment:
$ kubectl expose deployment web --target-port=8080 --type=NodePort 
- Verify that the service was created and that a node port was allocated:
$ kubectl get service web NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE web NodePort 10.12.6.154 <none> 8080:30972/TCP 5m4s
Your IP address and port number might be different from the example output.
Configure autoscaling on the cluster
In this section, we will configure the cluster to automatically scale the sample application that we deployed earlier.
- Configure autoscaling
- Get the list of deployments to determine whether your sample web application is still running:
$ kubectl get deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE web 1 1 1 1 94s
- To configure your sample application for autoscaling (and to set the maximum number of replicas to four and the minimum to one, with a CPU utilization target of 1%), execute the following command:
$ kubectl autoscale deployment web --max 4 --min 1 --cpu-percent 1
When you use kubectl autoscale, you specify a maximum and minimum number of replicas for your application, as well as a CPU utilization target.
- Get the list of deployments to verify that there is still only one deployment of the web application:
$ kubectl get deployment
- Inspect the Horizontal Pod Autoscaler object
The kubectl autoscale command you used in the previous task creates a HorizontalPodAutoscaler object that targets a specified resource, called the scale target, and scales it as needed. The autoscaler periodically adjusts the number of replicas of the scale target to match the average CPU utilization that you specify when creating the autoscaler.
- To get the list of Horizontal Pod Autoscaler resources, execute the following command:
$ kubectl get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE web Deployment/web 1%/1% 1 4 1 50s
- To inspect the configuration of Horizontal Pod Autoscaler in YAML form, execute the following command:
$ kubectl describe horizontalpodautoscaler web Name: web Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Thu, 15 Aug 2019 12:32:37 -0700 Reference: Deployment/web Metrics: ( current / target ) resource cpu on pods (as a percentage of request): 1% (1m) / 1% Min replicas: 1 Max replicas: 4 Deployment pods: 1 current / 1 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale recommended size matches current size ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request) ScalingLimited False DesiredWithinRange the desired count is within the acceptable range Events: <none>
- Test the autoscale configuration
You need to create a heavy load on the web application to force it to scale out. You create a configuration file that defines a deployment of four containers that run an infinite loop of HTTP queries against the sample application web server.
You create the load on your web application by deploying the loadgen application using the loadgen.yaml
file that has been provided for you.
<pre>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loadgen
spec:
  replicas: 4
  selector:
    matchLabels:
      app: loadgen
  template:
    metadata:
      labels:
        app: loadgen
    spec:
      containers:
      - name: loadgen
        image: k8s.gcr.io/busybox
        args:
        - /bin/sh
        - -c
        - while true; do wget -q -O- http://web:8080; done
</pre>
- Get the list of deployments to verify that the load generator is running:
$ kubectl get deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE loadgen 4 4 4 4 11s web 1 1 1 1 9m9s
- Inspect the Horizontal Pod Autoscaler:
$ kubectl get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE web Deployment/web 20%/1% 1 4 1 7m58s
Once the loadgen Pods start to generate traffic, the web deployment CPU utilization begins to increase. In the example output, the target is now at 20% CPU utilization, well above the 1% CPU threshold.
- After a few minutes, inspect the Horizontal Pod Autoscaler again:
$ kubectl get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE web Deployment/web 68%/1% 1 4 4 9m39s $ kubectl get deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE loadgen 4 4 4 4 2m44s web 4 4 4 3 11m
- To stop the load on the web application, scale the loadgen deployment to zero replicas.
$ kubectl scale deployment loadgen --replicas 0
- Get the list of deployments to verify that loadgen has scaled down.
$ kubectl get deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE loadgen 0 0 0 0 3m25s web 4 4 4 3 12m
The loadgen deployment should have zero replicas.
Wait 2 to 3 minutes, and then get the list of deployments again to verify that the web application has scaled down to the minimum value of 1 replica that you configured when you deployed the autoscaler.
$ kubectl get deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE loadgen 0 0 0 0 4m web 1 1 1 1 15m
You should now have one deployment of the web application.
Managing node pools
In this section, we will create a new pool of nodes using preemptible instances, and then will constrain the web deployment to run only on the preemptible nodes.
- Add a node pool
- To deploy a new node pool with two preemptible VM instances, execute the following command:
$ gcloud container node-pools create "temp-pool-1" \ --cluster=$my_cluster --zone=$my_zone \ --num-nodes "2" --node-labels=temp=true --preemptible
If you receive an error that no preemptible instances are available, you can remove the --preemptible option to proceed with the lab.
- Get the list of nodes to verify that the new nodes are ready:
$ kubectl get nodes NAME STATUS ROLES AGE VERSION gke-standard-cluster-1-default-pool-61fba731-01mc Ready <none> 21m v1.12.8-gke.10 gke-standard-cluster-1-default-pool-61fba731-bvfx Ready <none> 21m v1.12.8-gke.10 gke-standard-cluster-1-temp-pool-1-e8966c96-nccc Ready <none> 46s v1.12.8-gke.10 gke-standard-cluster-1-temp-pool-1-e8966c96-pk21 Ready <none> 43s v1.12.8-gke.10
You should now have 4 nodes. (Your names will be different from the example output.)
All the nodes that you added have the temp=true label because you set that label when you created the node-pool. This label makes it easier to locate and configure these nodes.
- To list only the nodes with the temp=true label, execute the following command:
$ kubectl get nodes -l temp=true NAME STATUS ROLES AGE VERSION gke-standard-cluster-1-temp-pool-1-e8966c96-nccc Ready <none> 2m1s v1.12.8-gke.10 gke-standard-cluster-1-temp-pool-1-e8966c96-pk21 Ready <none> 118s v1.12.8-gke.10
- Control scheduling with taints and tolerations
To prevent the scheduler from running a Pod on the temporary nodes, you add a taint to each of the nodes in the temp pool. Taints are implemented as a key-value pair with an effect (such as NoExecute) that determines whether Pods can run on a certain node. Only Pods that are configured to tolerate the taint's key-value pair are scheduled to run on these nodes.
To add a taint to each of the newly created nodes, execute the following command.
You can use the temp=true
label to apply this change across all the new nodes simultaneously.
<pre>
$ kubectl taint node -l temp=true nodetype=preemptible:NoExecute
node/gke-standard-cluster-1-temp-pool-1-e8966c96-nccc tainted
node/gke-standard-cluster-1-temp-pool-1-e8966c96-pk21 tainted

$ kubectl describe nodes | grep ^Taints
Taints:             <none>
Taints:             <none>
Taints:             nodetype=preemptible:NoExecute
Taints:             nodetype=preemptible:NoExecute
</pre>
To allow application Pods to execute on these tainted nodes, you must add a tolerations key to the deployment configuration.
Edit the web.yaml
file to add the following key in the template's spec
section:
tolerations: - key: "nodetype" operator: Equal value: "preemptible"
The spec
section of the file should look like the following:
... spec: tolerations: - key: "nodetype" operator: Equal value: "preemptible" containers: - image: gcr.io/google-samples/hello-app:1.0 name: web ports: - containerPort: 8080 protocol: TCP
To force the web deployment to use the new node-pool add a nodeSelector
key in the template's spec section. This is parallel to the tolerations key you just added.
nodeSelector: temp: "true"
Note: GKE adds a custom label to each node called cloud.google.com/gke-nodepool
, which contains the name of the node-pool that the node belongs to. This key can also be used as part of a nodeSelector
to ensure Pods are only deployed to suitable nodes.
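For example, a nodeSelector based on that label could target the temp-pool-1 node pool created earlier. This is a sketch to be used instead of (not in addition to) the temp: "true" selector:
<pre>
nodeSelector:
  cloud.google.com/gke-nodepool: temp-pool-1
</pre>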
The full web.yaml
deployment should now look as follows:
<pre>
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      run: web
  template:
    metadata:
      labels:
        run: web
    spec:
      tolerations:
      - key: "nodetype"
        operator: Equal
        value: "preemptible"
      nodeSelector:
        temp: "true"
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        name: web
        ports:
        - containerPort: 8080
          protocol: TCP
</pre>
To apply this change, execute the following command:
kubectl apply -f web.yaml
If you have problems editing this file successfully, you can use the pre-prepared sample file called web-tolerations.yaml
instead.
- Get the list of Pods:
$ kubectl get pods NAME READY STATUS RESTARTS AGE web-7cb566bccd-pkfst 1/1 Running 0 1m
To confirm the change, inspect the running web Pod(s) using the following command:
$ kubectl describe pods -l run=web
A Tolerations section with nodetype=preemptible
in the list should appear near the bottom of the (truncated) output.
... Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s nodetype=preemptible Events: ...
The output confirms that the Pods will tolerate the taint value on the new preemptible nodes, and thus that they can be scheduled to execute on those nodes.
To force the web application to scale out again, scale the loadgen deployment back to four replicas:
$ kubectl scale deployment loadgen --replicas 4
You could scale just the web application directly but using the loadgen app will allow you to see how the different taint, toleration and nodeSelector
settings that apply to the web and loadgen applications affect which nodes they are scheduled on.
Get the list of Pods using the wide output format to show the nodes running the Pods:
$ kubectl get pods -o wide
This shows that the loadgen app is running only on the default-pool nodes, while the web app is running only on the preemptible nodes in temp-pool-1.
The taint setting prevents Pods without a matching toleration from running on the preemptible nodes, so the loadgen application only runs on the default pool. The toleration setting allows the web application to run on the preemptible nodes, and the nodeSelector forces the web application Pods to run on those nodes.
<pre>
NAME         READY   STATUS    [...]   NODE
loadgen-x0   1/1     Running   [...]   gke-xx-default-pool-y0
loadgen-x1   1/1     Running   [...]   gke-xx-default-pool-y2
loadgen-x3   1/1     Running   [...]   gke-xx-default-pool-y3
loadgen-x4   1/1     Running   [...]   gke-xx-default-pool-y4
web-x1       1/1     Running   [...]   gke-xx-temp-pool-1-z1
web-x2       1/1     Running   [...]   gke-xx-temp-pool-1-z2
web-x3       1/1     Running   [...]   gke-xx-temp-pool-1-z3
web-x4       1/1     Running   [...]   gke-xx-temp-pool-1-z4
</pre>
Deploying Kubernetes Engine via Helm Charts
Ensure your user account has the cluster-admin role in your cluster.
$ kubectl create clusterrolebinding user-admin-binding \ --clusterrole=cluster-admin \ --user=$(gcloud config get-value account)
- Create a Kubernetes service account that Tiller (the server side of Helm) can use for deploying charts.
$ kubectl create serviceaccount tiller --namespace kube-system
- Grant the Tiller service account the cluster-admin role in your cluster:
$ kubectl create clusterrolebinding tiller-admin-binding \ --clusterrole=cluster-admin \ --serviceaccount=kube-system:tiller
- Execute the following commands to initialize Helm using the service account:
<pre>
$ helm init --service-account=tiller
$ kubectl -n kube-system get pods | grep ^tiller
tiller-deploy-8548d8bd7c-l548r   1/1   Running   0   18s
$ helm repo update
$ helm version
Client: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}
</pre>
Execute the following command to deploy a set of resources to create a Redis service on the active context cluster:
$ helm install stable/redis
A Helm chart is a package of resource configuration files, along with configurable parameters. This single command deployed a collection of resources.
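Chart parameters can be overridden at install time. The commands below are only a sketch: the parameter name and the values file are illustrative, so check the output of helm inspect stable/redis (shown later) for the parameters the chart actually supports.
<pre>
# Override a single chart parameter on the command line (parameter name is illustrative):
$ helm install stable/redis --set usePassword=false

# Or supply several overrides from a (hypothetical) values file:
$ helm install stable/redis -f my-values.yaml
</pre>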
A Kubernetes Service defines a set of Pods and a stable endpoint by which network traffic can access them. In Cloud Shell, execute the following command to view Services that were deployed through the Helm chart:
$ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.12.0.1 <none> 443/TCP 3m24s opining-wolverine-redis-headless ClusterIP None <none> 6379/TCP 11s opining-wolverine-redis-master ClusterIP 10.12.5.246 <none> 6379/TCP 11s opining-wolverine-redis-slave ClusterIP 10.12.14.196 <none> 6379/TCP 11s
A Kubernetes StatefulSet manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods. In Cloud Shell, execute the following commands to view a StatefulSet that was deployed through the Helm chart:
$ kubectl get statefulsets NAME DESIRED CURRENT AGE opining-wolverine-redis-master 1 1 59s opining-wolverine-redis-slave 2 2 59s
A Kubernetes ConfigMap lets you store and manage configuration artifacts, so that they are decoupled from container-image content. In Cloud Shell, execute the following commands to view ConfigMaps that were deployed through the Helm chart:
$ kubectl get configmaps NAME DATA AGE opining-wolverine-redis 3 95s opining-wolverine-redis-health 6 95s
A Kubernetes Secret, like a ConfigMap, lets you store and manage configuration artifacts, but it is specifically intended for sensitive information such as passwords and authorization keys. In Cloud Shell, execute the following command to view the Secret that was deployed through the Helm chart:
$ kubectl get secrets NAME TYPE DATA AGE opining-wolverine-redis Opaque 1 2m5s
You can inspect the Helm chart directly using the following command:
$ helm inspect stable/redis
If you want to see the templates that the Helm chart deploys you can use the following command:
$ helm install stable/redis --dry-run --debug
- Test Redis functionality
In this task, you store and retrieve values in the new Redis deployment running in your Kubernetes Engine cluster.
Execute the following command to store the service IP address for the Redis cluster in an environment variable:
$ export REDIS_IP=$(kubectl get services -l app=redis -o json | jq -r '.items[].spec | select(.selector.role=="master")' | jq -r '.clusterIP')
Retrieve the Redis password and store it in an environment variable:
$ export REDIS_PW=$(kubectl get secret -l app=redis -o jsonpath="{.items[0].data.redis-password}" | base64 --decode)
- Display the Redis cluster address and password:
$ echo Redis Cluster Address : $REDIS_IP $ echo Redis auth password : $REDIS_PW
- Open an interactive shell to a temporary Pod, passing in the cluster address and password as environment variables:
$ kubectl run redis-test --rm --tty -i --restart='Never' \ --env REDIS_PW=$REDIS_PW \ --env REDIS_IP=$REDIS_IP \ --image docker.io/bitnami/redis:4.0.12 -- bash
- Connect to the Redis cluster:
# redis-cli -h $REDIS_IP -a $REDIS_PW
- Set a key value:
set mykey this_amazing_value
This will display OK if successful.
- Retrieve the key value:
get mykey
This will return the value you stored indicating that the Redis cluster can successfully store and retrieve data.
Network security
Network policy
A Pod-level firewall restricting access to other Pods and Services. (Disabled by default in GKE.)
Must be enabled:
- Requires at least 2 nodes of n1-standard-1 or higher (recommended minimum of 3 nodes)
- Requires nodes to be recreated
- Enable network policy for a new cluster:
$ gcloud container clusters create <name> \ --enable-network-policy
- Enable a network policy for an existing cluster:
<pre>
$ gcloud container clusters update <name> \
    --update-addons=NetworkPolicy=ENABLED
$ gcloud container clusters update <name> \
    --enable-network-policy
</pre>
- Disabling a network policy:
$ gcloud container clusters create <name> \ --no-enable-network-policy
- Writing a network policy
<pre>
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: demo-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: demo-app
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
</pre>
- Network policy defaults
- Pros:
- Limits "attack surface" of Pods in your cluster.
- Cons:
- A lot of work to manage (use Istio instead)
Deny all ingress traffic:
<pre>
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
</pre>
Deny all egress traffic:
<pre>
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Egress
</pre>
Deny all ingress and egress traffic:
<pre>
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
</pre>
Allow all ingress traffic:
<pre>
metadata:
  name: allow-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - {}
</pre>
Allow all egress traffic:
<pre>
metadata:
  name: allow-all
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - {}
</pre>
Setup a private GKE cluster
In the Cloud Shell, enter the following command to review the details of your new cluster:
$ gcloud container clusters describe private-cluster --region us-central1-a
- The following values appear only under the private cluster:
privateEndpoint
- an internal IP address. Nodes use this internal IP address to communicate with the cluster master.
publicEndpoint
- an external IP address. External services and administrators can use the external IP address to communicate with the cluster master.
- You have several options to lock down your cluster to varying degrees:
- The whole cluster can have external access.
- The whole cluster can be private.
- The nodes can be private while the cluster master is public, and you can limit which external networks are authorized to access the cluster master.
Without public IP addresses, code running on the nodes cannot access the public Internet unless you configure a NAT gateway such as Cloud NAT.
You might use private clusters to provide services such as internal APIs that are meant only to be accessed by resources inside your network. For example, the resources might be private tools that only your company uses. Or they might be backend services accessed by your frontend services, and perhaps only those frontend services are accessed directly by external customers or users. In such cases, private clusters are a good way to reduce the surface area of attack for your application.
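As a sketch of how such a cluster might be created (the master CIDR and authorized network below are placeholders; adjust them for your own network):
<pre>
$ gcloud container clusters create private-cluster \
    --zone us-central1-a \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --enable-master-authorized-networks \
    --master-authorized-networks [YOUR_EXTERNAL_RANGE]
</pre>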
Restrict incoming traffic to Pods
First, we will create a GKE cluster to use for the demos below.
- Create a GKE cluster
- In Cloud Shell, type the following command to set the environment variable for the zone and cluster name:
export my_zone=us-central1-a export my_cluster=standard-cluster-1
- Configure kubectl tab completion in Cloud Shell:
source <(kubectl completion bash)
- Create a Kubernetes cluster (note that this command adds the additional flag
--enable-network-policy
. This flag allows this cluster to use cluster network policies):
$ gcloud container clusters create $my_cluster \ --num-nodes 2 \ --enable-ip-alias \ --zone $my_zone \ --enable-network-policy
- Configure access to your cluster for the
kubectl
command-line tool:
$ gcloud container clusters get-credentials $my_cluster --zone $my_zone
Run a simple web server application with the label app=hello
, and expose the web application internally in the cluster:
$ kubectl run hello-web --labels app=hello \ --image=gcr.io/google-samples/hello-app:1.0 --port 8080 --expose
- Restrict incoming traffic to Pods
- The following
NetworkPolicy
manifest file defines an ingress policy that allows access to Pods labeledapp: hello
from Pods labeledapp: foo
:
$ cat << EOF > hello-allow-from-foo.yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: hello-allow-from-foo spec: policyTypes: - Ingress podSelector: matchLabels: app: hello ingress: - from: - podSelector: matchLabels: app: foo EOF $ kubectl apply -f hello-allow-from-foo.yaml $ kubectl get networkpolicy NAME POD-SELECTOR AGE hello-allow-from-foo app=hello 7s
- Validate the ingress policy
- Run a temporary Pod called
test-1
with the labelapp=foo
and get a shell in the Pod:
$ kubectl run test-1 --labels app=foo --image=alpine --restart=Never --rm --stdin --tty
The kubectl switches used here in conjunction with the run command are important to note:
--stdin
(alternatively-i
)- creates an interactive session attached to STDIN on the container.
--tty
(alternatively-t
)- allocates a TTY for each container in the pod.
--rm
- instructs Kubernetes to treat this as a temporary Pod that will be removed as soon as it completes its startup task. As this is an interactive session it will be removed as soon as the user exits the session.
--label
(alternatively-l
)- adds a set of labels to the pod.
--restart
- defines the restart policy for the Pod
- Make a request to the
hello-web:8080
endpoint to verify that the incoming traffic is allowed:
/ # wget -qO- --timeout=2 http://hello-web:8080 Hello, world! Version: 1.0.0 Hostname: hello-web-75f66f69d-qgzjb / #
- Now, run a different Pod using the same Pod name but using a label, app=other, that does not match the podSelector in the active network policy. This Pod should not have the ability to access the hello-web application:
$ kubectl run test-1 --labels app=other --image=alpine --restart=Never --rm --stdin --tty
- Make a request to the hello-web:8080 endpoint to verify that the incoming traffic is not allowed:
/ # wget -qO- --timeout=2 http://hello-web:8080 wget: download timed out / #
The request times out.
Restrict outgoing traffic from the Pods
You can restrict outgoing (egress) traffic as you do incoming traffic. However, in order to query internal hostnames (such as hello-web
) or external hostnames (such as www.example.com
), you must allow DNS resolution in your egress network policies. DNS traffic occurs on port 53, using TCP and UDP protocols.
The following NetworkPolicy manifest file defines a policy that permits Pods with the label app: foo
to communicate with Pods labeled app: hello
on any port number, and allows the Pods labeled app: foo
to communicate to any computer on UDP port 53, which is used for DNS resolution. Without the DNS port open, you will not be able to resolve the hostnames:
$ cat << EOF > foo-allow-to-hello.yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: foo-allow-to-hello spec: policyTypes: - Egress podSelector: matchLabels: app: foo egress: - to: - podSelector: matchLabels: app: hello - to: ports: - protocol: UDP port: 53 EOF $ kubectl apply -f foo-allow-to-hello.yaml $ kubectl get networkpolicy NAME POD-SELECTOR AGE foo-allow-to-hello app=foo 7s hello-allow-from-foo app=hello 5m
- Validate the egress policy
- Deploy a new web application called
hello-web-2
and expose it internally in the cluster:
$ kubectl run hello-web-2 --labels app=hello-2 \ --image=gcr.io/google-samples/hello-app:1.0 --port 8080 --expose
- Run a temporary Pod with the
app=foo
label and get a shell prompt inside the container:
$ kubectl run test-3 --labels app=foo --image=alpine --restart=Never --rm --stdin --tty
- Verify that the Pod can establish connections to
hello-web:8080
:
/ # wget -qO- --timeout=2 http://hello-web:8080 Hello, world! Version: 1.0.0 Hostname: hello-web-75f66f69d-qgzjb / #
- Verify that the Pod cannot establish connections to
hello-web-2:8080
wget -qO- --timeout=2 http://hello-web-2:8080
This fails because none of the Network policies you have defined allow traffic to Pods labelled app: hello-2
.
- Verify that the Pod cannot establish connections to external websites, such as
www.example.com
:
wget -qO- --timeout=2 http://www.example.com
This fails because the network policies do not allow external http traffic (tcp port 80).
/ # ping -c3 8.8.8.8 PING 8.8.8.8 (8.8.8.8): 56 data bytes --- 8.8.8.8 ping statistics --- 3 packets transmitted, 0 packets received, 100% packet loss
Creating Services and Ingress Resources
- Create Pods and services to test DNS resolution
- Create a service called
dns-demo
with two sample application Pods calleddns-demo-1
anddns-demo-2
:
$ cat << EOF > dns-demo.yaml apiVersion: v1 kind: Service metadata: name: dns-demo spec: selector: name: dns-demo clusterIP: None ports: - name: dns-demo port: 1234 targetPort: 1234 --- apiVersion: v1 kind: Pod metadata: name: dns-demo-1 labels: name: dns-demo spec: hostname: dns-demo-1 subdomain: dns-demo containers: - name: nginx image: nginx --- apiVersion: v1 kind: Pod metadata: name: dns-demo-2 labels: name: dns-demo spec: hostname: dns-demo-2 subdomain: dns-demo containers: - name: nginx image: nginx EOF $ kubectl apply -f dns-demo.yaml $ kubectl get pods NAME READY STATUS RESTARTS AGE dns-demo-1 1/1 Running 0 19s dns-demo-2 1/1 Running 0 19s
- Access Pods and services by FQDN
- Test name resolution for pods and services from the Cloud Shell and from Pods running inside your cluster (note: you can find the IP address for
dns-demo-2
by displaying the details of the Pod):
$ kubectl describe pods dns-demo-2
You will see the IP address in the first section of the output, just below the Status field and before the details of the individual containers:
kubectl describe pods dns-demo-2 Name: dns-demo-2 Namespace: default Priority: 0 PriorityClassName: <none> Node: gke-standard-cluster-1-default-pool-a6c9108e-05m2/10.128.0.5 Start Time: Mon, 19 Aug 2019 16:58:11 -0700 Labels: name=dns-demo Annotations: [...] Status: Running IP: 10.8.2.5 Containers: nginx:
In the example above, the Pod IP address is 10.8.2.5. You can query just the Pod IP address on its own using the following syntax for the kubectl get pod command:
$ echo $(kubectl get pod dns-demo-2 --template={{.status.podIP}}) 10.8.2.5
The format of the FQDN of a Pod is hostname.subdomain.namespace.svc.cluster.local
. The last three pieces (svc.cluster.local
) stay constant in any cluster, however, the first three pieces are specific to the Pod that you are trying to access. In this case, the hostname is dns-demo-2
, the subdomain is dns-demo
, and the namespace is default
, because we did not specify a non-default namespace. The FQDN of the dns-demo-2
Pod is therefore dns-demo-2.dns-demo.default.svc.cluster.local
.
- Ping
dns-demo-2
from your local machine (or from the Cloud Shell):
$ ping dns-demo-2.dns-demo.default.svc.cluster.local ping: dns-demo-2.dns-demo.default.svc.cluster.local: Name or service not known
The ping fails because we are not inside the cluster itself.
To get inside the cluster, open an interactive session to Bash running from dns-demo-1
.
$ kubectl exec -it dns-demo-1 /bin/bash
Now that we are inside a container in the cluster, our commands run from that context. However, we do not have a tool to ping in this container, so the ping command will not work.
- Update apt-get and install a ping tool (from within the container):
root@dns-demo-1:/# apt-get update && apt-get install -y iputils-ping
- Ping dns-demo-2:
root@dns-demo-1:/# ping -c3 dns-demo-2.dns-demo.default.svc.cluster.local PING dns-demo-2.dns-demo.default.svc.cluster.local (10.8.2.5) 56(84) bytes of data. 64 bytes from dns-demo-2.dns-demo.default.svc.cluster.local (10.8.2.5): icmp_seq=1 ttl=62 time=1.46 ms 64 bytes from dns-demo-2.dns-demo.default.svc.cluster.local (10.8.2.5): icmp_seq=2 ttl=62 time=0.397 ms 64 bytes from dns-demo-2.dns-demo.default.svc.cluster.local (10.8.2.5): icmp_seq=3 ttl=62 time=0.387 ms --- dns-demo-2.dns-demo.default.svc.cluster.local ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 16ms rtt min/avg/max/mdev = 0.387/0.748/1.461/0.504 ms
This ping should succeed and report that the target has the IP address you found earlier for the dns-demo-2
Pod.
- Ping the
dns-demo
service's FQDN, instead of a specific Pod inside the service:
ping dns-demo.default.svc.cluster.local
This ping should also succeed, but it will return a response from the FQDN of one of the two dns-demo Pods. This Pod might be either dns-demo-1 or dns-demo-2.
When you deploy applications, your application code runs inside a container in the cluster, and thus your code can access other services by using the FQDNs of those services. This approach is better than using IP addresses or even Pod names because those are more likely to change.
Deploy a sample workload and a ClusterIP service
In this section, we will create a deployment for a set of Pods within the cluster and then expose them using a ClusterIP service.
- Deploy a sample web application to your GKE cluster
- Deploy a sample web application container image that listens on an HTTP server on port 8080:
$ cat << EOF > hello-v1.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: hello-v1 spec: replicas: 3 selector: matchLabels: run: hello-v1 template: metadata: labels: run: hello-v1 name: hello-v1 spec: containers: - image: gcr.io/google-samples/hello-app:1.0 name: hello-v1 ports: - containerPort: 8080 protocol: TCP EOF $ kubectl create -f hello-v1.yaml $ kubectl get deployments NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE hello-v1 3 3 3 3 10s
- Define service types in the manifest
- Deploy a Service using a ClusterIP:
$ cat << EOF > hello-svc.yaml apiVersion: v1 kind: Service metadata: name: hello-svc spec: type: ClusterIP selector: name: hello-v1 ports: - protocol: TCP port: 80 targetPort: 8080 EOF $ kubectl apply -f ./hello-svc.yaml
This manifest defines a ClusterIP service and applies it to Pods that correspond to the selector. In this case, the manifest is applied to the hello-v1
Pods that we deployed. This service will automatically be applied to any other deployments with the name: hello-v1
label.
- Verify that the Service was created and that a Cluster-IP was allocated:
$ kubectl get service hello-svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE hello-svc ClusterIP 10.12.1.159 <none> 80/TCP 29s
No external IP is allocated for this service. Because the Kubernetes Cluster IP addresses are not externally accessible by default, creating this Service does not make your application accessible outside of the cluster.
- Test your application
- Attempt to open an HTTP session to the new Service using the following command:
$ curl hello-svc.default.svc.cluster.local curl: (6) Could not resolve host: hello-svc.default.svc.cluster.local
The connection should fail because that service is not exposed outside of the cluster.
Now, test the Service from inside the cluster using the interactive shell you have running on the dns-demo-1
Pod. Return to your first Cloud Shell window, which is currently redirecting the STDIN and STDOUT of the dns-demo-1
Pod.
- Install curl so you can make calls to web services from the command line:
$ apt-get install -y curl
- Use the following command to test the HTTP connection between the Pods:
$ curl hello-svc.default.svc.cluster.local Hello, world! Version: 1.0.0 Hostname: hello-v1-5574c4bff6-72wzc
This connection should succeed and provide a response similar to the output shown above. Your hostname might be different from the example output.
- Convert the service to use NodePort
In this section, we will convert our existing ClusterIP
service to a NodePort
service and then retest access to the service from inside and outside the cluster.
- Apply a modified version of our previous
hello-svc
Service manifest:
$ cat << EOF > hello-nodeport-svc.yaml apiVersion: v1 kind: Service metadata: name: hello-svc spec: type: NodePort selector: name: hello-v1 ports: - protocol: TCP port: 80 targetPort: 8080 nodePort: 30100 EOF $ kubectl apply -f ./hello-nodeport-svc.yaml
This manifest redefines hello-svc
as a NodePort
service and assigns the service port 30100 on each node of the cluster for that service.
- Verify that the service type has changed to
NodePort
:
$ kubectl get service hello-svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE hello-svc NodePort 10.12.1.159 <none> 80:30100/TCP 5m30s
Note that there is still no external IP allocated for this service.
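Although this lab only tests the service from inside the cluster, a NodePort service can also be reached on port 30100 of any node's IP address. The following is a rough sketch, assuming your nodes have external IP addresses and that you create a (hypothetical) firewall rule named allow-nodeport to permit the traffic:
<pre>
$ gcloud compute firewall-rules create allow-nodeport --allow tcp:30100
$ NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
$ curl http://${NODE_IP}:30100
</pre>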
- Test the application
- Attempt to open an HTTP session to the new service:
$ curl hello-svc.default.svc.cluster.local curl: (6) Could not resolve host: hello-svc.default.svc.cluster.local
The connection should fail because that service is not exposed outside of the cluster.
Return to your first Cloud Shell window, which is currently redirecting the STDIN and STDOUT of the dns-demo-1 Pod.
- Test the HTTP connection between the Pods:
$ curl hello-svc.default.svc.cluster.local Hello, world! Version: 1.0.0 Hostname: hello-v1-5574c4bff6-72wzc
- Deploy a new set of Pods and a LoadBalancer service
We will now deploy a new set of Pods running a different version of the application so that we can easily differentiate the two services. We will then expose the new Pods as a LoadBalancer
Service and access the service from outside the cluster.
- Create a new deployment that runs version 2 of the sample "hello" application on port 8080:
$ cat << EOF > hello-v2.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: hello-v2 spec: replicas: 3 selector: matchLabels: run: hello-v2 template: metadata: labels: run: hello-v2 name: hello-v2 spec: containers: - image: gcr.io/google-samples/hello-app:2.0 name: hello-v2 ports: - containerPort: 8080 protocol: TCP EOF $ kubectl create -f hello-v2.yaml $ kubectl get deployments NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE hello-v1 3 3 3 3 8m22s hello-v2 3 3 3 3 6s
- Define service types in the manifest
- Deploy a
LoadBalancer
Service:
apiVersion: v1 kind: Service metadata: name: hello-lb-svc spec: type: LoadBalancer selector: name: hello-v2 ports: - protocol: TCP port: 80 targetPort: 8080
This manifest defines a LoadBalancer
Service, which deploys a GCP Network Load Balancer to provide external access to the service. This service is only applied to the Pods with the name: hello-v2
selector.
$ kubectl apply -f ./hello-lb-svc.yaml $ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dns-demo ClusterIP None <none> 1234/TCP 18m hello-lb-svc LoadBalancer 10.12.3.30 35.193.235.140 80:30980/TCP 95s hello-svc NodePort 10.12.1.159 <none> 80:30100/TCP 10m kubernetes ClusterIP 10.12.0.1 <none> 443/TCP 21m $ export LB_EXTERNAL_IP=35.193.235.140
Notice that the new LoadBalancer
Service has an external IP. This is implemented using a GCP load balancer and will take a few minutes to create. This external IP address makes the service accessible from outside the cluster. Take note of this External IP address for use below.
- Test your application
- Attempt to open an HTTP session to the new service:
$ curl hello-lb-svc.default.svc.cluster.local curl: (6) Could not resolve host: hello-lb-svc.default.svc.cluster.local
The connection should fail because that service name is not exposed outside of the cluster. This occurs because the external IP address is not registered with this hostname.
- Try the connection again using the External IP address associated with the service:
$ curl ${LB_EXTERNAL_IP} Hello, world! Version: 2.0.0 Hostname: hello-v2-7db7758bf4-998gf
This time the connection does not fail because the LoadBalancer's external IP address can be reached from outside GCP.
Return to your first Cloud Shell window, which is currently redirecting the STDIN and STDOUT of the dns-demo-1
Pod.
- Use the following command to test the HTTP connection between the Pods.
root@dns-demo-1:/# curl hello-lb-svc.default.svc.cluster.local Hello, world! Version: 2.0.0 Hostname: hello-v2-7db7758bf4-qkb42
The internal DNS name works within the Pod, and you can see that you are accessing the same v2 version of the application as you were from outside of the cluster using the external IP address.
Try the connection again within the Pod using the External IP address associated with the service (replace the IP with the external IP of the service created above):
root@dns-demo-1:/# curl 35.193.235.140 Hello, world! Version: 2.0.0 Hostname: hello-v2-7db7758bf4-crxzf
The external IP also works from inside Pods running in the cluster and returns a result from the same v2 version of the applications.
Deploy an Ingress resource
We have two services in our cluster for the "hello" application. One service is hosting version 1.0 via a NodePort service, while the other service is hosting version 2.0 via a LoadBalancer service. We will now deploy an Ingress resource that will direct traffic to both services based on the URL entered by the user.
- Create an Ingress resource
Ingress is a Kubernetes resource that encapsulates a collection of rules and configuration for routing external HTTP(S) traffic to internal services.
On GKE, Ingress is implemented using Cloud Load Balancing. When you create an Ingress resource in your cluster, GKE creates an HTTP(S) load balancer and configures it to route traffic to your application.
- Define and deploy an Ingress resource that directs traffic to our web services based on the path entered:
<pre>
$ cat << EOF > hello-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /v1
        backend:
          serviceName: hello-svc
          servicePort: 80
      - path: /v2
        backend:
          serviceName: hello-lb-svc
          servicePort: 80
EOF
$ kubectl apply -f hello-ingress.yaml
</pre>
When we deploy this manifest, Kubernetes creates an Ingress resource on your cluster. The Ingress controller running in your cluster is responsible for creating an HTTP(S) load balancer to route all external HTTP traffic (on port 80) to the hello-svc NodePort Service and the hello-lb-svc LoadBalancer Service that we exposed.
- Test your application
- Get the external IP address of the load balancer serving our application:
$ kubectl describe ingress hello-ingress Name: hello-ingress Namespace: default Address: 35.244.213.159 Default backend: default-http-backend:80 (10.8.1.6:8080) Rules: Host Path Backends ---- ---- -------- * /v1 hello-svc:80 (<none>) /v2 hello-lb-svc:80 (<none>) Annotations: [...] ingress.kubernetes.io/backends: {"k8s-be-30013--59854b80169ba7aa":"HEALTHY","k8s-be-30100--59854b80169ba7aa":"HEALTHY","k8s-be-30980--59854b80169ba7aa":"HEALTHY"} [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ADD 6m34s loadbalancer-controller default/hello-ingress Normal CREATE 5m16s loadbalancer-controller ip: 35.244.213.159
You may have to wait for a few minutes for the load balancer to become active, and for the health checks to succeed, before the external address will be displayed. Repeat the command every few minutes to check if the Ingress resource has finished initializing.
Use the External IP address associated with the Ingress resource, and type the following command, substituting [external_IP] with the Ingress resource's external IP address. Be sure to include the /v1
in the URL path:
$ curl 35.244.213.159/v1 Hello, world! Version: 1.0.0 Hostname: hello-v1-5574c4bff6-mbn5
The v1 URL is configured in hello-ingress.yaml
to point to the hello-svc
NodePort service that directs traffic to the v1 application Pods.
Note: GKE might take a few minutes to set up forwarding rules until the Global load balancer used for the Ingress resource is ready to serve your application. In the meantime, you might get errors such as HTTP 404 or HTTP 500 until the load balancer configuration is propagated across the globe.
- Now, test the v2 URL path from Cloud Shell. Use the External IP address associated with the Ingress resource, and type the following command, substituting
[external_IP]
with the Ingress resource's external IP address. Be sure to include the/v2
in the URL path.
$ curl [external_IP]/v2 Hello, world! Version: 2.0.0 Hostname: hello-v2-7db7758bf4-998gf
- Inspect the changes to your networking resources in the GCP Console
There are two load balancers listed:
- One was created for the external IP of the
hello-lb-svc
service. This typically has a UID style name and is configured to load balance TCP port 80 traffic to the cluster nodes. - The second was created for the Ingress object and is a full HTTP(S) load balancer that includes host and path rules that match the Ingress configuration. This will have hello-ingress in its name.
Click the load balancer with hello-ingress in the name. This will display the summary information about the protocols, ports, paths and backend services of the Ingress load balancer.
The v2 URL is configured in hello-ingress.yaml
to point to the hello-lb-svc LoadBalancer service that directs traffic to the v2 application Pods.
Load balancing objects in GKE
Kubernetes object | How implemented in GKE | Typical usage scenario |
---|---|---|
Service of type ClusterIP | GKE networking | Cluster-internal applications and microservices |
Service of type LoadBalancer | GCP Network Load Balancer (regional) | Application front ends |
Ingress object, backed by a Service of type NodePort | GCP HTTP(S) Load Balancer (global) | Application front ends; gives access to advanced features like Cloud Armor, Identity-Aware Proxy (beta) |
Persistent Data and Storage
- Volume types:
- emptyDir: Ephemeral. Shares the Pod's lifecycle (a short example follows this list).
- ConfigMap: Object can be referenced in a volume.
- Secret: Stores sensitive info, such as passwords.
- downwardAPI: Makes data about Pods available to containers.
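Of the types listed above, emptyDir is the simplest. The following is a minimal sketch of a Pod that mounts an emptyDir volume as scratch space (the names are illustrative):
<pre>
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  containers:
  - name: demo-container
    image: nginx
    volumeMounts:
    - mountPath: /cache
      name: scratch-volume
  volumes:
  - name: scratch-volume
    emptyDir: {}
</pre>
The volume is created empty when the Pod is scheduled to a node and deleted when the Pod is removed from that node.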
- Creating a Pod with an NFS Volume
<pre>
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - mountPath: /mnt/vol
      name: nfs
  volumes:
  - name: nfs
    nfs:
      server: 10.1.2.3
      path: "/"
      readOnly: false
</pre>
- Creating and using a compute engine persistent disk
NOTE: This is the old way of mounting persistent volumes. It is no longer a best practice to do the following. Showing here for completeness.
$ gcloud compute disks create \ --size=100GB \ --zone=us-west2-a demo-disk
[...] spec: containers: - name: demo-container image: gcr.io/hello-app:1.0 volumeMounts: - mountPath: /demo-pod name: pd-volume volumes: - name: pd-volume gcePersistentDisk: pdName: demo-disk # <- must match gcloud fsType: ext4
A better way is to abstract the persistent volume (PV) from the Pod by separating the PV from a Persistent Volume Claim (PVC).
<pre>
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pd-volume
spec:
  storageClassName: "standard"
  capacity:
    storage: 100G
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: demo-disk
    fsType: ext4
</pre>
Note: PVC StorageClassName
must match the PV StorageClassName
.
kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: standard provisioner: kubernetes.io/gce-pd parameters: type: pd-standard replication-type: none
In GKE, a PVC with no storage class defined will use the above (default) storage class.
- Example using SSD:
kind: PersistentVolume [...] spec: storageClassName: "ssd" --- kind: StorageClass [...] metadata: name: ssd parameters: type: pd-ssd
- Volume Access Modes
Access Modes determine how the Volume will read or write. The types of access modes that are available depend on the volume type.
-
ReadWriteOnce
: mounts the volume as read/write to a single node; -
ReadOnlyMany
: mounts a volume as read-only to many nodes; and -
ReadWriteMany
: mounts volumes as read/write to many nodes.
For most applications, persistent disks are mounted as ReadWriteOnce
.
Note: GCP persistent disks do not support ReadWriteMany
. However, NFS does.
- Example Persistent Volume Claim (PVC):
<pre>
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pd-volume-claim
spec:
  storageClassName: "standard"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100G
</pre>
- Use the above PVC in a Pod (i.e., mount it):
<pre>
kind: Pod
apiVersion: v1
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo-container
    image: gcr.io/hello-app:1.0
    volumeMounts:
    - mountPath: /demo-pod
      name: pd-volume
  volumes:
  - name: pd-volume
    persistentVolumeClaim:
      claimName: pd-volume-claim
</pre>
The above method abstracts the underlying storage from the Pod: the Pod references the PVC by name, and the claim is bound to a matching PV behind the scenes.
- An alternative option is "Dynamic Provisioning".
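With dynamic provisioning, you do not create the PV at all: the PVC references a StorageClass and GKE provisions a matching persistent disk when the claim is first used. A minimal sketch using the ssd StorageClass defined above (the claim name is illustrative):
<pre>
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ssd-volume-claim
spec:
  storageClassName: "ssd"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100G
</pre>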
- Retain the volume:
[...] spec: persistentVolumeReclaimPolicy: Retain
- Regional persistent disks
Increases availability by replicating data between zones:
<pre>
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: regional-pd
  zones: us-west1-a, us-west1-b
</pre>
In the above example, if there is an outage in one of the zones, GKE automatically fails over to the other (still available) zone.
You can also use persistent volumes for other controllers, such as deployments and stateful sets. Remember, a deployment is simply a Pod template that runs and maintains a set of identical pods, commonly known as replicas. You can use these deployments for stateless applications. Deployment replicas can share an existing persistent volume using ReadOnlyMany
or ReadWriteMany
access mode. ReadWriteMany
access mode can only be used for storage types that support it, such as NFS systems.
The ReadWriteOnce
access mode is not recommended for Deployments because the replicas need to attach and reattach to persistent volumes dynamically. If a first pod needs to detach itself, the second pod needs to be attached first. However, the second pod cannot attach because the first pod is already attached. This creates a deadlock. So neither pod can make progress. Stateful sets resolve this deadlock. Whenever your application needs to maintain state in persistent volumes, managing it with a stateful set rather than a deployment is the way to go.
Configuring Persistent Storage for Kubernetes Engine
Create PVs and PVCs
In this section, we will create a PVC, which triggers Kubernetes to automatically create a PV.
- Create and apply a manifest with a PVC
Most of the time, you do not need to directly configure PV objects or create Compute Engine persistent disks. Instead, you can create a PVC, and Kubernetes automatically provisions a persistent disk for you.
- Check that there are currently no PVCs defined in our cluster:
<pre>
$ kubectl get persistentvolumeclaim
No resources found.
</pre>
- Create a manifest that creates a 30 gigabyte PVC called
hello-web-disk
, which can be mounted as read-write volume on a single node at a time:
$ cat << EOF > pvc-demo.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: hello-web-disk spec: accessModes: - ReadWriteOnce resources: requests: storage: 30Gi EOF $ kubectl apply -f pvc-demo.yaml $ kubectl get persistentvolumeclaim NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE hello-web-disk Bound pvc-b26e69ea-c38a-11e9-8f0d-42010a8001e8 30Gi RWO standard 4s
Mount and verify GCP persistent disk PVCs in Pods
In this section, we will attach our persistent disk PVC to a Pod. You mount the PVC as a volume as part of the manifest for the Pod.
- Mount the PVC to a Pod
The following manifest deploys an Nginx container, attaches the pvc-demo-volume
to the Pod, and mounts that volume to the path /var/www/html
inside the Nginx container. Files saved to this directory inside the container will be saved to the persistent volume and persist even if the Pod and the container are shutdown and recreated:
<pre>
$ cat << EOF > pod-volume-demo.yaml
kind: Pod
apiVersion: v1
metadata:
  name: pvc-demo-pod
spec:
  containers:
  - name: frontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: pvc-demo-volume
  volumes:
  - name: pvc-demo-volume
    persistentVolumeClaim:
      claimName: hello-web-disk
EOF
$ kubectl apply -f pod-volume-demo.yaml
$ kubectl get pods
NAME           READY   STATUS              RESTARTS   AGE
pvc-demo-pod   0/1     ContainerCreating   0          13s
</pre>
If you list the Pods quickly after creating the Pod, you will see the status "ContainerCreating" while the volume is being mounted, before the status changes to "Running".
- Verify the PVC is accessible within the Pod:
$ kubectl exec -it pvc-demo-pod -- sh
- Create a simple text message as a web page in the Pod:
# echo "Test webpage in a persistent volume!" > /var/www/html/index.html # chmod +x /var/www/html/index.html
- Test the persistence of the PV
Let's delete the Pod from the cluster, confirm that the PV still exists, then redeploy the Pod and verify the contents of the PV remain intact.
- Delete the
pvc-demo-pod
:
<pre>
$ kubectl delete pod pvc-demo-pod
</pre>
- List the Pods in the cluster:
<pre>
$ kubectl get pods
No resources found.
</pre>
$ kubectl get persistentvolumeclaim NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE hello-web-disk Bound pvc-b26e69ea-c38a-11e9-8f0d-42010a8001e8 30Gi RWO standard 3m55s
Our PVC still exists, and was not deleted when the Pod was deleted.
- Redeploy the
pvc-demo-pod
:
$ kubectl apply -f pod-volume-demo.yaml $ kubectl get pods NAME READY STATUS RESTARTS AGE pvc-demo-pod 1/1 Running 0 3m48s
The Pod will deploy and the status will change to "Running" faster this time because the PV already exists and does not need to be created.
- Verify the PVC is still accessible within the Pod:
$ kubectl exec -it pvc-demo-pod -- sh # cat /var/www/html/index.html Test webpage in a persistent volume!
The contents of the persistent volume were not removed, even though the Pod was deleted from the cluster and recreated.
Create StatefulSets with PVCs
In this section, we use our PVC in a StatefulSet. A StatefulSet is like a Deployment, except that the Pods are given unique identifiers.
- Release the PVC
- Before we can use the PVC with the StatefulSet, we must delete the Pod that is currently using it:
$ kubectl delete pod pvc-demo-pod
- Create a StatefulSet
- Create a StatefulSet that includes a LoadBalancer service and three replicas of a Pod containing an Nginx container and a volumeClaimTemplate for 30 gigabyte PVCs with the name
hello-web-disk
. The Nginx containers mount the PVC calledhello-web-disk
at/var/www/html
as in the previous task:
$ cat << EOF > statefulset-demo.yaml kind: Service apiVersion: v1 metadata: name: statefulset-demo-service spec: ports: - protocol: TCP port: 80 targetPort: 9376 type: LoadBalancer --- kind: StatefulSet apiVersion: apps/v1 metadata: name: statefulset-demo spec: selector: matchLabels: app: MyApp serviceName: statefulset-demo-service replicas: 3 updateStrategy: type: RollingUpdate template: metadata: labels: app: MyApp spec: containers: - name: stateful-set-container image: nginx ports: - containerPort: 80 name: http volumeMounts: - name: hello-web-disk mountPath: "/var/www/html" volumeClaimTemplates: - metadata: name: hello-web-disk spec: accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 30Gi EOF $ kubectl apply -f statefulset-demo.yaml
You now have a StatefulSet running behind a service named statefulset-demo-service
.
- Verify the connection of Pods in StatefulSets
- View the details of the StatefulSet:
$ kubectl describe statefulset statefulset-demo
Note the event status at the end of the output. The Service and StatefulSet were created successfully.
$ kubectl get pods NAME READY STATUS RESTARTS AGE statefulset-demo-0 1/1 Running 0 110s statefulset-demo-1 1/1 Running 0 86s statefulset-demo-2 1/1 Running 0 65s
- List the PVCs associated with the above StatefulSet:
$ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE hello-web-disk Bound pvc-b26e69ea-c38a-11e9-8f0d-42010a8001e8 30Gi RWO standard 10m hello-web-disk-statefulset-demo-0 Bound pvc-d41e3ebd-c38b-11e9-8f0d-42010a8001e8 30Gi RWO standard 2m13s hello-web-disk-statefulset-demo-1 Bound pvc-e1fa6ed4-c38b-11e9-8f0d-42010a8001e8 30Gi RWO standard 109s hello-web-disk-statefulset-demo-2 Bound pvc-ee789c40-c38b-11e9-8f0d-42010a8001e8 30Gi RWO standard 88s
The original hello-web-disk PVC is still there, and you can now see the individual PVCs that were created for each Pod in the new StatefulSet.
- View the details of the first PVC in the StatefulSet:
$ kubectl describe pvc hello-web-disk-statefulset-demo-0
- Verify the persistence of Persistent Volume connections to Pods managed by StatefulSets
In this section, we will verify the connection of Pods in StatefulSets to particular PVs as the Pods are stopped and restarted.
- Verify that the PVC is accessible within the Pod:
$ kubectl exec -it statefulset-demo-0 -- sh
- Verify that there is no index.html file in the /var/www/html directory:
# cat /var/www/html/index.html cat: /var/www/html/index.html: No such file or directory
- Create a simple text message as a web page in the Pod:
$ echo "Test webpage in a persistent volume!" > /var/www/html/index.html $ chmod +x /var/www/html/index.html
- Delete the Pod where you updated the file on the PVC:
kubectl delete pod statefulset-demo-0
- List the Pods in the cluster:
$ kubectl get pods NAME READY STATUS RESTARTS AGE statefulset-demo-0 0/1 ContainerCreating 0 11s statefulset-demo-1 1/1 Running 0 6m1s statefulset-demo-2 1/1 Running 0 5m40s
You will see that the StatefulSet is automatically restarting the statefulset-demo-0
Pod. Wait until the Pod status shows that it is running again.
- Connect to the shell on the new
statefulset-demo-0
Pod:
$ kubectl exec -it statefulset-demo-0 -- sh # cat /var/www/html/index.html Test webpage in a persistent volume!
The StatefulSet restarts the Pod and reconnects the existing dedicated PVC to the new Pod ensuring that the data for that Pod is preserved.
StatefulSets
Stateful sets are useful for stateful applications. Stateful sets run and maintain a set of pods just like deployments do. A stateful set object defines the desired state and its controller achieves it. However, unlike deployments, stateful sets maintain a persistent identity for each pod. Each pod in a stateful set maintains a persistent identity and has an ordinal index with the relevant pod name, a stable hostname and stably identified persistent storage that is linked to the ordinal index.
An ordinal index is simply a unique sequential number assigned to each Pod in the StatefulSet. This number defines the Pod's position in the set's sequence of Pods. Deployment, scaling, and updates are ordered using the ordinal index of the Pods within a StatefulSet. For example, if a StatefulSet named demo launches three replicas, it launches Pods named demo-0, demo-1, and demo-2, in that order. This means that all of a Pod's predecessors must be running and ready before an action is taken on a newer Pod. For example, if demo-0 is not running and ready, demo-1 will not be launched. If demo-0 fails after demo-1 is running and ready, but before the creation of demo-2, demo-2 will not be launched until demo-0 is relaunched and becomes running and ready. Scaling and rolling updates happen in reverse order, which means demo-2 would be changed first. This behavior depends on the Pod management policy being set to the default, OrderedReady. If you want to launch Pods in parallel, without waiting for each Pod to reach the running and ready state, change the Pod management policy to Parallel.
As the name suggests, StatefulSets are useful for stateful applications. With stable storage, StatefulSets use a unique Persistent Volume Claim for each Pod, so that each Pod can maintain its own individual state and has reliable long-term storage to which no other Pod writes. These Persistent Volume Claims use the ReadWriteOnce access mode.
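As a sketch, the Pod management policy mentioned above sits directly under the StatefulSet spec (fragment only; the rest of the manifest is unchanged):
<pre>
kind: StatefulSet
apiVersion: apps/v1
[...]
spec:
  podManagementPolicy: Parallel   # the default is OrderedReady
  [...]
</pre>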
- StatefulSet Example (with associated Service):
kind: Service apiVersion: v1 metadata: name: demo-service labels: app: demo spec: ports: - port: 80 name: web clusterIP: None selector: app: demo --- kind: StatefulSet apiVersion: apps/v1 metadata: name: demo-statefulset spec: selector: matchLabels: app: demo serviceName: demo-service replicas: 3 updateStrategy: type: RollingUpdate template: metadata: labels: app: demo [...] spec: containers: - name: demo-container image: k8s.gcr.io/demo:0.1 ports: - containerPort: 80 name: web volumeMounts: - name: www mountPath: /usr/share/web volumeClaimTemplates: - metadata: name: demo-pvc spec: accessModes: ["ReadWriteOnce"] resources: requests: storage: 1Gi
In the above example, we are defining a "headless service" by specifying "None" for the clusterIP
.
ConfigMaps and Secrets
ConfigMaps
$ mkdir -p demo/ $ wget https://example.com/color.properties -O demo/color.properties $ wget https://example.com/ui.properties -O demo/ui.properties $ kubectl create configmap demo --from-file=demo/
kind: ConfigMap apiVersion: v1 metadata: name: demo data: color.properties: |- color.good=green color.bad=red ui.properties: |- resolution=high
- Using a ConfigMap in Pod commands:
kind: Pod apiVersion: v1 metadata: name: demo-pod spec: containers: - name: demo-container image: k8s.gcr.io/busybox command: ["/bin/sh", "-c", "echo $(VARIABLE_DEMO)"] env: - name: VARIABLE_DEMO valueFrom: configMapKeyRef: name: demo key: my.key
- Using a ConfigMap by creating a Volume:
kind: Pod [...] spec: containers: - name: demo-container image: k8s.gcr.io/busybox volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: demo
Secrets
- Types of Secrets
- Generic: used when creating Secrets from files, directories, or literal values.
- TLS: uses an existing public-private encryption key pair. To create one of these, you must give k8s the public key certificate encoded in PEM format, and you must also supply the private key of that certificate.
- Docker registry: used to pass credentials for an image registry to kubelet so it can pull a private image from the Docker registry on behalf of your Pod.
In GKE, the Google Container Registry (GCR) integrates with Cloud Identity and Access Management, so you may not need to use the "Docker registry" Secret type.
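For reference, hedged sketches of how the other two Secret types might be created (the file paths, registry address, and credentials below are placeholders):
<pre>
# TLS Secret from an existing PEM-encoded certificate and its private key:
$ kubectl create secret tls demo-tls \
    --cert=./tls.crt \
    --key=./tls.key

# Docker registry Secret used by kubelet to pull private images:
$ kubectl create secret docker-registry demo-regcred \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=[USERNAME] \
    --docker-password=[PASSWORD] \
    --docker-email=[EMAIL]
</pre>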
- Creating a generic Secret
- Create a Secret using literal values:
$ kubectl create secret generic demo \ --from-literal user=admin \ --from-literal password=1234
- Create a Secret using files:
$ kubectl create secret generic demo \ --from-file=./username.txt \ --from-file=./password.txt
- Create a Secret using naming keys:
$ kubectl create secret generic demo \ --from-file=User=./username.txt \ --from-file=Password=./password.txt
- Using a Secret
- Secret environment variable:
[...] kind: Pod spec: containers: - name: mycontainer image: redis env: - name: SECRET_USERNAME valueFrom: secretKeyRef: name: demo-secret key: username - name: SECRET_PASSWORD valueFrom: secretKeyRef: name: demo-secret key: password
- Secret volume:
[...] kind: Pod spec: containers: - name: mycontainer image: redis volumeMounts: - name: storagesecrets mountPath: "/etc/secrets" readOnly: true volumes: - name: storagesecrets secret: secretName: demo-secret
Working with Kubernetes Engine Secrets and ConfigMaps
- Set up Cloud Pub/Sub and deploy an application to read from the topic
- Set the environment variables for the pub/sub components.
$ export my_pubsub_topic=echo $ export my_pubsub_subscription=echo-read
- Create a Cloud Pub/Sub topic named "echo" and a subscription named "echo-read" that is associated with that topic:
$ gcloud pubsub topics create $my_pubsub_topic $ gcloud pubsub subscriptions create $my_pubsub_subscription \ --topic=$my_pubsub_topic
- Deploy an application to read from Cloud Pub/Sub topics
First, create a deployment with a container that can read from Cloud Pub/Sub topics. Since specific permissions are required to subscribe to and read from Cloud Pub/Sub topics, this container needs to be provided with credentials in order to successfully connect to Cloud Pub/Sub.
- Create a Deployment for use with our Cloud Pub/Sub topic:
$ cat << EOF > pubsub.yaml apiVersion: apps/v1 kind: Deployment metadata: name: pubsub spec: selector: matchLabels: app: pubsub template: metadata: labels: app: pubsub spec: containers: - name: subscriber image: gcr.io/google-samples/pubsub-sample:v1 EOF $ kubectl apply -f pubsub.yaml $ kubectl get pods -l app=pubsub NAME READY STATUS RESTARTS AGE pubsub-65dbdb56f5-5xjp4 0/1 Error 2 36s
Notice the status of the Pod. It has an error and has restarted several times.
- Inspect the logs for the Pod:
$ kubectl logs -l app=pubsub StatusCode.PERMISSION_DENIED, User not authorized to perform this action.
The error message displayed at the end of the log indicates that the application does not have permissions to query the Cloud Pub/Sub service.
- Create service account credentials
To fix the above permission issue, create a new service account and grant it access to the pub/sub subscription that the test application is attempting to use. Instead of changing the service account of the GKE cluster nodes, generate a JSON key for the service account, and then securely pass the JSON key to the Pod via Kubernetes Secrets.
- In the GCP Console, on the Navigation menu, click IAM & admin > Service Accounts.
- Click + Create Service Account.
- In the Service Account Name text box, enter
pubsub-app
and then click Create. - In the Role drop-down list, select Pub/Sub > Pub/Sub Subscriber.
- Confirm the role is listed, and then click Continue.
- Click + Create Key.
- Select JSON as the key type, and then click Create.
A JSON key file containing the credentials of the service account will download to your computer. You can see the file in the download bar at the bottom of your screen. We will use this key file to configure the sample application to authenticate to Cloud Pub/Sub API.
- Click Close and then click Done.
On your hard drive, locate the JSON key that you just downloaded and rename the file to credentials.json.
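If you prefer the command line, the same service account, role binding, and key could be created from Cloud Shell. This is a hedged alternative to the console steps above, assuming the GOOGLE_CLOUD_PROJECT environment variable holds your project ID:
$ gcloud iam service-accounts create pubsub-app --display-name="pubsub-app"
$ gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
    --member="serviceAccount:pubsub-app@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com" \
    --role="roles/pubsub.subscriber"
$ gcloud iam service-accounts keys create $HOME/credentials.json \
    --iam-account="pubsub-app@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com"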
- Create a Kubernetes Secret named pubsub-key using the downloaded credentials (JSON file):
$ kubectl create secret generic pubsub-key \ --from-file=key.json=$HOME/credentials.json
This command creates a Secret named pubsub-key that has a key.json value containing the contents of the private key that you downloaded from the GCP Console.
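As a quick check (not required by the lab), you can confirm that the Secret exists and that the key data is stored base64-encoded:
$ kubectl describe secret pubsub-key
$ kubectl get secret pubsub-key -o yaml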
- Configure the application with the secret
Update the deployment to include the following changes:
- Add a volume to the Pod specification. This volume contains the secret.
- The secrets volume is mounted in the application container.
- The GOOGLE_APPLICATION_CREDENTIALS environment variable is set to point to the key file in the secret volume mount.
- The GOOGLE_APPLICATION_CREDENTIALS environment variable is automatically recognized by Cloud Client Libraries, in this case the Cloud Pub/Sub client for Python.
- Update the previous Deployment:
$ cat << EOF > pubsub-secret.yaml apiVersion: apps/v1 kind: Deployment metadata: name: pubsub spec: selector: matchLabels: app: pubsub template: metadata: labels: app: pubsub spec: volumes: - name: google-cloud-key secret: secretName: pubsub-key containers: - name: subscriber image: gcr.io/google-samples/pubsub-sample:v1 volumeMounts: - name: google-cloud-key mountPath: /var/secrets/google env: - name: GOOGLE_APPLICATION_CREDENTIALS value: /var/secrets/google/key.json EOF $ kubectl apply -f pubsub-secret.yaml $ kubectl get pods -l app=pubsub NAME READY STATUS RESTARTS AGE pubsub-687959fd65-kwhb5 1/1 Running 0 40s
- Test receiving Cloud Pub/Sub messages
Now that we have configured the application, we can publish a message to the Cloud Pub/Sub topic we created earlier:
$ gcloud pubsub topics publish $my_pubsub_topic --message="Hello, world!" messageIds: - '697037622972840'
Within a few seconds, the message should be picked up by the application and printed to the output stream.
- Inspect the logs from the deployed Pod:
$ kubectl logs -l app=pubsub Pulling messages from Pub/Sub subscription... [2019-08-20 21:46:18.395126] Received message: ID=697037622972840 Data=b'Hello, world!' [2019-08-20 21:46:18.395205] Processing: 697037622972840 [2019-08-20 21:46:21.398350] Processed: 697037622972840
Working with ConfigMaps
ConfigMaps bind configuration files, command-line arguments, environment variables, port numbers, and other configuration artifacts to your Pods' containers and system components at runtime. ConfigMaps enable you to separate your configurations from your Pods and components. However, ConfigMaps are not encrypted, making them inappropriate for credentials. This is the difference between Secrets and ConfigMaps: secrets are better suited for confidential or sensitive information, such as credentials. ConfigMaps are better suited for general configuration information, such as port numbers.
- Use the kubectl command to create ConfigMaps
You use kubectl to create ConfigMaps by following the pattern kubectl create configmap [NAME] [DATA] and adding a flag for a file (--from-file) or a literal value (--from-literal).
- Start with a simple literal in the following kubectl command:
kubectl create configmap sample --from-literal=message=hello
- See how Kubernetes ingested the ConfigMap:
$ kubectl describe configmaps sample Name: sample Namespace: default Labels: <none> Annotations: <none> Data ==== message: ---- hello Events: <none>
- Create a ConfigMap from a file:
$ cat << EOF >sample2.properties message2=world foo=bar meaningOfLife=42 EOF $ kubectl create configmap sample2 --from-file=sample2.properties $ kubectl describe configmaps sample2 Name: sample2 Namespace: default Labels: <none> Annotations: <none> Data ==== sample2.properties: ---- message2=world foo=bar meaningOfLife=42 Events: <none>
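Because sample2.properties happens to use simple key=value lines, you could also (as an optional aside, not part of the lab) load each line as a separate ConfigMap entry with the --from-env-file flag. With --from-file the whole file becomes a single value keyed by the file name; with --from-env-file each line becomes its own key:
$ kubectl create configmap sample2-env --from-env-file=sample2.properties
$ kubectl describe configmaps sample2-env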
- Use manifest files to create ConfigMaps
You can also use a YAML configuration file to create a ConfigMap.
- Create a ConfigMap definition called sample3 (we will use this ConfigMap later to demonstrate two different ways to expose the data inside a container):
$ cat << EOF > config-map-3.yaml apiVersion: v1 data: airspeed: africanOrEuropean meme: testAllTheThings kind: ConfigMap metadata: name: sample3 namespace: default selfLink: /api/v1/namespaces/default/configmaps/sample3 EOF $ kubectl apply -f config-map-3.yaml $ kubectl describe configmaps sample3 Name: sample3 Namespace: default Labels: <none> Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","data":{"airspeed":"africanOrEuropean","meme":"testAllTheThings"},"kind":"ConfigMap","metadata":{"annotations":{},"name... Data ==== airspeed: ---- africanOrEuropean meme: ---- testAllTheThings Events: <none>
Now we have some non-secret, unencrypted, configuration information properly separated from our application and available to our cluster. We have done this using ConfigMaps in three different ways to demonstrate the various options, however, in practice, you typically pick one method, most likely the YAML configuration file approach. Configuration files provide a record of the values that you have stored so that you can easily repeat the process in the future.
Next, let's access this information from within our application.
- Use environment variables to consume ConfigMaps in containers
In order to access ConfigMaps from inside Containers using environment variables, the Pod definition must be updated to include one or more configMapKeyRefs.
Below is an updated version of the Cloud Pub/Sub demo Deployment that includes an additional env: setting at the end of the file to import environment variables from the ConfigMap into the container:
- name: INSIGHTS valueFrom: configMapKeyRef: name: sample3 key: meme
- Reapply the updated configuration file:
kubectl apply -f pubsub-configmap.yaml
Now our application has access to an environment variable called INSIGHTS, which has a value of testAllTheThings.
- Verify that the environment variable has the correct value:
$ kubectl get pods NAME READY STATUS RESTARTS AGE pubsub-6549d6dffc-w7lbd 1/1 Running 0 35s $ kubectl exec -it pubsub-6549d6dffc-w7lbd -- sh # printenv | grep ^INSIGHTS INSIGHTS=testAllTheThings
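If you want every key in a ConfigMap to become an environment variable without listing each one in a configMapKeyRef, the Kubernetes API also supports envFrom. The snippet below is a sketch of how the container spec could look; it is not a change made in this lab:
spec:
  containers:
  - name: subscriber
    image: gcr.io/google-samples/pubsub-sample:v1
    envFrom:
    - configMapRef:
        name: sample3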
- Use mounted volumes to consume ConfigMaps in containers
You can populate a volume with the ConfigMap data instead of (or in addition to) storing it in an environment variable.
In this Deployment, the ConfigMap named sample3 that we created earlier is also added as a volume called config-3 in the Pod spec. The config-3 volume is then mounted inside the container at the path /etc/config. The original method of using environment variables to import ConfigMaps is also configured.
- Update the Deployment:
$ cat << EOF > pubsub-configmap2.yaml apiVersion: apps/v1 kind: Deployment metadata: name: pubsub spec: selector: matchLabels: app: pubsub template: metadata: labels: app: pubsub spec: volumes: - name: google-cloud-key secret: secretName: pubsub-key - name: config-3 configMap: name: sample3 containers: - name: subscriber image: gcr.io/google-samples/pubsub-sample:v1 volumeMounts: - name: google-cloud-key mountPath: /var/secrets/google - name: config-3 mountPath: /etc/config env: - name: GOOGLE_APPLICATION_CREDENTIALS value: /var/secrets/google/key.json - name: INSIGHTS valueFrom: configMapKeyRef: name: sample3 key: meme EOF $ kubectl apply -f pubsub-configmap2.yaml
- Reconnect to the container's shell session to see if the value in the ConfigMap is accessible (note: the Pod names will have changed):
$ kubectl get pods NAME READY STATUS RESTARTS AGE pubsub-5fcc8df7b6-p5d9x 1/1 Running 0 5s pubsub-6549d6dffc-w7lbd 1/1 Terminating 0 3m $ kubectl exec -it pubsub-5fcc8df7b6-p5d9x -- sh # cd /etc/config # ls airspeed meme # cat airspeed africanOrEuropean
Access Control and Security in Kubernetes and Google Kubernetes Engine (GKE)
There are two main ways to authorize access in GKE (and you typically need both):
- Cloud IAM: Project and cluster level access
- RBAC: Cluster and namespace level access
The API server authenticates in different ways:
- OpenID connect tokens [recommended]
- x509 client certificates [suggest disabling]
- Static passwords [suggest disabling]
In GKE, x509 and static passwords are disabled by default (in k8s v1.12+). OpenID Connect is enabled by default.
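On older GKE versions you could enforce this yourself at cluster-creation time. A hedged example (the cluster name is a placeholder and flag availability depends on your gcloud release):
$ gcloud container clusters create my-secure-cluster \
    --zone us-central1-a \
    --no-enable-basic-auth \
    --no-issue-client-certificate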
- Cloud IAM
Three elements are defined in Cloud IAM access control:
- Who? - Identity of the person making the request
- What? - Set of permissions that are granted
- Which? - Which resources this policy applies to
- GKE predefined Cloud IAM roles
Provides granular access to Kubernetes resources.
- GKE Viewer: Read-only permissions to cluster and k8s resources
- GKE Developer: Full access to Kubernetes API objects within clusters
- GKE Admin: Full access to clusters and their k8s resources
- GKE Cluster Admin: Create/delete/update/view clusters. No access to k8s resources.
- GKE Host Service Agent User: Only for service accounts; manages network resources in a shared VPC.
RBAC
Three k8s RBAC concepts:
- Subjects (Who?)
- Resources (Which?)
- Verbs (What?)
Roles connect Resources to Verbs. Role Bindings connect roles to subjects.
- A role contains rules that represent a set of permissions. For example:
kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: default name: demo-role rules: - apiGroups: [""] resources: ["pods"] verbs: ["get", "list", "watch"]
Note: Only one namespace per role is allowed.
- A Cluster Role grants permissions at the cluster level. For example:
kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: demo-clusterrole rules: - apiGroups: ["storage.k8s.io"] resources: ["storageclasses"] verbs: ["get", "list", "watch"]
Note: No need to define namespace in Cluster Role, since it applies at the cluster-level.
- More examples:
rules: - apiGroups: [""] resources: ["pods"] verbs: ["get", "list", "watch"] --- rules: - apiGroups: [""] resources: ["pods", "pods/log"] verbs: ["get", "list", "watch"] --- rules: - apiGroups: [""] resources: ["pods"] resourceNames: ["demo-pod"] verbs: ["patch", "update"] --- rules: - nonResourceURLs: ["/metrics", "/metrics/*"] verbs: ["get", "post"]
Note: The last example ("nonResourceURLs") is a rule type unique to ClusterRoles.
- Attach Roles to Role Bindings
kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: default name: demo-rolebinding subjects: - kind: User name: "bob@example.com" apiGroup: rbac.authorization.k8s.io roleRef: kind: Role name: demo-role apiGroup: rbac.authorization.k8s.io
- Example Cluster Role Binding:
kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: demo-clusterrolebinding subjects: - kind: User name: "admin@example.com" apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: demo-clusterrole apiGroup: rbac.authorization.k8s.io
- Example of how to refer to different subject types:
subjects: - kind: User name: "bob@example.com" apiGroup: rbac.authorization.k8s.io --- subjects: - kind: Group name: "Developers" apiGroup: rbac.authorization.k8s.io --- subjects: - kind: ServiceAccount name: default namespace: kube-system --- subjects: - kind: Group name: system:serviceaccounts apiGroup: rbac.authorization.k8s.io --- subjects: - kind: Group name: system:serviceaccounts:qa apiGroup: rbac.authorization.k8s.io # <- all service accounts in the "qa" namespace --- subjects: - kind: Group name: system:authenticated apiGroup: rbac.authorization.k8s.io
Note: Not all resources are namespaced:
$ kubectl api-resources --namespaced=true --output=name | head bindings configmaps endpoints events $ kubectl api-resources --namespaced=false --output=name | head -4 componentstatuses namespaces nodes persistentvolumes
Kubernetes Control Plane Security
- Initiate credential rotation:
$ gcloud container clusters update <name> \ --start-credential-rotation
- Complete credential rotation:
$ gcloud container clusters update <name> \ --complete-credential-rotation
- Initiate IP rotation:
$ gcloud container clusters update <name> \ --start-ip-rotation
- Complete IP rotation:
$ gcloud container clusters update <name> \ --complete-ip-rotation
- Protect your metadata
- Restrict the compute.instances.get permission for nodes.
- Disable the legacy Compute Engine API endpoints. (Note: the v0.1 and v1beta1 Compute Engine metadata endpoints allow broad querying of instance metadata.)
- The v1 API restricts the retrieval of metadata. Starting with GKE version 1.12, the legacy Compute Engine metadata endpoints are disabled by default; on earlier versions, they can only be disabled by creating a new cluster or adding a new node pool to an existing cluster.
- Enable metadata concealment (temporary).
- This is basically a firewall that prevents Pods from accessing a node's metadata. It does this by restricting access to kube-env (which contains kubelet credentials) and the node's instance identity token. Note that this is a temporary solution that will be deprecated as better security improvements are developed in the future.
SEE: "Protecting Cluster Metadata"
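As an illustration only (the exact flag and its availability depend on your gcloud release), metadata concealment could be enabled per node pool with the beta --workload-metadata-from-node flag; the pool name below is a placeholder:
$ gcloud beta container node-pools create secure-pool \
    --cluster=$my_cluster --zone=$my_zone \
    --workload-metadata-from-node=SECURE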
Pod Security
Use security context to limit privileges to containers.
kind: Pod apiVersion: v1 metadata: name: security-context-demo spec: securityContext: runAsUser: 1000 fsGroup: 2000 ...
Use a Pod security policy to apply security contexts:
- A policy is a set of restrictions, requirements, and defaults.
- For a Pod to be admitted to the cluster, all conditions must be fulfilled for a Pod to be created or updated. (note: rules are only applied when the Pod is being created or updated.)
- PodSecurityPolicy controller is an admission controller.
- The controller validates and modifies requests against one or more PodSecurityPolicies.
There is also an extra step called "admission control". A validating (non-mutating) admission controller only validates requests. A mutating admission controller can modify requests if necessary and can also validate them. A request can be passed through multiple controllers, and if the request fails at any point, the entire request is rejected immediately and the end user receives an error. The PodSecurityPolicy admission controller acts on the creation and modification of Pods and determines whether a Pod should be admitted based on the requested security context and the available Pod security policies. Note that these policies are enforced during the creation or update of a Pod, but a security context is enforced by the container runtime.
- Pod security policy example:
kind: PodSecurityPolicy apiVersion: policy/v1beta1 metadata: name: demo-psp spec: privileged: false allowPrivilegeEscalation: false volumes: - 'configMap' - 'emptyDir' - 'projected' - 'secret' - 'persistentVolumeClaim' hostNetwork: false hostIPC: false runAsUser: rule: 'MustRunAsNonRoot' seLinux: rule: 'RunAsAny' readOnlyRootFilesystem: false
- Authorize (the above) Pod security policy:
kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: psp-clusterrole rules: - apiGroups: - policy resources: - podsecuritypolicies resourceNames: - demo-psp verbs: - use
- Now, define a Role Binding:
kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: psp-rolebinding namespace: demo roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: psp-clusterrole subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts - kind: ServiceAccount name: service@example.com namespace: demo
A Pod Security Policy controller must be enabled on a GKE cluster:
$ gcloud container clusters update <name> \ --enable-pod-security-policy
WARNING: Careful, the order here matters. If you enable the pod security policy controller before defining any policies, you have just commanded that nothing is allowed to be deployed.
- GKE recommended best practices
- Use container-optimized OS (COS)
- Enable automatic node upgrades (to run the latest available version of k8s)
- Use private clusters and master authorized networks (i.e., nodes do not have external IP addresses); see the sketch after this list
- Use encrypted Secrets for sensitive info
- Assign roles to groups, not users.
- Do not enable Kubernetes Dashboard
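A hedged sketch of what creating such a private cluster could look like; the cluster name, master CIDR, and authorized network below are placeholders, not values used in this lab:
$ gcloud container clusters create private-cluster \
    --zone us-central1-a \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.16/28 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/24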
Implementing Role-Based Access Control With Kubernetes Engine
- List the current namespaces in the cluster:
$ kubectl get namespaces NAME STATUS AGE default Active 77s kube-public Active 77s kube-system Active 77s
- Create a Namespace called "production":
$ cat << EOF > my-namespace.yaml kind: Namespace apiVersion: v1 metadata: name: production EOF $ kubectl create -f ./my-namespace.yaml $ kubectl get namespaces NAME STATUS AGE default Active 2m16s kube-public Active 2m16s kube-system Active 2m16s production Active 7s $ kubectl describe namespaces production Name: production Labels: <none> Annotations: <none> Status: Active No resource quota. No resource limits.
- Create a Resource in a Namespace
If you do not specify a namespace for a Pod, it will use the default namespace.
- Create a Pod that contains an Nginx container and specify which namespace to deploy it to:
$ cat << EOF > my-pod.yaml kind: Pod apiVersion: v1 metadata: name: nginx labels: name: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 EOF $ kubectl apply -f ./my-pod.yaml --namespace=production
Alternatively, we could have specified the namespace in the YAML file:
kind: Pod apiVersion: v1 metadata: name: nginx labels: name: nginx namespace: production spec: containers: - name: nginx image: nginx ports: - containerPort: 80
- Try using the following command to view your Pod:
$ kubectl get pods No resources found.
You will not see your Pod because kubectl checked the default namespace (by default) instead of our new namespace.
- Run the command again, but this time specify the new namespace:
$ kubectl get pods --namespace=production NAME READY STATUS RESTARTS AGE nginx 1/1 Running 0 54s
Now you should see your newly created Pod.
About Roles and RoleBindings
In this section, we will create a sample custom role, and then create a RoleBinding that grants Username 2 the editor role in the production namespace.
- Define a role called pod-reader that provides create, get, list, and watch permissions for Pod objects in the production namespace. Note that this role cannot delete Pods:
$ cat << EOF > pod-reader-role.yaml kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: production name: pod-reader rules: - apiGroups: [""] resources: ["pods"] verbs: ["create", "get", "list", "watch"] EOF
- Create a custom Role
Before you can create a Role, your account must have the permissions granted in the role being assigned. For cluster administrators, this can be easily accomplished by creating the following RoleBinding to grant your own user account the cluster-admin role.
To grant the Username 1 account cluster-admin privileges, run the following command, replacing [USERNAME_1_EMAIL] with the email address of the Username 1 account:
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user [USERNAME_1_EMAIL]
Now, create the role (defined above):
$ kubectl apply -f pod-reader-role.yaml $ kubectl get roles --namespace production NAME AGE pod-reader 8s
- Create a RoleBinding
The role is used to assign privileges, but by itself it does nothing. The role must be bound to a user and an object, which is done in the RoleBinding.
- Create a RoleBinding called username2-editor that binds the second lab user to the pod-reader role we created earlier. That role can create and view Pods but cannot delete them:
$ cat << EOF > username2-editor-binding.yaml kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: username2-editor namespace: production subjects: - kind: User name: [USERNAME_2_EMAIL] apiGroup: rbac.authorization.k8s.io roleRef: kind: Role name: pod-reader apiGroup: rbac.authorization.k8s.io EOF
This file contains a placeholder, [USERNAME_2_EMAIL], that we must replace with the email address of Username 2 before we apply it.
- Use sed to replace the placeholder in the file with the value of the environment variable:
sed -i "s/\[USERNAME_2_EMAIL\]/${USER2}/" username2-editor-binding.yaml
- Confirm that the correct change has been made:
$ cat username2-editor-binding.yaml subjects: - kind: User name: gcpstaginguser68_student@qwiklabs.net apiGroup: rbac.authorization.k8s.io
We will apply this RoleBinding later.
- Test Access
Now we will test whether Username 2 can create a Pod in the production namespace. This manifest deploys a simple Pod with a single Nginx container:
$ cat << EOF > production-pod.yaml kind: Pod apiVersion: v1 metadata: name: production-pod labels: name: production-pod namespace: production spec: containers: - name: production-pod image: nginx ports: - containerPort: 8080 EOF
Switch back to the Username 2 GCP Console tab and make sure you are working as Username 2.
In Cloud Shell for Username 2, type the following command to set the environment variable for the zone and cluster name.
$ export my_zone=us-central1-a $ export my_cluster=standard-cluster-1 $ source <(kubectl completion bash) $ gcloud container clusters get-credentials $my_cluster --zone $my_zone
Check if Username 2 can see the production namespace:
$ kubectl get namespaces NAME STATUS AGE default Active 11m kube-public Active 11m kube-system Active 11m production Active 9m8s
The production namespace appears at the bottom of the list, so we can continue.
- Create the resource in the namespace called production:
$ kubectl apply -f ./production-pod.yaml Error from server (Forbidden): error when creating "./production-pod.yaml": pods is forbidden: User "student-c2126354c28c@qwiklabs.net" cannot create resource "pods" in API group "" in the namespace "production"
The above command fails, indicating that Username 2 does not have the correct permission to create Pods. Username 2 only has the viewer permissions it started the lab with at this point because you have not bound any other role to that account yet. You will now change that.
Switch back to the Username 1 GCP Console tab and make sure you are working as Username 1.
In the Cloud Shell for Username 1, execute the following command to create the RoleBinding that grants Username 2 the pod-reader role that includes the permission to create Pods in the production namespace:
$ kubectl apply -f username2-editor-binding.yaml
In the Cloud Shell for Username 1, execute the following command to look for the new role binding:
$ kubectl get rolebinding No resources found.
The rolebinding does not appear because kubectl is showing the default namespace.
In the Cloud Shell for Username 1, execute the following command with the production namespace specified:
$ kubectl get rolebinding --namespace production NAME AGE username2-editor 49s
Switch back to the Username 2 GCP Console tab and make sure you are working as Username 2.
In the Cloud Shell for Username 2, execute the following command to create the resource in the namespace called production:
$ kubectl apply -f ./production-pod.yaml pod/production-pod created
This should now succeed as Username 2 now has the Create permission for Pods in the production namespace.
- Verify the Pod deployed properly in the production namespace:
$ kubectl get pods --namespace production NAME READY STATUS RESTARTS AGE nginx 1/1 Running 0 11m production-pod 1/1 Running 0 26s
Verify that only the specific RBAC permissions granted by the pod-reader role are in effect for Username 2 by attempting to delete the production-pod:
$ kubectl delete pod production-pod --namespace production Error from server (Forbidden): pods "production-pod" is forbidden: User "student-c2126354c28c@qwiklabs.net" cannot delete resource "pods" in API group "" in the namespace "production"
This fails because Username 2 does not have the delete permission for Pods.
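An optional way to check such permissions without actually creating or deleting anything is kubectl auth can-i. For example, Username 1 (who holds cluster-admin) could impersonate Username 2; replace the placeholder with the real email address. With the RoleBinding applied, the first command should print yes and the second no:
$ kubectl auth can-i create pods --namespace production --as [USERNAME_2_EMAIL]
$ kubectl auth can-i delete pods --namespace production --as [USERNAME_2_EMAIL]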
Pod security policies
Creating a Pod Security Policy
In this section, we will create a Pod Security Policy. This policy does not allow privileged Pods and restricts runAsUser to non-root accounts only, preventing the user of the Pod from escalating to root:
$ cat << EOF > restricted-psp.yaml kind: PodSecurityPolicy apiVersion: policy/v1beta1 metadata: name: restricted-psp spec: privileged: false # Don't allow privileged pods! seLinux: rule: RunAsAny supplementalGroups: rule: RunAsAny runAsUser: rule: MustRunAsNonRoot fsGroup: rule: RunAsAny volumes: - '*' EOF $ kubectl apply -f restricted-psp.yaml $ kubectl get podsecuritypolicy restricted-psp NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES restricted-psp false RunAsAny MustRunAsNonRoot RunAsAny RunAsAny false *
NOTE: This policy has no effect until a cluster role is created and bound to a user or service account with the permission to "use" the policy.
- Create a ClusterRole to a Pod Security Policy
- Create a ClusterRole that includes the resource we created in the last section (restricted-psp), and grant the subject the ability to use the restricted-psp resource. The subject is the user or service account that is bound to this role. We will bind an account to this role later to enable the use of the policy:
$ cat << EOF > psp-cluster-role.yaml kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: restricted-pods-role rules: - apiGroups: - extensions resources: - podsecuritypolicies resourceNames: - restricted-psp verbs: - use EOF
However, before we can create a Role, the account we use to create the role must already have the permissions granted in the role being assigned. For cluster administrators, this can be easily accomplished by creating the necessary RoleBinding to grant your own user account the cluster-admin role.
- To grant your user account cluster-admin privileges, run the following command, replacing [USERNAME_1_EMAIL] with the email address of the Username 1 account:
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user [USERNAME_1_EMAIL]
- Create the ClusterRole with access to the security policy:
$ kubectl apply -f psp-cluster-role.yaml $ kubectl get clusterrole restricted-pods-role NAME AGE restricted-pods-role 7s
The ClusterRole is ready, but it is not yet bound to a subject, and therefore is not yet active.
- Create a ClusterRoleBinding for the Pod Security Policy
The next step in the process involves binding the ClusterRole to a subject, a user or service account, that would be responsible for creating Pods in the target namespace. Typically these policies are assigned to service accounts because Pods are typically deployed by replicationControllers in Deployments rather than as one-off executions by a human user.
- Bind the restricted-pods-role (created in the last section) to the system:serviceaccounts group in the default Namespace:
$ cat << EOF > psp-cluster-role-binding.yaml kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: restricted-pod-rolebinding namespace: default roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: restricted-pods-role subjects: # Example: All service accounts in default namespace - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts EOF $ kubectl apply -f psp-cluster-role-binding.yaml
- Activate Security Policy
The PodSecurityPolicy controller must be enabled to affect the admission control of new Pods in the cluster.
Caution! If you do not define and authorize policies prior to enabling the PodSecurityPolicy controller, no Pods will be permitted to execute on the cluster.
- Enable the PodSecurityPolicy controller:
$ gcloud beta container clusters update $my_cluster --zone $my_zone --enable-pod-security-policy
This process takes several minutes to complete.
Note: The PodSecurityPolicy controller can be disabled by running this command:
$ gcloud beta container clusters update [CLUSTER_NAME] --no-enable-pod-security-policy
- Test the Pod Security Policy
The final step in the process involves testing to see if the Policy is active. This Pod attempts to start an nginx container in a privileged context:
$ cat << EOF > privileged-pod.yaml kind: Pod apiVersion: v1 metadata: name: privileged-pod spec: containers: - name: privileged-pod image: nginx securityContext: privileged: true EOF $ kubectl apply -f privileged-pod.yaml Error from server (Forbidden): error when creating "privileged-pod.yaml": pods "privileged-pod-1" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
You should not be able to deploy the privileged Pod.
Edit the privileged-pod.yaml manifest and remove the two lines at the bottom that invoke the privileged container security context. The file should now look as follows:
kind: Pod apiVersion: v1 metadata: name: privileged-pod spec: containers: - name: privileged-pod image: nginx
- Re-deploy the privileged Pod:
$ kubectl apply -f privileged-pod.yaml
The command now succeeds because the container no longer requires a privileged security context.
Rotate IP Address and Credentials
In this section, we will perform IP and credential rotation on our cluster. It is a security best practice to do so regularly to reduce credential lifetimes. While there are separate commands to rotate the serving IP and credentials, rotating credentials additionally rotates the IP as well.
- Update the GKE cluster to start the credential rotation process:
$ gcloud container clusters update $my_cluster --zone $my_zone --start-credential-rotation
After the command completes, the cluster will initiate the process to update each of the nodes. That process can take up to 15 minutes for your cluster. The process also automatically updates the kubeconfig entry for the current user.
The cluster master now temporarily serves the new IP address in addition to the original address.
Note: You must update the kubeconfig file on any other system that uses kubectl or the API to access the master before completing the rotation process, to avoid losing access.
- Complete the credential and IP rotation process:
$ gcloud container clusters update $my_cluster --zone $my_zone --complete-credential-rotation
This finalizes the rotation processes and removes the original cluster IP address.
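If you want to confirm that the cluster is now serving on a new address (an optional check, not part of the lab), you can compare the endpoint before and after rotation:
$ gcloud container clusters describe $my_cluster --zone $my_zone \
    --format="value(endpoint)"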
Stackdriver
- Metrics vs. Events
- Metrics: Represent system performance (e.g., CPU or disk usage). These can be values that change up or down over time (gauges) or values that only increase over time (counters).
- Returns numerical values
- Events: Represent actions, such as Pod restarts or scale-in/scale-out activity.
- Returns "success", "warning", or "failure".
Logging
Logging is often viewed as a passive form of systems monitoring.
Stackdriver stores logs for 30 days by default, and up to 50 GB is free.
After 30 days, Stackdriver purges your logs. If you wish to keep logs for longer than 30 days, export them to BigQuery or Cloud Storage for long-term storage.
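A hedged sketch of such an export; the sink name and bucket are placeholders, and you would still need to grant the sink's writer identity access to the destination bucket:
$ gcloud logging sinks create gke-logs-to-gcs \
    storage.googleapis.com/[BUCKET_NAME] \
    --log-filter='resource.type="k8s_container"'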
Node log files (stored in /var/log on each node) that are older than one day or that reach 100 MB are compressed and rotated (using the standard Linux logrotate utility). Only the 5 most recent log files are kept on the node. However, all logs are streamed to Stackdriver (in JSON format) and stored for 30 days.
GKE installs a logging agent on every node in a cluster. This streams the logs of every container/pod into Stackdriver, using FluentD (running as a DaemonSet). The configuration of FluentD is managed via ConfigMaps.
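You can see this agent on your own cluster by listing the DaemonSets in the kube-system namespace and looking for one whose name starts with fluentd (the exact name and labels vary by GKE version):
$ kubectl get daemonsets -n kube-system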
Monitoring
In GKE, monitoring is divided into 2 domains:
- Cluster-level:
- Master nodes (api-server, etcd, scheduler, controller-manager, cloud-controller-manager)
- Worker nodes
- Number of nodes, node utilization, pods/deployments running, errors and failures.
- Pods:
- container metrics
- application metrics
- system metrics
Probes
The best practice is to apply additional health checks to your (microservices) Pods:
- Liveness probes:
- Is the container running?
- If not, restart the container (if restartPolicy is set to Always or OnFailure)
- Readiness probes:
- Is the container ready to accept requests?
- If not, remove the Pod's IP address from all Service endpoints (by the endpoint controller)
These probes can be defined using three types of handlers:
- command;
- HTTP; and
- TCP
- Example of a command probe handler:
kind: Pod apiVersion: v1 metadata: name: demo-pod namespace: default spec: containers: - name: liveness livenessProbe: exec: command: - cat - /tmp/ready
If cat /tmp/ready returns an exit code of 0, the liveness probe succeeds and the container is considered healthy.
- Example of an HTTP probe handler:
[...] spec: containers: - name: liveness livenessProbe: httpGet: path: /healthz port: 8080
If the handler returns an HTTP status code of 200 or greater and less than 400, the probe succeeds; otherwise, the container is killed and restarted.
- Example of a TCP probe handler:
[...] spec: containers: - name: liveness livenessProbe: tcpSocket: port: 8080 # optional: initialDelaySeconds: 15 periodSeconds: 10 timeoutSeconds: 1 successThreshold: 1 failureThreshold: 3
If the connection is established, the container is considered healthy.
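All of the handler examples above are liveness probes; a readiness probe uses the same handlers under the readinessProbe field. A minimal sketch (the path and port are illustrative, not from this lab):
[...]
spec:
  containers:
  - name: readiness
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10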
Using Prometheus monitoring with Stackdriver
- Set up Prometheus monitoring with GKE and Stackdriver
When you configure Stackdriver Kubernetes Monitoring with Prometheus support, then services that expose metrics in the Prometheus data model can be exported from the cluster and made visible as external metrics in Stackdriver.
In this task, you create the Prometheus service-account and a cluster role called prometheus and then use those when you deploy the container for the Prometheus service to provide the permissions that Prometheus requires.
The file rbac-setup.yml that is included in the source repository is a Kubernetes manifest file that creates the Kubernetes service account and cluster role for you.
In the Cloud Shell, execute the following command to set up the Kubernetes service account and cluster role (both are named "prometheus") for the collector:
$ git clone https://github.com/GoogleCloudPlatformTraining/training-data-analyst $ cd ~/training-data-analyst/courses/ak8s/16_Logging/ $ kubectl apply -f rbac-setup.yml --as=admin --as-group=system:masters
A basic Prometheus configuration file called prometheus-service.yml has also been provided for you. This creates a Kubernetes Namespace called stackdriver, a Deployment that creates a single replica of the Stackdriver Prometheus container, and a ConfigMap that defines the configuration of the Prometheus collector. You modify values in the ConfigMap section of prometheus-service.yml so that it will monitor the GKE cluster you created for this lab.
- Replace the placeholder variable in the prometheus-service.yml file with your current project ID:
sed -i 's/prometheus-to-sd/'"${GOOGLE_CLOUD_PROJECT}"'/g'\ prometheus-service.yml
- Replace the placeholder variable in the prometheus-service.yml file with your current cluster name:
sed -i 's/prom-test-cluster-2/'"${my_cluster}"'/g'\ prometheus-service.yml
- Replace the placeholder variable in the prometheus-service.yml file with the GCP zone for the cluster:
sed -i 's/us-central1-a/'"${my_zone}"'/g' prometheus-service.yml
- Start the prometheus server using your modified configuration:
$ kubectl apply -f prometheus-service.yml
After configuring Prometheus, run the following command to validate the installation:
$ kubectl get deployment,service -n stackdriver NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.extensions/prometheus 1 1 1 1 8s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/prometheus ClusterIP 10.12.2.179 <none> 9090/TCP 8s
Using Liveness and Readiness probes for GKE Pods
In this section, we will deploy a liveness probe to detect applications that have transitioned from a running state to a broken state. Sometimes, applications are temporarily unable to serve traffic. For example, an application might need to load large data or configuration files during startup. In such cases, you do not want to kill the application, but you do not want to send it requests either. Kubernetes provides readiness probes to detect and mitigate these situations. A Pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services.
Readiness probes are configured similarly to liveness probes. The only difference is that you use the readinessProbe field instead of the livenessProbe field.
- Define and deploy a simple container called liveness running Busybox, plus a liveness probe that uses the cat command against the file /tmp/healthy within the container to test for liveness every 5 seconds. The startup script for the liveness container creates the file /tmp/healthy on startup and then deletes it 30 seconds later to simulate an outage that the liveness probe can detect:
$ cat << EOF > exec-liveness.yaml apiVersion: v1 kind: Pod metadata: labels: test: liveness name: liveness-exec spec: containers: - name: liveness image: k8s.gcr.io/busybox args: - /bin/sh - -c - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600 livenessProbe: exec: command: - cat - /tmp/healthy initialDelaySeconds: 5 periodSeconds: 5 EOF $ kubectl create -f exec-liveness.yaml
- Within 30 seconds, view the Pod events:
$ kubectl describe pod liveness-exec Type: Secret (a volume populated by a Secret) SecretName: default-token-wq52t Optional: false QoS Class: Burstable Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age ... Message ---- ------ ---- ... ------- Normal Scheduled 11s ... Successfully assigned liveness-e ... Normal Su...ntVolume 10s ... MountVolume.SetUp succeeded for ... Normal Pulling 10s ... pulling image "k8s.gcr.io/busybox" Normal Pulled 9s ... Successfully pulled image "k8s.g ... Normal Created 9s ... Created container Normal Started 9s ... Started container
The output indicates that no liveness probes have failed yet.
After 35 seconds, view the Pod events again:
$ kubectl describe pod liveness-exec
At the bottom of the output, there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated:
Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory ... Killing container with id docker://liveness:Container failed liveness probe.. Container will be killed and recreated.
$ kubectl get pod liveness-exec NAME READY STATUS RESTARTS AGE liveness-exec 1/1 Running 2 2m15s
Use Stackdriver Logging with GKE
In this section, we will deploy a GKE cluster and demo application using Terraform that creates sample Stackdriver logging events. You view the logs for GKE resources in Logging and then create and monitor a custom monitoring metric created using a Stackdriver log filter.
- Download Sample Logging Tool
We will download a Terraform configuration that creates a GKE cluster and then deploy a sample web application to that cluster to generate Logging events.
- Setup:
$ mkdir ~/terraform-demo $ cd ~/terraform-demo $ git clone https://github.com/GoogleCloudPlatformTraining/gke-logging-sinks-demo $ cd ~/terraform-demo/gke-logging-sinks-demo/
- Deploy The Sample Logging Tool
We will now deploy the GKE Stackdriver Logging demo using Terraform.
- Set your zone and region:
$ gcloud config set compute/region us-central1 $ gcloud config set compute/zone us-central1-a
- Instruct Terraform to run the sample logging tool:
$ make create
This process takes 2-3 minutes to complete. When complete you will see the message:
Apply complete! Resources: 8 added, 0 changed, 0 destroyed.
Using Cloud SQL with Kubernetes Engine
CloudSQL Proxy is set up as a sidecar container running alongside your app container in your Pod.
Overview
In this section, we will set up a Kubernetes Deployment of WordPress connected to Cloud SQL via the SQL Proxy. The SQL Proxy lets you interact with a Cloud SQL instance as if it were installed locally (localhost:3306), and even though you are on an unsecured port locally, the SQL Proxy makes sure you are secure over the wire to your Cloud SQL instance.
To complete this section, we will create several components:
- Create a GKE cluster;
- Create a Cloud SQL Instance to connect to, and a Service Account to provide permission for our Pods to access the Cloud SQL Instance; and, finally
- Deploy WordPress on your GKE cluster, with the SQL Proxy as a Sidecar, connected to our Cloud SQL Instance.
Objectives
In this section, we will perform the following tasks:
- Create a Cloud SQL instance and database for Wordpress
- Create credentials and Kubernetes Secrets for application authentication
- Configure a Deployment with a Wordpress image to use SQL Proxy
- Install SQL Proxy as a sidecar container and use it to provide SSL access to a CloudSQL instance external to the GKE Cluster
- Create a GKE cluster
- Setup:
$ export my_zone=us-central1-a $ export my_cluster=standard-cluster-1 $ source <(kubectl completion bash)
- Create a VPC-native Kubernetes cluster:
$ gcloud container clusters create $my_cluster \ --num-nodes 3 --enable-ip-alias --zone $my_zone
- Configure access to the cluster for kubectl:
$ gcloud container clusters get-credentials $my_cluster --zone $my_zone
- Get the repository:
$ git clone https://github.com/GoogleCloudPlatformTraining/training-data-analyst $ cd ~/training-data-analyst/courses/ak8s/18_Cloud_SQL/
- Create a Cloud SQL Instance
- Create the SQL instance:
$ gcloud sql instances create sql-instance --tier=db-n1-standard-2 --region=us-central1
- In the GCP Console, navigate to SQL.
- You should see sql-instance listed. Click on the name, and then click on the Users tab.
- You will have to wait a few minutes for the Cloud SQL instance to be provisioned. When you see the existing mysql.sys and root users listed, you can proceed to the next step.
- Click Create User Account and create an account, using sqluser as the username and sqlpassword as the password.
- Leave the Hostname option set to Allow any host (%), and click Create.
- Go back to Overview tab, still in your instance (sql-instance), and copy your Instance connection name.
- You will probably need to scroll down a bit to see it.
Create an environment variable to hold your Cloud SQL instance connection name, substituting the placeholder with the name you copied in the previous step.
export SQL_NAME=[Cloud SQL Instance Name]
Your command should look similar to the following:
export SQL_NAME=xtof-gcp-gcpd-e506927dfe49:us-central1:sql-instance
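If you prefer not to copy the value from the console, the same connection name can be read with gcloud (an optional shortcut, not part of the lab steps):
$ export SQL_NAME=$(gcloud sql instances describe sql-instance \
    --format="value(connectionName)")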
- Connect to your Cloud SQL instance.
$ gcloud sql connect sql-instance
When prompted for the root password, press Enter. The root SQL user password is blank by default.
The MySQL [(none)]> prompt appears, indicating that you are now connected to the Cloud SQL instance using the MySQL client.
- Create the database required for Wordpress (this is called wordpress by default):
MySQL [(none)]> create database wordpress; MySQL [(none)]> use wordpress; MySQL [wordpress]> show tables; # <- This will report Empty set as you have not created any tables yet. MySQL [wordpress]> exit;
- Prepare a Service Account with Permission to Access Cloud SQL
- To create a Service Account, in the GCP Console navigate to IAM & admin > Service accounts.
- Click + Create Service Account.
- Specify a Service account name of sql-access, then click Create.
- Click Select a role.
- Search for Cloud SQL, select Cloud SQL Client and click Continue.
- Click +Create Key, and make sure JSON key type is selected and click Create.
- This will create a public/private key pair, and download the private key file automatically to your computer. You will need this JSON file later.
- Click Close to close the notification dialogue.
- Locate the JSON credential file you downloaded and rename it to credentials.json.
- Click Done.
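As with the Pub/Sub service account earlier, these console steps could also be performed with gcloud. A hedged sketch, assuming GOOGLE_CLOUD_PROJECT holds your project ID:
$ gcloud iam service-accounts create sql-access --display-name="sql-access"
$ gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
    --member="serviceAccount:sql-access@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com" \
    --role="roles/cloudsql.client"
$ gcloud iam service-accounts keys create credentials.json \
    --iam-account="sql-access@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com"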
- Create Secrets
We will create two Kubernetes Secrets: one to provide the MySQL credentials and one to provide the Google credentials (the service account).
- Create a Secret for your MySQL credentials:
$ kubectl create secret generic sql-credentials \ --from-literal=username=sqluser\ --from-literal=password=sqlpassword
If you used a different username and password when creating the Cloud SQL user accounts substitute those here.
- Create a Secret for your GCP Service Account credentials:
$ kubectl create secret generic google-credentials\ --from-file=key.json=credentials.json
Note that the file is uploaded to the Secret using the name key.json. That is the file name that a container will see when this Secret is attached as a Secret Volume.
- Deploy the SQL Proxy agent as a sidecar container
A sample deployment manifest file called sql-proxy.yaml has been provided for you that deploys a demo Wordpress application container with the SQL Proxy agent as a sidecar container.
In the Wordpress container environment settings, WORDPRESS_DB_HOST is specified using the localhost IP address. The cloudsql-proxy sidecar container is configured to point to the Cloud SQL instance you created in the previous task. The database username and password are passed to the Wordpress container as secret keys, and the JSON credentials file is passed to the container using a Secret volume. A Service is also created to allow you to connect to the Wordpress instance from the internet.
kind: Deployment apiVersion: apps/v1 metadata: name: wordpress labels: app: wordpress spec: selector: matchLabels: app: wordpress template: metadata: labels: app: wordpress spec: containers: - name: web image: gcr.io/cloud-marketplace/google/wordpress ports: - containerPort: 80 env: - name: WORDPRESS_DB_HOST value: 127.0.0.1:3306 # These secrets are required to start the pod. # [START cloudsql_secrets] - name: WORDPRESS_DB_USER valueFrom: secretKeyRef: name: sql-credentials key: username - name: WORDPRESS_DB_PASSWORD valueFrom: secretKeyRef: name: sql-credentials key: password # [END cloudsql_secrets] # Change <INSTANCE_CONNECTION_NAME> here to include your GCP # project, the region of your Cloud SQL instance and the name # of your Cloud SQL instance. The format is # $PROJECT:$REGION:$INSTANCE # [START proxy_container] - name: cloudsql-proxy image: gcr.io/cloudsql-docker/gce-proxy:1.11 command: ["/cloud_sql_proxy", "-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306", "-credential_file=/secrets/cloudsql/key.json"] # [START cloudsql_security_context] securityContext: runAsUser: 2 # non-root user allowPrivilegeEscalation: false # [END cloudsql_security_context] volumeMounts: - name: cloudsql-instance-credentials mountPath: /secrets/cloudsql readOnly: true # [END proxy_container] # [START volumes] volumes: - name: cloudsql-instance-credentials secret: secretName: google-credentials # [END volumes] --- apiVersion: "v1" kind: "Service" metadata: name: "wordpress-service" namespace: "default" labels: app: "wordpress" spec: ports: - protocol: "TCP" port: 80 selector: app: "wordpress" type: "LoadBalancer" loadBalancerIP: ""
The important sections to note in this manifest are:
- In the Wordpress env section, the variable WORDPRESS_DB_HOST is set to 127.0.0.1:3306. This will connect to a container in the same Pod listening on port 3306. This is the port that the SQL Proxy listens on by default.
- In the Wordpress env section, the variables WORDPRESS_DB_USER and WORDPRESS_DB_PASSWORD are set using values stored in the sql-credentials Secret we created in the last section.
- In the cloudsql-proxy container section, the command switch that defines the SQL connection name, "-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306", contains a placeholder variable that is not configured using a ConfigMap or Secret and so must be updated directly in this example manifest to point to your Cloud SQL instance.
- In the cloudsql-proxy container section, the JSON credential file is mounted using the Secret volume in the directory /secrets/cloudsql/. The command switch "-credential_file=/secrets/cloudsql/key.json" points to the filename in that directory that we specified when creating the google-credentials Secret.
- The Service section at the end creates an external LoadBalancer called "wordpress-service" that allows the application to be accessed from external internet addresses.
Use sed to update the placeholder variable for the SQL connection name to the connection name of your Cloud SQL instance.
sed -i 's/<INSTANCE_CONNECTION_NAME>/'"${SQL_NAME}"'/g'\ sql-proxy.yaml
- Deploy the application:
$ kubectl apply -f sql-proxy.yaml $ kubectl get deployment wordpress NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE wordpress 1 1 1 1 30s
Repeat the above command until you see that one instance is available.
- List the services in your GKE cluster:
$ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.12.0.1 <none> 443/TCP 20m wordpress-service LoadBalancer 10.12.7.147 <pending> 80:30239/TCP 55s
The external LoadBalancer IP address for the wordpress-service is the address you use to connect to your Wordpress blog. Repeat the command until you get an external address.
- Connect to your Wordpress instance
- Open a new browser tab and connect to your Wordpress site using the external LoadBalancer IP address. This will start the initial Wordpress installation wizard.
- Select English (United States) and click Continue.
- Enter a sample name for the Site Title.
- Enter a Username and Password to administer the site.
- Enter an email address.
None of these values are particularly important; you will not need to use them.
- Click Install Wordpress.
After a few seconds, you will see the Success! notification. You can log in if you wish to explore the Wordpress admin interface, but it is not required for the lab.
The initialization process has created new database tables and data in the wordpress database on your Cloud SQL instance. You will now validate that these new database tables have been created using the SQL proxy container.
- Connect to your Cloud SQL instance:
$ gcloud sql connect sql-instance
When prompted for the root password, press Enter. The root SQL user password is blank by default. The MySQL [(none)]> prompt appears, indicating that you are now connected to the Cloud SQL instance using the MySQL client.
MySQL [(none)]> use wordpress; MySQL [wordpress]> show tables;
This will now show a number of new database tables that were created when Wordpress was initialized, demonstrating that the sidecar SQL Proxy container was configured correctly.
MySQL [wordpress]> show tables; +-----------------------+ | Tables_in_wordpress | +-----------------------+ | wp_commentmeta | | wp_comments | | wp_links | | wp_options | | wp_postmeta | | wp_posts | | wp_term_relationships | | wp_term_taxonomy | | wp_termmeta | | wp_terms | | wp_usermeta | | wp_users | +-----------------------+ 12 rows in set (0.04 sec)
- List all of the Wordpress user table entries:
MySQL [wordpress]> select * from wp_users;
This will list the database record for the Wordpress admin account showing the email you chose when initializing Wordpress.
- Exit the MySQL client:
MySQL [wordpress]> exit;