'''Google Kubernetes Engine''' (GKE) is a managed, production-ready environment for deploying containerized applications in [[Kubernetes]].
+ | |||
+ | ==Deployments== | ||
+ | |||
+ | A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, <code>.spec.template</code>) is changed, for example, if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout. | ||
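
As a quick illustration (assuming a Deployment named nginx-deployment already exists in the current namespace), changing the container image modifies the Pod template and creates a new revision, while scaling only changes <code>.spec.replicas</code> and does not:
<pre>
# Changes .spec.template, so a new rollout (revision) is created:
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1

# Only changes .spec.replicas, so no new revision appears in the history:
$ kubectl scale deployment/nginx-deployment --replicas=5
$ kubectl rollout history deployment/nginx-deployment
</pre>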
+ | |||
+ | <!-- | ||
+ | git clone https://github.com/GoogleCloudPlatformTraining/training-data-analyst | ||
+ | cd ~/training-data-analyst/courses/ak8s/06_Deployments/ | ||
+ | --> | ||
+ | |||
; Trigger a deployment rollout

* To update the version of nginx in the deployment, execute the following command:
<pre>
$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record
$ kubectl rollout status deployment.v1.apps/nginx-deployment
$ kubectl rollout history deployment nginx-deployment
</pre>
+ | |||
+ | ; Trigger a deployment rollback | ||
+ | |||
+ | To roll back an object's rollout, you can use the <code>kubectl rollout undo</code> command. | ||
+ | |||
+ | To roll back to the previous version of the nginx deployment, execute the following command: | ||
+ | <pre> | ||
+ | $ kubectl rollout undo deployments nginx-deployment | ||
+ | </pre> | ||
+ | |||
+ | * View the updated rollout history of the deployment. | ||
+ | <pre> | ||
+ | $ kubectl rollout history deployment nginx-deployment | ||
+ | |||
+ | deployments "nginx-deployment" | ||
+ | REVISION CHANGE-CAUSE | ||
+ | 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true | ||
+ | 3 <none> | ||
+ | </pre> | ||
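
The CHANGE-CAUSE column is populated from the <code>kubernetes.io/change-cause</code> annotation, which <code>--record</code> sets for you. If you prefer, you can set it yourself with an arbitrary message, for example:
<pre>
$ kubectl annotate deployment nginx-deployment kubernetes.io/change-cause="update nginx to 1.9.1"
$ kubectl rollout history deployment nginx-deployment
</pre>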
+ | |||
+ | * View the details of the latest deployment revision: | ||
+ | <pre> | ||
+ | $ kubectl rollout history deployment/nginx-deployment --revision=3 | ||
+ | </pre> | ||
+ | |||
+ | The output should look like the example. Your output might not be an exact match but it will show that the current revision has rolled back to nginx:1.7.9. | ||
+ | |||
+ | <pre> | ||
+ | deployments "nginx-deployment" with revision #3 | ||
+ | Pod Template: | ||
+ | Labels: app=nginx | ||
+ | pod-template-hash=3123191453 | ||
+ | Containers: | ||
+ | nginx: | ||
+ | Image: nginx:1.7.9 | ||
+ | Port: 80/TCP | ||
+ | Host Port: 0/TCP | ||
+ | Environment: <none> | ||
+ | Mounts: <none> | ||
+ | Volumes: <none> | ||
+ | </pre> | ||
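
You can also roll back to a specific revision from the history, rather than only the previous one, by passing its number explicitly:
<pre>
$ kubectl rollout undo deployment nginx-deployment --to-revision=2
</pre>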
+ | |||
+ | ===Perform a canary deployment=== | ||
+ | |||
+ | A canary deployment is a separate deployment used to test a new version of your application. A single service targets both the canary and the normal deployments. And it can direct a subset of users to the canary version to mitigate the risk of new releases. The manifest file nginx-canary.yaml that is provided for you deploys a single pod running a newer version of nginx than your main deployment. In this task, you create a canary deployment using this new deployment file. | ||
<pre>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        track: canary
        Version: 1.9.1
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        ports:
        - containerPort: 80
</pre>
+ | |||
+ | The manifest for the nginx Service you deployed in the previous task uses a label selector to target the Pods with the app: nginx label. Both the normal deployment and this new canary deployment have the app: nginx label. Inbound connections will be distributed by the service to both the normal and canary deployment Pods. The canary deployment has fewer replicas (Pods) than the normal deployment, and thus it is available to fewer users than the normal deployment. | ||
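
To see which Pods carry which labels, and therefore which Pods the Service can send traffic to, you can filter by the labels used in the manifests above:
<pre>
# All Pods the Service selector (app=nginx) matches:
$ kubectl get pods -l app=nginx

# Only the canary Pods:
$ kubectl get pods -l app=nginx,track=canary
</pre>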
+ | |||
+ | * Create the canary deployment based on the configuration file. | ||
+ | <pre> | ||
+ | $ kubectl apply -f nginx-canary.yaml | ||
+ | </pre> | ||
+ | |||
+ | When the deployment is complete, verify that both the nginx and the nginx-canary deployments are present. | ||
+ | <pre> | ||
+ | $ kubectl get deployments | ||
+ | </pre> | ||
+ | |||
+ | Switch back to the browser tab that is connected to the external LoadBalancer service ip and refresh the page. You should continue to see the standard "Welcome to nginx" page. | ||
+ | |||
+ | Switch back to the Cloud Shell and scale down the primary deployment to 0 replicas. | ||
+ | <pre> | ||
+ | $ kubectl scale --replicas=0 deployment nginx-deployment | ||
+ | </pre> | ||
+ | |||
+ | Verify that the only running replica is now the Canary deployment: | ||
+ | <pre> | ||
+ | $ kubectl get deployments | ||
+ | </pre> | ||
+ | |||
+ | Switch back to the browser tab that is connected to the external LoadBalancer service ip and refresh the page. You should continue to see the standard "Welcome to nginx" page showing that the Service is automatically balancing traffic to the canary deployment. | ||
+ | |||
+ | Note: Session affinity | ||
+ | The Service configuration used in the lab does not ensure that all requests from a single client will always connect to the same Pod. Each request is treated separately and can connect to either the normal nginx deployment or to the nginx-canary deployment. This potential to switch between different versions may cause problems if there are significant changes in functionality in the canary release. To prevent this you can set the sessionAffinity field to ClientIP in the specification of the service if you need a client's first request to determine which Pod will be used for all subsequent connections. | ||
+ | |||
+ | For example: | ||
+ | <pre> | ||
+ | apiVersion: v1 | ||
+ | kind: Service | ||
+ | metadata: | ||
+ | name: nginx | ||
+ | spec: | ||
+ | type: LoadBalancer | ||
+ | sessionAffinity: ClientIP | ||
+ | selector: | ||
+ | app: nginx | ||
+ | ports: | ||
+ | - protocol: TCP | ||
+ | port: 60000 | ||
+ | targetPort: 80 | ||
+ | </pre> | ||
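
If you also need to control how long the client-IP affinity lasts, the Service spec accepts a timeout as well. A minimal sketch (10800 seconds, three hours, is the Kubernetes default):
<pre>
spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # how long to stick a client IP to the same Pod
</pre>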
+ | |||
+ | ==Jobs and CronJobs== | ||
+ | |||
+ | * Simple example: | ||
+ | $ kubectl run pi --image perl --restart Never -- perl -Mbignum bpi -wle 'print bpi(2000)' | ||
+ | |||
; Parallel Job with fixed completion count:
<pre>
$ cat << EOF > my-app-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: my-app-job
spec:
  completions: 3
  parallelism: 2
  template:
    spec:
      [...]
EOF
</pre>

* Optional Job spec fields that bound retries and total runtime:
<pre>
spec:
  backoffLimit: 4
  activeDeadlineSeconds: 300
</pre>
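
Putting those fields together, a complete Job manifest might look like the following sketch; the busybox container and its command are placeholders, not part of the lab files:
<pre>
apiVersion: batch/v1
kind: Job
metadata:
  name: my-app-job
spec:
  completions: 3               # the Job is complete after 3 Pods succeed
  parallelism: 2               # run at most 2 Pods at the same time
  backoffLimit: 4              # mark the Job failed after 4 retried Pods fail
  activeDeadlineSeconds: 300   # terminate the Job if it runs longer than 300s
  template:
    spec:
      containers:
      - name: my-app
        image: busybox
        command: ["sh", "-c", "echo processing an item; sleep 5"]
      restartPolicy: Never
</pre>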
+ | |||
+ | ; Example#1 | ||
+ | <!-- | ||
+ | Change to the directory that contains the sample files for this lab. | ||
+ | cd ~/training-data-analyst/courses/ak8s/07_Jobs_CronJobs | ||
+ | --> | ||
+ | ; Create and run a Job | ||
+ | |||
+ | You will create a job using a sample deployment manifest called example-job.yaml that has been provided for you. This Job computes the value of Pi to 2,000 places and then prints the result. | ||
<pre>
apiVersion: batch/v1
kind: Job
metadata:
  # Unique key of the Job instance
  name: example-job
spec:
  template:
    metadata:
      name: example-job
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl"]
        args: ["-Mbignum=bpi", "-wle", "print bpi(2000)"]
      # Do not restart containers after they exit
      restartPolicy: Never
</pre>
+ | |||
+ | To create a Job from this file, execute the following command: | ||
+ | <pre> | ||
+ | $ kubectl apply -f example-job.yaml | ||
+ | $ kubectl describe job | ||
+ | Host Port: <none> | ||
+ | Command: | ||
+ | perl | ||
+ | Args: | ||
+ | -Mbignum=bpi | ||
+ | -wle | ||
+ | print bpi(2000) | ||
+ | Environment: <none> | ||
+ | Mounts: <none> | ||
+ | Volumes: <none> | ||
+ | Events: | ||
+ | Type Reason Age From Message | ||
+ | ---- ------ ---- ---- ------- | ||
+ | Normal SuccessfulCreate 17s job-controller Created pod: example-job-gtf7w | ||
+ | |||
+ | $ kubectl get pods | ||
+ | NAME READY STATUS RESTARTS AGE | ||
+ | example-job-gtf7w 0/1 Completed 0 43s | ||
+ | </pre> | ||
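
Optionally, instead of polling <code>kubectl get pods</code>, you can block until the Job finishes (the timeout value here is arbitrary):
<pre>
$ kubectl wait --for=condition=complete job/example-job --timeout=120s
</pre>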
+ | |||
+ | ; Clean up and delete the Job | ||
+ | |||
+ | When a Job completes, the Job stops creating Pods. The Job API object is not removed when it completes, which allows you to view its status. Pods created by the Job are not deleted, but they are terminated. Retention of the Pods allows you to view their logs and to interact with them. | ||
+ | |||
+ | To get a list of the Jobs in the cluster, execute the following command: | ||
+ | <pre> | ||
+ | $ kubectl get jobs | ||
+ | |||
+ | NAME DESIRED SUCCESSFUL AGE | ||
+ | example-job 1 1 2m | ||
+ | </pre> | ||
+ | |||
+ | To retrieve the log file from the Pod that ran the Job execute the following command. You must replace [POD-NAME] with the node name you recorded in the last task | ||
+ | <pre> | ||
+ | $ kubectl logs [POD-NAME] | ||
+ | 3.141592653589793238... | ||
+ | </pre> | ||
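
If you did not record the Pod name, you can look it up (or read the logs directly) using the job-name label that the Job controller adds to the Pods it creates:
<pre>
$ kubectl get pods -l job-name=example-job
$ kubectl logs -l job-name=example-job
</pre>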
+ | |||
+ | The output will show that the job wrote the first two thousand digits of pi to the Pod log. | ||
+ | |||
+ | To delete the Job, execute the following command: | ||
+ | <pre> | ||
+ | $ kubectl delete job example-job | ||
+ | </pre> | ||
+ | |||
+ | If you try to query the logs again the command will fail as the Pod can no longer be found. | ||
+ | |||

===Define and deploy a CronJob manifest===

You can create CronJobs to perform finite, time-related tasks that run once or repeatedly at a time that you specify.

In this section, we will create and run a CronJob, and then clean up and delete the Job.

; Create and run a CronJob

The CronJob manifest file example-cronjob.yaml has been provided for you. This CronJob deploys a new container every minute that prints the time, date, and "Hello, World!".
<pre>
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo "Hello, World!"
          restartPolicy: OnFailure
</pre>
+ | |||
+ | <block> | ||
+ | Note | ||
+ | |||
+ | CronJobs use the required schedule field, which accepts a time in the Unix standard crontab format. All CronJob times are in UTC: | ||
+ | |||
+ | * The first value indicates the minute (between 0 and 59). | ||
+ | * The second value indicates the hour (between 0 and 23). | ||
+ | * The third value indicates the day of the month (between 1 and 31). | ||
+ | * The fourth value indicates the month (between 1 and 12). | ||
+ | * The fifth value indicates the day of the week (between 0 and 6). | ||
+ | |||
+ | The schedule field also accepts * and ? as wildcard values. Combining / with ranges specifies that the task should repeat at a regular interval. In the example, */1 * * * * indicates that the task should repeat every minute of every day of every month. | ||
+ | </block> | ||
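
For reference, a few other schedule values (illustrative only; all times are UTC):
<pre>
*/1 * * * *   # every minute (used in this lab)
0 2 * * *     # every day at 02:00
30 9 * * 1    # every Monday at 09:30
0 0 1 * *     # midnight on the first day of every month
</pre>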
+ | |||
+ | To create a Job from this file, execute the following command: | ||
+ | <pre> | ||
+ | $ kubectl apply -f example-cronjob.yaml | ||
+ | <pre> | ||
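
To confirm that the CronJob itself was created, and to see its schedule and when it last scheduled a Job, list the CronJob resources:
<pre>
$ kubectl get cronjobs
</pre>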
+ | |||
+ | To check the status of this Job, execute the following command, where [job_name] is the name of your job: | ||
+ | <pre> | ||
+ | $ kubectl describe job [job_name] | ||
+ | |||
+ | Image: busybox | ||
+ | Port: <none> | ||
+ | Host Port: <none> | ||
+ | Args: | ||
+ | /bin/sh | ||
+ | -c | ||
+ | date; echo "Hello, World!" | ||
+ | Environment: <none> | ||
+ | Mounts: <none> | ||
+ | Volumes: <none> | ||
+ | Events: | ||
+ | Type Reason Age From Message | ||
+ | ---- ------ ---- ---- ------- | ||
+ | Normal SuccessfulCreate 35s job-controller Created pod: hello-1565824980-sgdnn | ||
+ | </pre> | ||
+ | |||
+ | View the output of the Job by querying the logs for the Pod. Replace [POD-NAME] with the name of the Pod you recorded in the last step. | ||
+ | <pre> | ||
+ | $ kubectl logs <pod-name> | ||
+ | |||
+ | Wed Aug 14 23:23:03 UTC 2019 | ||
+ | Hello, World! | ||
+ | </pre> | ||
+ | |||
+ | To view all job resources in your cluster, including all of the Pods created by the CronJob which have completed, execute the following command: | ||
+ | <pre> | ||
+ | $ kubectl get jobs | ||
+ | |||
+ | NAME COMPLETIONS DURATION AGE | ||
+ | hello-1565824980 1/1 2s 2m29s | ||
+ | hello-1565825040 1/1 2s 89s | ||
+ | hello-1565825100 1/1 2s 29s | ||
+ | </pre> | ||
+ | |||
+ | Your job names might be different from the example output. By default, Kubernetes sets the Job history limits so that only the last three successful and last failed job are retained so this list will only contain the most recent three of four jobs. | ||
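
These retention limits are configurable on the CronJob spec; the values shown below are the Kubernetes defaults:
<pre>
spec:
  successfulJobsHistoryLimit: 3   # keep the last 3 successful Jobs
  failedJobsHistoryLimit: 1       # keep the last failed Job
</pre>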
+ | |||
+ | ; Clean up and delete the Job | ||
+ | |||
+ | In order to stop the CronJob and clean up the Jobs associated with it you must delete the CronJob. | ||
+ | |||
+ | To delete all these jobs, execute the following command: | ||
+ | <pre> | ||
+ | $ kubectl delete cronjob hello | ||
+ | </pre> | ||
+ | |||
+ | To verify that the jobs were deleted, execute the following command: | ||
+ | <pre> | ||
+ | $ kubectl get jobs | ||
+ | No resources found. | ||
+ | </pre> | ||
+ | All the Jobs were removed. | ||
+ | |||
==External links== | ==External links== |