Kubernetes (k8s) is an open source container cluster manager. Kubernetes' primary goal is to provide a platform for automating deployment, scaling, and operations of application containers across a cluster of hosts. Kubernetes was released by Google in July 2015.
Contents
- 1 Design overview
- 2 Components
- 3 Setup a Kubernetes cluster
- 4 Working with our Kubernetes cluster
- 4.1 Create and deploy pod definitions
- 4.2 Tags, labels, and selectors
- 4.3 Deployments
- 4.4 Multi-Pod (container) replication controller
- 4.5 Create and deploy service definitions
- 4.6 Creating temporary Pods at the CLI
- 4.7 Interacting with Pod containers
- 4.8 Logs
- 4.9 Autoscaling and scaling Pods
- 4.10 Failure and recovery
- 5 External links
Design overview
Kubernetes is built through the definition of a set of components (building blocks or "primitives") which, when used collectively, provide a method for the deployment, maintenance, and scalability of container-based application clusters.
These "primitives" are designed to be loosely coupled (i.e., where little to no knowledge of the other component definitions is needed to use) as well as easily extensible through an API. Both the internal components of Kubernetes as well as the extensions and containers make use of this API.
Components
The building blocks of Kubernetes are the following:
- Nodes (minions)
- You can think of these as "container clients". These are the individual hosts (physical or virtual) that have Docker installed and that host the various containers within your managed cluster.
- Each node will run etcd (a key-value store used by Kubernetes for exchanging messages and reporting on cluster status) as well as the Kubernetes Proxy.
- Pods
- A pod consists of one or more containers. Those containers are guaranteed (by the cluster controller) to be located on the same host machine in order to facilitate sharing of resources.
- Pods are assigned unique IPs within each cluster. These allow an application to use ports without having to worry about conflicting port utilization.
- Pods can contain definitions of disk volumes or shares, and then provide access from those to all the members (containers) within the pod.
- Finally, pod management is done through the API or delegated to a controller.
- Labels
- Clients can attach "key-value pairs" to any object in the system (like Pods or Nodes). These become the labels used to identify objects during configuration and management.
- Selectors
- Label Selectors represent queries that are made against those labels. They resolve to the corresponding matching objects.
- These two items are the primary way that grouping is done in Kubernetes, and they determine which components a given operation applies to (see the example after this list).
- Controllers
- These are used in the management of your cluster. Controllers are the mechanism by which your desired configuration state is enforced.
- Controllers manage a set of pods and, depending on the desired configuration state, may engage other controllers to handle replication and scaling (Replication Controller) of X number of containers and pods across the cluster. A controller is also responsible for replacing any container in a pod that fails (based on the desired state of the cluster).
- Other controllers that can be engaged include a DaemonSet Controller (enforces a 1-to-1 ratio of pods to minions) and a Job Controller (that runs pods to "completion", such as in batch jobs).
- The set of pods that a controller manages is determined by the label selectors that are part of its definition.
- Services
- A service defines a logical set of pods (selected by a label selector) together with a policy for accessing them.
- This is so pods can "work together", like in a multi-tiered application configuration. Each set of pods that defines and implements a service (like MySQL or Apache) is identified by the label selector.
- Kubernetes can then provide service discovery and handle routing with the static IP for each pod as well as load balancing (round robin based) connections to that service among the pods that match the label selector indicated.
- By default, a service is only exposed inside the cluster, but it can also be exposed outside the cluster, as needed.
- Control Plane
- The set of master services (API server, scheduler, controller manager, and etcd) that maintains the desired state of the cluster.
- API
- The REST API through which the internal components, clients, and extensions interact with the cluster.
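To make the relationship between labels and selectors concrete, here is a minimal sketch (the pod name, label value, and image are illustrative only): a pod carries a label, and a label selector is then used to query for it.
$ cat << EOF > labeled-pod.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  labels:
    app: my-nginx          # a key-value pair attached to this object
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
EOF
$ kubectl create -f labeled-pod.yml
$ kubectl get pods -l app=my-nginx     # label selector: only objects carrying app=my-nginx
$ kubectl delete pod my-nginx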
Setup a Kubernetes cluster
In this section, I will show you how to set up a Kubernetes cluster with etcd and Docker. The cluster will consist of 1 master host and 3 minions (aka nodes).
Setup VMs
For this demo, I will be creating 4 VMs via Vagrant (with VirtualBox).
- Create Vagrant demo environment:
$ mkdir $HOME/dev/kubernetes && cd $_
- Create Vagrantfile with the following contents:
# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'yaml'

VAGRANTFILE_API_VERSION = "2"

$common_script = <<COMMON_SCRIPT
# Set verbose
set -v
# Set exit on error
set -e
echo -e "$(date) [INFO] Starting modified Vagrant..."
sudo yum update -y
# Timestamp provision
date > /etc/vagrant_provisioned_at
COMMON_SCRIPT

unless defined? CONFIG
  configuration_file = File.join(File.dirname(__FILE__), 'vagrant_config.yml')
  CONFIG = YAML.load(File.open(configuration_file, File::RDONLY).read)
end

CONFIG['box'] = {} unless CONFIG.key?('box')

def modifyvm_network(node)
  node.vm.provider "virtualbox" do |vbox|
    vbox.customize ["modifyvm", :id, "--nicpromisc1", "allow-all"]
    #vbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    vbox.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
  end
end

def modifyvm_resources(node, memory, cpus)
  node.vm.provider "virtualbox" do |vbox|
    vbox.customize ["modifyvm", :id, "--memory", memory]
    vbox.customize ["modifyvm", :id, "--cpus", cpus]
  end
end

## START: Actual Vagrant process
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = CONFIG['box']['name']

  # Uncomment the following line if you wish to be able to pass files from
  # your local filesystem directly into the vagrant VM:
  #config.vm.synced_folder "data", "/vagrant"

  ## VM: k8s master #############################################################
  config.vm.define "master" do |node|
    node.vm.hostname = "k8s.master.dev"
    node.vm.provision "shell", inline: $common_script
    #node.vm.network "forwarded_port", guest: 80, host: 8080
    node.vm.network "private_network", ip: CONFIG['host_groups']['master']
    # Uncomment the following if you wish to define CPU/memory:
    #node.vm.provider "virtualbox" do |vbox|
    #  vbox.customize ["modifyvm", :id, "--memory", "4096"]
    #  vbox.customize ["modifyvm", :id, "--cpus", "2"]
    #end
    #modifyvm_resources(node, "4096", "2")
  end

  ## VM: k8s minion1 ############################################################
  config.vm.define "minion1" do |node|
    node.vm.hostname = "k8s.minion1.dev"
    node.vm.provision "shell", inline: $common_script
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion1']
  end

  ## VM: k8s minion2 ############################################################
  config.vm.define "minion2" do |node|
    node.vm.hostname = "k8s.minion2.dev"
    node.vm.provision "shell", inline: $common_script
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion2']
  end

  ## VM: k8s minion3 ############################################################
  config.vm.define "minion3" do |node|
    node.vm.hostname = "k8s.minion3.dev"
    node.vm.provision "shell", inline: $common_script
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion3']
  end
  ###############################################################################
end
The above Vagrantfile uses the following configuration file:
$ cat vagrant_config.yml
---
box:
  name: centos/7
  storage_controller: 'SATA Controller'
debug: false
development: false
network:
  dns1: 8.8.8.8
  dns2: 8.8.4.4
  internal:
    network: 192.168.200.0/24
  external:
    start: 192.168.100.100
    end: 192.168.100.200
    network: 192.168.100.0/24
    bridge: wlan0
    netmask: 255.255.255.0
    broadcast: 192.168.100.255
host_groups:
  master: 192.168.200.100
  minion1: 192.168.200.101
  minion2: 192.168.200.102
  minion3: 192.168.200.103
- In the Vagrant Kubernetes directory (i.e.,
$HOME/dev/kubernetes
), run the following command:
$ vagrant up
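Once provisioning finishes, you can optionally confirm that all four VMs are up (the exact output layout varies by Vagrant version; the states below are what you should expect):
$ vagrant status
master    running (virtualbox)
minion1   running (virtualbox)
minion2   running (virtualbox)
minion3   running (virtualbox)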
Setup hosts
Note: Run the following commands/steps on all hosts (master and minions).
- Log into the k8s master host:
$ vagrant ssh master
- Add all of the hosts in the Kubernetes cluster to /etc/hosts:
$ cat << EOF >> /etc/hosts
192.168.200.100 k8s.master.dev
192.168.200.101 k8s.minion1.dev
192.168.200.102 k8s.minion2.dev
192.168.200.103 k8s.minion3.dev
EOF
- Install, enable, and start NTP:
$ yum install -y ntp
$ systemctl enable ntpd && systemctl start ntpd
$ timedatectl
- Disable any firewall rules (for now; we will add the rules back later):
$ systemctl stop firewalld && systemctl disable firewalld
$ systemctl stop iptables
- Disable SELinux (for now; we will turn it on again later):
$ setenforce 0
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/sysconfig/selinux
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
$ sestatus
- Add the Docker repo and update yum:
$ cat << EOF > /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
EOF
$ yum update
- Install Docker, Kubernetes, and etcd:
$ yum install -y --enablerepo=virt7-docker-common-release kubernetes docker etcd
Install and configure master controller
Note: Run the following commands on only the master host.
- Edit
/etc/kubernetes/config
and add (or make changes to) the following lines:
KUBE_MASTER="--master=http://k8s.master.dev:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://k8s.master.dev:2379"
- Edit
/etc/etcd/etcd.conf
and add (or make changes to) the following lines:
[member]
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
- Edit
/etc/kubernetes/apiserver
and add (or make changes to) the following lines:
# The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
- Enable and start the following etcd and Kubernetes services:
$ for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICE
    systemctl enable $SERVICE
    systemctl status $SERVICE
done
- Check on the status of the above services (the following command should report 4 running services):
$ systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler | grep "(running)" | wc -l # => 4
- Check on the status of the Kubernetes API server:
$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080

$ curl http://localhost:8080/version
#~OR~
$ curl http://k8s.master.dev:8080/version
{ "major": "1", "minor": "2", "gitVersion": "v1.2.0", "gitCommit": "ec7364b6e3b155e78086018aa644057edbe196e5", "gitTreeState": "clean" }
- Get a list of Kubernetes API paths:
$ curl http://k8s.master.dev:8080/paths
{ "paths": [ "/api", "/api/v1", "/apis", "/apis/autoscaling", "/apis/autoscaling/v1", "/apis/batch", "/apis/batch/v1", "/apis/extensions", "/apis/extensions/v1beta1", "/healthz", "/healthz/ping", "/logs/", "/metrics", "/resetMetrics", "/swagger-ui/", "/swaggerapi/", "/ui/", "/version" ] }
- List all available paths (key-value stores) known to etcd:
$ etcdctl ls / --recursive
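As an optional sanity check that etcd is accepting writes (this sketch assumes the etcd v2 API shipped with this release; the key name is arbitrary), set a test key, read it back, and remove it:
$ etcdctl set /sanity-check "ok"
ok
$ etcdctl get /sanity-check
ok
$ etcdctl rm /sanity-check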
The master controller in a Kubernetes cluster must have the following services running to function as the master host in the cluster:
- ntpd
- etcd
- kube-controller-manager
- kube-apiserver
- kube-scheduler
Note: The Docker daemon should not be running on the master host.
Install and configure the minions
Note: Run the following commands/steps on all minion hosts.
- Log into the k8s minion hosts:
$ vagrant ssh minion1 # do the same for minion2 and minion3
- Edit
/etc/kubernetes/config
and add (or make changes to) the following lines:
KUBE_MASTER="--master=http://k8s.master.dev:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://k8s.master.dev:2379"
- Edit
/etc/kubernetes/kubelet
and add (or make changes to) the following lines:
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s.minion1.dev" # ***CHANGE TO CORRECT MINION HOSTNAME***

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s.master.dev:8080"

# pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
- Enable and start the following services:
$ for SERVICE in kube-proxy kubelet docker; do
    systemctl restart $SERVICE
    systemctl enable $SERVICE
    systemctl status $SERVICE
done
- Test that Docker is running and can start containers:
$ docker info
$ docker pull hello-world
$ docker run hello-world
Each minion in a Kubernetes cluster must have the following services running to function as a member of the cluster (i.e., a "Ready" node):
- ntpd
- kubelet
- kube-proxy
- docker
Kubectl: Exploring our environment
Note: Run all of the following commands on the master host.
- Get a list of nodes with
kubectl
:
$ kubectl get nodes
NAME              STATUS    AGE
k8s.minion1.dev   Ready     20m
k8s.minion2.dev   Ready     12m
k8s.minion3.dev   Ready     12m
- Describe nodes with
kubectl
:
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
$ kubectl get nodes -o jsonpath='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' | tr ';' "\n"
k8s.minion1.dev:OutOfDisk=False
Ready=True
k8s.minion2.dev:OutOfDisk=False
Ready=True
k8s.minion3.dev:OutOfDisk=False
Ready=True
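For a more detailed view of a single node (capacity, conditions, and the pods scheduled on it), kubectl describe also works; a brief sketch (output omitted here, and values will differ in your environment):
$ kubectl describe node k8s.minion1.dev
$ kubectl describe nodes | grep -i -A 4 capacity   # quick look at CPU/memory/pod capacity per node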
- Get the man page for
kubectl
:
$ man kubectl-get
Working with our Kubernetes cluster
Note: The following section will be working from within the Kubernetes cluster we created above.
Create and deploy pod definitions
- Turn off nodes 2 and 3 (we will bring them back later), so that only minion1 can accept pods:
minion{2,3}$ systemctl stop kubelet kube-proxy
master$ kubectl get nodes
NAME              STATUS     AGE
k8s.minion1.dev   Ready      1h
k8s.minion2.dev   NotReady   37m
k8s.minion3.dev   NotReady   39m
- Check for any k8s Pods (there should be none):
master$ kubectl get pods
- Create a builds directory for our Pods:
master$ mkdir builds && cd $_
- Create a Pod running Nginx inside a Docker container:
master$ cat << EOF >nginx.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
EOF
- Create k8s Pod:
master$ kubectl create -f nginx.yml
- Check on Pod creation status:
master$ kubectl get pods
NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          2s
master$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          3m
minion1$ docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED         STATUS         PORTS   NAMES
a718c6c0355d   nginx:1.7.9   "nginx -g 'daemon off"   3 minutes ago   Up 3 minutes           k8s_nginx.4580025_nginx_default_699e...
master$ kubectl describe pod nginx
master$ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1
busybox$ wget -qO- 172.17.0.2
master$ kubectl delete pod busybox
master$ kubectl delete pod nginx
- Port forwarding:
master$ kubectl create -f nginx.yml
master$ kubectl port-forward nginx :80 &
I1020 23:12:29.478742 23394 portforward.go:213] Forwarding from [::1]:40065 -> 80
master$ curl -I localhost:40065
Tags, labels, and selectors
master$ cat << EOF > nginx-pod-label.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
EOF
master$ kubectl create -f nginx-pod-label.yml
master$ kubectl get pods -l app=nginx
master$ kubectl describe pods -l app=nginx2
- Add labels or overwrite existing ones:
master$ kubectl label pods nginx new-label=mynginx
master$ kubectl describe pods/nginx | awk '/^Labels/{print $2}'
new-label=mynginx
master$ kubectl label pods nginx new-label=foo --overwrite
master$ kubectl describe pods/nginx | awk '/^Labels/{print $2}'
new-label=foo
Deployments
master$ cat << EOF > nginx-deployment-dev.yml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment-dev
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-deployment-dev
    spec:
      containers:
      - name: nginx-deployment-dev
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF
master$ cat nginx-deployment-prod.yml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment-prod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-deployment-prod
    spec:
      containers:
      - name: nginx-deployment-prod
        image: nginx:1.7.9
        ports:
        - containerPort: 80
master$ kubectl create --validate -f nginx-deployment-dev.yml
master$ kubectl create --validate -f nginx-deployment-prod.yml
master$ kubectl get pods
NAME                                     READY     STATUS    RESTARTS   AGE
nginx-deployment-dev-104434401-jiiic     1/1       Running   0          5m
nginx-deployment-prod-3051195443-hj9b1   1/1       Running   0          12m
master$ kubectl describe deployments -l app=nginx-deployment-dev
Name:                   nginx-deployment-dev
Namespace:              default
CreationTimestamp:      Thu, 20 Oct 2016 23:48:46 +0000
Labels:                 app=nginx-deployment-dev
Selector:               app=nginx-deployment-dev
Replicas:               1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          nginx-deployment-dev-2568522567 (1/1 replicas created)
...
master$ kubectl get deployments
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment-prod   1         1         1            1           44s
master$ cat << EOF > nginx-deployment-dev-update.yml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment-dev
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-deployment-dev
    spec:
      containers:
      - name: nginx-deployment-dev
        image: nginx:1.8 # ***CHANGED***
        ports:
        - containerPort: 80
EOF
master$ kubectl apply -f nginx-deployment-dev-update.yml
master$ kubectl get pods -l app=nginx-deployment-dev
NAME                                   READY     STATUS              RESTARTS   AGE
nginx-deployment-dev-104434401-jiiic   0/1       ContainerCreating   0          27s
master$ kubectl get pods -l app=nginx-deployment-dev
NAME                                   READY     STATUS    RESTARTS   AGE
nginx-deployment-dev-104434401-jiiic   1/1       Running   0          6m
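After applying the image change, you can also follow the rollout itself. A brief sketch using kubectl's rollout subcommands for Deployments (the undo step is optional and simply reverts to the previous revision):
master$ kubectl rollout status deployment/nginx-deployment-dev   # wait for the update to finish
master$ kubectl rollout history deployment/nginx-deployment-dev  # list recorded revisions
master$ kubectl rollout undo deployment/nginx-deployment-dev     # roll back to the previous revision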
- Cleanup:
master$ kubectl delete deployment nginx-deployment-dev
master$ kubectl delete deployment nginx-deployment-prod
Multi-Pod (container) replication controller
- Start the other two nodes (the ones we previously stopped):
minion2$ systemctl start kubelet kube-proxy
minion3$ systemctl start kubelet kube-proxy
master$ kubectl get nodes
NAME              STATUS    AGE
k8s.minion1.dev   Ready     2h
k8s.minion2.dev   Ready     2h
k8s.minion3.dev   Ready     2h
master$ cat << EOF > nginx-multi-node.yml
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-www
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
master$ kubectl create -f nginx-multi-node.yml
master$ kubectl get pods
NAME              READY     STATUS              RESTARTS   AGE
nginx-www-2evxu   0/1       ContainerCreating   0          10s
nginx-www-416ct   0/1       ContainerCreating   0          10s
nginx-www-ax41w   0/1       ContainerCreating   0          10s
master$ kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
nginx-www-2evxu   1/1       Running   0          1m
nginx-www-416ct   1/1       Running   0          1m
nginx-www-ax41w   1/1       Running   0          1m
master$ kubectl describe pods | awk '/^Node/{print $2}'
k8s.minion2.dev/192.168.200.102
k8s.minion1.dev/192.168.200.101
k8s.minion3.dev/192.168.200.103
minion1$ docker ps # 1 nginx container running
minion2$ docker ps # 1 nginx container running
minion3$ docker ps # 1 nginx container running
minion3$ docker ps --format "{{.Image}}"
nginx
gcr.io/google_containers/pause:2.0
master$ kubectl describe replicationcontroller
Name:         nginx-www
Namespace:    default
Image(s):     nginx
Selector:     app=nginx
Labels:       app=nginx
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
...
- Attempt to delete one of the three pods:
master$ kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
nginx-www-2evxu   1/1       Running   0          11m
nginx-www-416ct   1/1       Running   0          11m
nginx-www-ax41w   1/1       Running   0          11m
master$ kubectl delete pod nginx-www-2evxu
master$ kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
nginx-www-3cck4   1/1       Running   0          12s
nginx-www-416ct   1/1       Running   0          11m
nginx-www-ax41w   1/1       Running   0          11m
A new pod (nginx-www-3cck4) automatically started up. This is because the expected state, as defined in our YAML file, is for there to be 3 pods running at all times. Thus, if one or more of the pods were to go down, a new pod (or pods) would automatically start up to bring the cluster back to the expected state.
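The same reconciliation loop also lets you resize the replication controller on the fly; for example (the replica counts below are chosen arbitrarily):
master$ kubectl scale replicationcontroller nginx-www --replicas=5
master$ kubectl get pods   # two additional nginx-www pods are created
master$ kubectl scale replicationcontroller nginx-www --replicas=3
master$ kubectl get pods   # back to three nginx-www pods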
- To force-delete all pods:
master$ kubectl delete replicationcontroller nginx-www
master$ kubectl get pods # nothing
Create and deploy service definitions
master$ kubectl create -f nginx-multi-node.yml
master$ cat << EOF > nginx-service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 8000
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx
EOF
master$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.254.0.1   <none>        443/TCP   3h
master$ kubectl create -f nginx-service.yml
master$ kubectl get services
NAME            CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes      10.254.0.1       <none>        443/TCP    3h
nginx-service   10.254.110.127   <none>        8000/TCP   10s
master$ kubectl run busybox --generator=run-pod/v1 --image=busybox --restart=Never --tty -i
busybox$ wget -qO- 10.254.110.127:8000 # works
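The service above is only reachable on its cluster IP. If you also want to reach it from outside the cluster, one option is a NodePort-type service; a hedged sketch (the service name is arbitrary, and the node port is assigned by Kubernetes from its default 30000-32767 range, so check it before curling):
master$ kubectl expose replicationcontroller nginx-www --name=nginx-nodeport --port=8000 --target-port=80 --type=NodePort
master$ kubectl describe service nginx-nodeport | grep NodePort    # note the assigned port
host$ curl http://192.168.200.101:<assigned-node-port>             # any minion IP should answer
master$ kubectl delete service nginx-nodeport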
- Cleanup
master$ kubectl delete pod busybox
master$ kubectl delete service nginx-service
master$ kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
nginx-www-jh2e9   1/1       Running   0          13m
nginx-www-jir2g   1/1       Running   0          13m
nginx-www-w91uw   1/1       Running   0          13m
master$ kubectl delete replicationcontroller nginx-www
master$ kubectl get pods # nothing
Creating temporary Pods at the CLI
- Make sure we have no Pods running:
master$ kubectl get pods
- Create temporary deployment pod:
master$ kubectl run mysample --image=foobar/apache
master$ kubectl get pods
NAME                        READY     STATUS              RESTARTS   AGE
mysample-1424711890-fhtxb   0/1       ContainerCreating   0          1s
master$ kubectl get deployment
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mysample   1         1         1            0           7s
- Create a temporary deployment pod (where we know it will fail):
master$ kubectl run myexample --image=christophchamp/ubuntu_sysadmin
master$ kubectl get pods -o wide
NAME                         READY     STATUS             RESTARTS   AGE       NODE
myexample-3534121234-mpr35   0/1       CrashLoopBackOff   12         39m       k8s.minion3.dev
mysample-2812764540-74c5h    1/1       Running            0          41m       k8s.minion2.dev
- Check on why the "myexample" pod is in status "CrashLoopBackOff":
master$ kubectl describe pods/myexample-3534121234-mpr35
master$ kubectl describe deployments/mysample
master$ kubectl describe pods/mysample-2812764540-74c5h | awk '/^Node/{print $2}'
k8s.minion2.dev/192.168.200.102
master$ kubectl delete deployment mysample
- Run multiple replicas of the same pod:
master$ kubectl run myreplicas --image=latest123/apache --replicas=2 --labels=app=myapache,version=1.0.0
master$ kubectl describe deployment myreplicas
Name:                   myreplicas
Namespace:              default
CreationTimestamp:      Fri, 21 Oct 2016 19:10:30 +0000
Labels:                 app=myapache,version=1.0.0
Selector:               app=myapache,version=1.0.0
Replicas:               2 updated | 2 total | 1 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          myreplicas-2209834598 (2/2 replicas created)
...
master$ kubectl get pods -o wide
NAME                          READY     STATUS    RESTARTS   AGE       NODE
myreplicas-2209834598-5iyer   1/1       Running   0          1m        k8s.minion1.dev
myreplicas-2209834598-cslst   1/1       Running   0          1m        k8s.minion2.dev
master$ kubectl describe pods -l version=1.0.0
- Cleanup:
master$ kubectl delete deployment myreplicas
Interacting with Pod containers
- Create example Apache pod definition file:
master$ cat << EOF > apache.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: apache
spec:
  containers:
  - name: apache
    image: latest123/apache
    ports:
    - containerPort: 80
EOF
master$ kubectl create -f apache.yml
master$ kubectl get pods -o wide
NAME      READY     STATUS    RESTARTS   AGE       NODE
apache    1/1       Running   0          12m       k8s.minion3.dev
- Test pod and make some basic configuration changes:
master$ kubectl exec apache date
master$ kubectl exec apache -i -t -- cat /var/www/html/index.html # default apache HTML
master$ kubectl exec apache -i -t -- /bin/bash
container$ export TERM=xterm
container$ echo "xtof test" > /var/www/html/index.html
minion3$ curl 172.17.0.2
xtof test
container$ exit
master$ kubectl get pods -o wide
NAME      READY     STATUS    RESTARTS   AGE       NODE
apache    1/1       Running   0          12m       k8s.minion3.dev
Pod/container is still running even after we exited (as expected).
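To confirm the change without opening an interactive shell, a one-off exec works too (assuming the same apache pod is still running):
master$ kubectl exec apache -- cat /var/www/html/index.html
xtof test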
- Cleanup:
master$ kubectl delete pod apache
Logs
- Start our example Apache pod to use for checking Kubernetes logging features:
master$ kubectl create -f apache.yml
master$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
apache    1/1       Running   0          9s
master$ kubectl logs apache
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
master$ kubectl logs --tail=10 apache
master$ kubectl logs --since=24h apache # or 10s, 2m, etc.
master$ kubectl logs -f apache # follow the logs
master$ kubectl logs -f -c apache apache # where -c specifies the container name (useful for multi-container pods)
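If a container has crashed and been restarted, the logs of the previous instance are often the most useful; a short hedged example (only works when a prior container instance exists):
master$ kubectl logs --previous apache   # logs from the prior (terminated) container instance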
- Cleanup:
master$ kubectl delete pod apache
Autoscaling and scaling Pods
master$ kubectl run myautoscale --image=latest123/apache --port=80 --labels=app=myautoscale
master$ kubectl get pods -o wide
NAME                           READY     STATUS    RESTARTS   AGE       NODE
myautoscale-3243017378-kq4z7   1/1       Running   0          47s       k8s.minion3.dev
- Create an autoscale definition:
master$ kubectl autoscale deployment myautoscale --min=2 --max=6 --cpu-percent=80
master$ kubectl get deployments
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myautoscale   2         2         2            2           4m
master$ kubectl get pods -o wide
NAME                           READY     STATUS    RESTARTS   AGE       NODE
myautoscale-3243017378-kq4z7   1/1       Running   0          3m        k8s.minion3.dev
myautoscale-3243017378-r2f3d   1/1       Running   0          4s        k8s.minion2.dev
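The autoscale command above creates a HorizontalPodAutoscaler object, which you can inspect directly; a brief sketch (note that CPU-based scaling only acts when a metrics source such as Heapster is running in the cluster):
master$ kubectl get hpa
master$ kubectl describe hpa myautoscale   # shows the min/max replica bounds and the 80% CPU target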
- Scale up an already autoscaled deployment:
master$ kubectl scale --current-replicas=2 --replicas=4 deployment/myautoscale
master$ kubectl get deployments
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myautoscale   4         4         4            4           8m
master$ kubectl get pods -o wide
NAME                           READY     STATUS    RESTARTS   AGE       NODE
myautoscale-3243017378-2rxhp   1/1       Running   0          8s        k8s.minion1.dev
myautoscale-3243017378-kq4z7   1/1       Running   0          7m        k8s.minion3.dev
myautoscale-3243017378-ozxs8   1/1       Running   0          8s        k8s.minion3.dev
myautoscale-3243017378-r2f3d   1/1       Running   0          4m        k8s.minion2.dev
- Scale down:
master$ kubectl scale --current-replicas=4 --replicas=2 deployment/myautoscale
Note: You cannot scale down past the original minimum number of pods/containers specified in the original autoscale definition (i.e., min=2 in our example).
- Cleanup:
master$ kubectl delete deployment myautoscale
Failure and recovery
master$ kubectl run myrecovery --image=latest123/apache --port=80 --replicas=2 --labels=app=myrecovery
master$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myrecovery   2         2         2            2           6s
master$ kubectl get pods -o wide
NAME                         READY     STATUS    RESTARTS   AGE       NODE
myrecovery-563119102-5xu8f   1/1       Running   0          12s       k8s.minion1.dev
myrecovery-563119102-zw6wp   1/1       Running   0          12s       k8s.minion2.dev
- Now stop Kubernetes- and Docker-related services on one of the minions/nodes (so we have a total of 2 nodes online):
minion1$ systemctl stop docker kubelet kube-proxy
master$ kubectl get pods -o wide
NAME                         READY     STATUS    RESTARTS   AGE       NODE
myrecovery-563119102-qyi04   1/1       Running   0          7m        k8s.minion3.dev
myrecovery-563119102-zw6wp   1/1       Running   0          14m       k8s.minion2.dev
The pod that was running on minion1 has been rescheduled onto minion3.
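You can confirm what the cluster thinks happened by checking node status while minion1 is down (it takes a short while, roughly the node-monitor grace period, before the node is marked NotReady; the ages below are illustrative):
master$ kubectl get nodes
NAME              STATUS     AGE
k8s.minion1.dev   NotReady   2h
k8s.minion2.dev   Ready      2h
k8s.minion3.dev   Ready      2h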
- Now stop Kubernetes- and Docker-related services on one of the remaining online minions/nodes (so we have a total of 1 node online):
minion2$ systemctl stop docker kubelet kube-proxy
master$ kubectl get pods -o wide
NAME                         READY     STATUS    RESTARTS   AGE       NODE
myrecovery-563119102-b5tim   1/1       Running   0          2m        k8s.minion3.dev
myrecovery-563119102-qyi04   1/1       Running   0          17m       k8s.minion3.dev
Both Pods are now running on minion3, the only available node.
- Start up Kubernetes- and Docker-related services again on minion1 and delete one of the Pods:
minion1$ systemctl start docker kubelet kube-proxy
master$ kubectl delete pod myrecovery-563119102-b5tim
master$ kubectl get pods -o wide
NAME                         READY     STATUS    RESTARTS   AGE       NODE
myrecovery-563119102-8unzg   1/1       Running   0          1m        k8s.minion1.dev
myrecovery-563119102-qyi04   1/1       Running   0          20m       k8s.minion3.dev
Pods are now running on separate nodes.
- Cleanup:
master$ kubectl delete deployments/myrecovery
External links
- Official website
- Kubernetes code — via GitHub