Kubernetes

Kubernetes (k8s) is an open-source container cluster manager. Kubernetes' primary goal is to provide a platform for automating the deployment, scaling, and operation of application containers across a cluster of hosts. Kubernetes was released by Google in July 2015.

Design overview

Kubernetes is built through the definition of a set of components (building blocks or "primitives") which, when used collectively, provide a method for the deployment, maintenance, and scalability of container-based application clusters.

These "primitives" are designed to be loosely coupled (i.e., where little to no knowledge of the other component definitions is needed to use) as well as easily extensible through an API. Both the internal components of Kubernetes as well as the extensions and containers make use of this API.

Components

The building blocks of Kubernetes are the following:

Nodes (minions) 
You can think of these as "container clients". These are the individual hosts (physical or virtual) that Docker is installed on and that host the various containers within your managed cluster.
Each node will run etcd (a key-value store used by Kubernetes for exchanging messages and reporting on cluster status) as well as the Kubernetes Proxy.
Pods 
A pod consists of one or more containers. Those containers are guaranteed (by the cluster controller) to be located on the same host machine in order to facilitate sharing of resources.
Pods are assigned unique IPs within each cluster. These allow an application to use ports without having to worry about conflicting port utilization.
Pods can contain definitions of disk volumes or shares, and then provide access from those to all the members (containers) within the pod.
Finally, pod management is done through the API or delegated to a controller.
Labels 
Clients can attach "key-value pairs" to any object in the system (like Pods or Nodes). These become the labels that identify the objects during configuration and management.
Selectors 
Label Selectors represent queries that are made against those labels. They resolve to the corresponding matching objects.
These two items are the primary way grouping is done in Kubernetes, and they determine which components a given operation applies to (see the kubectl example after this list).
Controllers 
These are used in the management of your cluster. Controllers are the mechanism by which your desired configuration state is enforced.
Controllers manage a set of pods and, depending on the desired configuration state, may engage other controllers to handle the replication and scaling (Replication Controller) of a given number of containers and pods across the cluster. A controller is also responsible for replacing any container in a pod that fails (based on the desired state of the cluster).
Other controllers that can be engaged include a DaemonSet Controller (which enforces a 1-to-1 ratio of pods to minions) and a Job Controller (which runs pods to "completion", such as in batch jobs).
The set of pods a controller manages is determined by the label selectors that are part of its definition.
Services 
A service is a set of pods that "work together", as in a multi-tiered application configuration. Each set of pods that defines and implements a service (such as MySQL or Apache) is identified by a label selector.
Kubernetes can then provide service discovery and handle routing with the static IP for each pod, as well as load-balance (round-robin) connections to that service among the pods that match the label selector.
By default, a service is only exposed inside the cluster, but it can also be exposed outside the cluster as needed.
Control Plane
API
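
As a quick illustration of labels and selectors (a minimal sketch, assuming a running cluster and a pod named "mypod" — both hypothetical here), you can attach a label to an object and then query by it with kubectl:
$ kubectl label pod mypod env=dev   # attach the key-value label "env=dev" to the pod
$ kubectl get pods -l env=dev       # list only the pods whose labels match the selector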

Setup a Kubernetes cluster

In this section, I will show you how to set up a Kubernetes cluster with etcd and Docker. The cluster will consist of 1 master host and 3 minions (aka nodes).

Setup VMs

For this demo, I will be creating 4 VMs via Vagrant (with VirtualBox).

  • Create Vagrant demo environment:
$ mkdir $HOME/dev/kubernetes && cd $_
  • Create a Vagrantfile with the following contents:
# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'yaml'
VAGRANTFILE_API_VERSION = "2"

$common_script = <<COMMON_SCRIPT
# Set verbose
set -v
# Set exit on error
set -e
echo -e "$(date) [INFO] Starting modified Vagrant..."
sudo yum update -y
# Timestamp provision
date > /etc/vagrant_provisioned_at
COMMON_SCRIPT

unless defined? CONFIG
  configuration_file = File.join(File.dirname(__FILE__), 'vagrant_config.yml')
  CONFIG = YAML.load(File.open(configuration_file, File::RDONLY).read)
end

CONFIG['box'] = {} unless CONFIG.key?('box')

def modifyvm_network(node)
  node.vm.provider "virtualbox" do |vbox|
    vbox.customize ["modifyvm", :id, "--nicpromisc1", "allow-all"]
    #vbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    vbox.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
  end
end

def modifyvm_resources(node, memory, cpus)
  node.vm.provider "virtualbox" do |vbox|
    vbox.customize ["modifyvm", :id, "--memory", memory]
    vbox.customize ["modifyvm", :id, "--cpus", cpus]
  end
end

## START: Actual Vagrant process
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  config.vm.box = CONFIG['box']['name']

  # Uncomment the following line if you wish to be able to pass files from
  # your local filesystem directly into the vagrant VM:
  #config.vm.synced_folder "data", "/vagrant"

## VM: k8s master #############################################################
  config.vm.define "master" do |node|
    node.vm.hostname = "k8s.master.dev"
    node.vm.provision "shell", inline: $common_script
    #node.vm.network "forwarded_port", guest: 80, host: 8080
    node.vm.network "private_network", ip: CONFIG['host_groups']['master']

    # Uncomment the following if you wish to define CPU/memory:
    #node.vm.provider "virtualbox" do |vbox|
    #  vbox.customize ["modifyvm", :id, "--memory", "4096"]
    #  vbox.customize ["modifyvm", :id, "--cpus", "2"]
    #end
    #modifyvm_resources(node, "4096", "2")
  end
## VM: k8s minion1 ############################################################
  config.vm.define "minion1" do |node|
    node.vm.hostname = "k8s.minion1.dev"
    node.vm.provision "shell", inline: $common_script
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion1']
  end
## VM: k8s minion2 ############################################################
  config.vm.define "minion2" do |node|
    node.vm.hostname = "k8s.minion2.dev"
    node.vm.provision "shell", inline: $common_script
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion2']
  end
## VM: k8s minion3 ############################################################
  config.vm.define "minion3" do |node|
    node.vm.hostname = "k8s.minion3.dev"
    node.vm.provision "shell", inline: $common_script
    node.vm.network "private_network", ip: CONFIG['host_groups']['minion3']
  end
###############################################################################

end

The above Vagrantfile uses the following configuration file:

$ cat vagrant_config.yml
---
box:
  name: centos/7
  storage_controller: 'SATA Controller'
debug: false
development: false
network:
  dns1: 8.8.8.8
  dns2: 8.8.4.4
  internal:
    network: 192.168.200.0/24
  external:
    start: 192.168.100.100
    end: 192.168.100.200
    network: 192.168.100.0/24
    bridge: wlan0
    netmask: 255.255.255.0
    broadcast: 192.168.100.255
host_groups:
  master: 192.168.200.100
  minion1: 192.168.200.101
  minion2: 192.168.200.102
  minion3: 192.168.200.103
  • In the Vagrant Kubernetes directory (i.e., $HOME/dev/kubernetes), run the following command:
$ vagrant up
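Bringing up all four VMs will take a while (each one runs a full "yum update" during provisioning). A quick sanity check that everything came up (standard Vagrant commands):
$ vagrant status                      # all four VMs should report "running"
$ vagrant ssh master -c 'hostname'    # should print "k8s.master.dev"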

Setup hosts

Note: Run the following commands/steps on all hosts (master and minions). Most of these commands require root privileges, so become root first (e.g., with "sudo -i") after logging in.

  • Log into the k8s master host (repeat these steps on each minion as well):
$ vagrant ssh master
  • Add the Kubernetes cluster hosts to /etc/hosts:
$ cat << EOF >> /etc/hosts
192.168.200.100    k8s.master.dev
192.168.200.101    k8s.minion1.dev
192.168.200.102    k8s.minion2.dev
192.168.200.103    k8s.minion3.dev
EOF
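A quick check that the new names resolve (getent consults /etc/hosts):
$ getent hosts k8s.master.dev    # should print "192.168.200.100   k8s.master.dev"
$ ping -c 1 k8s.minion1.dev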
  • Install, enable, and start NTP:
$ yum install -y ntp
$ systemctl enable ntpd && systemctl start ntpd
$ timedatectl
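timedatectl should report "NTP enabled: yes". You can also verify that ntpd has reachable peers to synchronize against (ntpq ships with the ntp package):
$ ntpq -p    # lists the remote NTP peers and their reachability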
  • Disable any firewall rules (for now; we will add the rules back later):
$ systemctl stop firewalld && systemctl disable firewalld
$ systemctl stop iptables
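Confirm that nothing is filtering traffic between the hosts:
$ systemctl is-active firewalld    # should print "inactive"
$ iptables -L -n                   # all chains should be empty with policy ACCEPT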
  • Disable SELinux (for now; we will turn it on again later):
$ setenforce 0
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/sysconfig/selinux
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
$ sestatus
  • Add the Docker repo and update yum:
$ cat << EOF > /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
EOF
$ yum update
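To confirm that yum can see the new repository:
$ yum repolist | grep virt7    # the virt7-docker-common-release repo should be listed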
  • Install Docker, Kubernetes, and etcd:
$ yum install -y --enablerepo=virt7-docker-common-release kubernetes docker etcd
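Confirm that all three packages installed cleanly:
$ rpm -q kubernetes docker etcd    # each should report an installed version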

Install and configure master controller

Note: Run the following commands on only the master host.

  • Edit /etc/kubernetes/config and add (or make changes to) the following lines:
KUBE_MASTER="--master=http://k8s.master.dev:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://k8s.master.dev:2379"
  • Edit /etc/etcd/etcd.conf and add (or make changes to) the following lines:
[member]
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
  • Edit /etc/kubernetes/apiserver and add (or make changes to) the following lines:
# The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
  • Enable and start the following etcd and Kubernetes services:
$ for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do
      systemctl restart $SERVICE
      systemctl enable $SERVICE
      systemctl status $SERVICE 
  done
  • Check on the status of the above services (the following command should report 4 running services):
$ systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler | grep "(running)" | wc -l # => 4
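With the services up, you can also query etcd and the API server directly (etcdctl ships with the etcd package; /version is a standard API-server endpoint):
$ etcdctl cluster-health               # should report "cluster is healthy"
$ curl http://localhost:8080/version   # the API server returns its version as JSON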

Install and configure the minions

Note: Run the following commands/steps on all minion hosts.

  • Log into the k8s minion hosts:
$ vagrant ssh minion1  # do the same for minion2 and minion3
  • Edit /etc/kubernetes/config and add (or make changes to) the following lines:
KUBE_MASTER="--master=http://k8s.master.dev:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://k8s.master.dev:2379"
  • Edit /etc/kubernetes/kubelet and add (or make changes to) the following lines:
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s.minion1.dev"  # ***CHANGE TO CORRECT MINION HOSTNAME***

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s.master.dev:8080"

# pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
  • Enable and start the following services:
$ for SERVICE in kube-proxy kubelet docker; do
      systemctl restart $SERVICE
      systemctl enable $SERVICE
      systemctl status $SERVICE
  done
  • Test that Docker is running and can start containers:
$ docker info
$ docker pull hello-world
$ docker run hello-world

Kubectl: Exploring our environment

Note: Run all of the following commands on the master host.

  • Get a list of nodes with kubectl:
$ kubectl get nodes
NAME              STATUS    AGE
k8s.minion1.dev   Ready     20m
k8s.minion2.dev   Ready     12m
k8s.minion3.dev   Ready     12m
  • Query node details with kubectl (the following uses jsonpath output):
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
$ kubectl get nodes -o jsonpath='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' | tr ';' "\n"
k8s.minion1.dev:OutOfDisk=False
Ready=True
k8s.minion2.dev:OutOfDisk=False
Ready=True
k8s.minion3.dev:OutOfDisk=False
Ready=True
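For a more detailed, human-readable view of a single node (capacity, conditions, addresses, system info), use "kubectl describe":
$ kubectl describe node k8s.minion1.dev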
  • Get the man page for kubectl:
$ man kubectl-get

External links