Rancher
Rancher is a container management platform. Rancher natively supports and manages all of your Cattle, Kubernetes, Mesos, and Swarm clusters.

Container management

  • App Catalog
  • Orchestration: Compose, Kubernetes, Marathon, etc.
  • Scheduling: Swarm, Kubernetes, Mesos, etc.
  • Monitoring: cAdvisor, Sysdig, Datadog, etc.
  • Access Control: LDAP, AD, GitHub, etc.
  • Registry: DockerHub, Quay.io, etc.
  • Engine: Docker, Rkt, etc.
  • Security: Notary, Vault, etc.
  • Network: VXLAN, IPSEC, HAProxy, etc.
  • Storage: Ceph, Gluster, Swift, etc.
  • Distributed DB: Etcd, Consul, MongoDB, etc.

Setup Rancher HA with AWS

NOTE: This section is currently incomplete. It will be updated soon.

For my Rancher HA with AWS setup, I will use the following:

Virtual Private Cloud (VPC)

  • Virtual Private Cloud (VPC): rancher-vpc (w/3 subnets)
  • VPC CIDR: 172.22.0.0/16
  • Rancher management subnet: 172.22.1.0/24 (us-west-2a)
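
If you are building this VPC from scratch with the AWS CLI, a rough sketch looks like the following (the CIDR blocks and names come from the list above; the VPC ID shown here as vpc-xxxxxxxx is a placeholder you need to substitute with the value returned by create-vpc):

$ AWS_PROFILE=dev
$ aws --profile ${AWS_PROFILE} ec2 create-vpc --cidr-block 172.22.0.0/16
$ # note the VpcId in the output, tag it, and create the management subnet:
$ aws --profile ${AWS_PROFILE} ec2 create-tags --resources vpc-xxxxxxxx --tags Key=Name,Value=rancher-vpc
$ aws --profile ${AWS_PROFILE} ec2 create-subnet --vpc-id vpc-xxxxxxxx \
      --cidr-block 172.22.1.0/24 --availability-zone us-west-2a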

Rancher management server nodes (EC2 instances)

  • Rancher management server nodes (EC2 instances running CentOS 7):
    • mgmt-host-1 (172.22.1.210)
    • mgmt-host-2 (172.22.1.211)
    • mgmt-host-3 (172.22.1.212)

Each of the Rancher management server nodes (referred to as "server nodes" from now on) will have Docker 1.10.3 installed and running.
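
Launching the three management nodes is left to whatever provisioning workflow you normally use, but for reference a minimal AWS CLI sketch looks something like this (the AMI, instance type, key pair, security group, and subnet ID are all placeholders; repeat once per node with the private IPs listed above):

$ AWS_PROFILE=dev
$ aws --profile ${AWS_PROFILE} ec2 run-instances \
      --image-id <centos7_ami_id> \
      --instance-type <instance_type> \
      --key-name <key_pair_name> \
      --security-group-ids <security_group_id> \
      --subnet-id <subnet_id> \
      --private-ip-address 172.22.1.210   # then .211 and .212 for the other two nodes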

Each of the server nodes will have the following security group inbound rules:

Security group inbound rules
Type Protocol Port Source Purpose
SSH TCP 22 0.0.0.0/0 ssh
HTTP TCP 80 0.0.0.0/0 http
HTTPS TCP 443 0.0.0.0/0 https
TCP TCP 81 0.0.0.0/0 proxy_to_http
TCP TCP 444 0.0.0.0/0 proxy_to_https
TCP TCP 6379 172.22.1.0/24 redis
TCP TCP 2376 172.22.1.0/24 swarm
TCP TCP 2181 0.0.0.0/0 zookeeper_client
TCP TCP 2888 172.22.1.0/24 zookeeper_quorum
TCP TCP 3888 172.22.1.0/24 zookeeper_leader
TCP TCP 3306 172.22.1.0/24 mysql (RDS)
TCP TCP 8080 0.0.0.0/0 rancher server (ui/api)
TCP TCP 18080 0.0.0.0/0 <optional>
UDP UDP 500 172.22.1.0/24 access between nodes
UDP UDP 4500 172.22.1.0/24 access between nodes
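
These rules map directly onto AWS CLI calls. A hedged sketch for creating such a security group and adding a couple of the rules from the table (the remaining rules follow the same pattern; the group name, VPC ID, and returned group ID below are placeholders):

$ AWS_PROFILE=dev
$ aws --profile ${AWS_PROFILE} ec2 create-security-group \
      --group-name rancher-mgmt-sg --description "Rancher management nodes" --vpc-id <vpc_id>
$ SG_ID=<security_group_id>   # <- the GroupId returned by the command above
$ # e.g., ssh from anywhere, and mysql (RDS) from the management subnet only:
$ aws --profile ${AWS_PROFILE} ec2 authorize-security-group-ingress \
      --group-id ${SG_ID} --protocol tcp --port 22 --cidr 0.0.0.0/0
$ aws --profile ${AWS_PROFILE} ec2 authorize-security-group-ingress \
      --group-id ${SG_ID} --protocol tcp --port 3306 --cidr 172.22.1.0/24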


External database (RDS)

The external database (DB) will run on an AWS Relational Database Service (RDS) instance, which we shall call "rancher-ext-db". It will live in the "rancher-vpc" VPC, listen on port 3306 at 172.22.1.26, and run MariaDB 10.0.24.

External load balancer (ELB)

The external load balancer (LB) will be an AWS Elastic Load Balancer (ELB), which we shall call "rancher-ext-lb". It will live in the "rancher-vpc" VPC and have the following listeners configured:

ELB listeners
Load Balancer Protocol | Load Balancer Port | Instance Protocol | Instance Port | Cipher | SSL Certificate
TCP 80 TCP 81 N/A N/A
TCP 443 TCP 444 N/A N/A
HTTP 8080 HTTP 8080 N/A N/A
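
For reference, an ELB with exactly these listeners can be created along the following lines (the subnet and security group IDs are placeholders; the listener mappings come straight from the table above):

$ AWS_PROFILE=dev
$ aws --profile ${AWS_PROFILE} elb create-load-balancer \
      --load-balancer-name rancher-ext-lb \
      --listeners "Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=81" \
                  "Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=444" \
                  "Protocol=HTTP,LoadBalancerPort=8080,InstanceProtocol=HTTP,InstancePort=8080" \
      --subnets <subnet_id> \
      --security-groups <security_group_id>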


  • Create ELB policies:
$ AWS_PROFILE=dev
$ LB_NAME=rancher-ext-lb
$ POLICY_NAME=rancher-ext-lb-ProxyProtocol-policy
$ aws --profile ${AWS_PROFILE} elb create-load-balancer-policy \
      --load-balancer-name ${LB_NAME} \
      --policy-name ${POLICY_NAME} \
      --policy-type-name ProxyProtocolPolicyType \
      --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
$ aws --profile ${AWS_PROFILE} elb set-load-balancer-policies-for-backend-server \
      --load-balancer-name ${LB_NAME} \
      --instance-port 81 \
      --policy-names ${POLICY_NAME}
$ aws --profile ${AWS_PROFILE} elb set-load-balancer-policies-for-backend-server \
      --load-balancer-name ${LB_NAME} \
      --instance-port 444 \
      --policy-names ${POLICY_NAME}
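
To confirm that the ProxyProtocol policy is actually attached to backend ports 81 and 444, something like this should do:

$ aws --profile ${AWS_PROFILE} elb describe-load-balancers \
      --load-balancer-names ${LB_NAME} \
      --query 'LoadBalancerDescriptions[].BackendServerDescriptions'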

Rancher HA management stack

A fully functioning Rancher HA setup will have the following Docker containers running:

Rancher management stack
Service | Containers | IPs | Traffic to | Ports (a) | Traffic flow
6 x cattle
rancher-ha-parent (x3) 172.22.1.210, 172.22.1.211, 172.22.1.212 zookeeper, redis 3306/tcp
0.0.0.0:18080->8080/tcp
0.0.0.0:2181->12181/tcp
0.0.0.0:2888->12888/tcp
0.0.0.0:3888->13888/tcp
0.0.0.0:6379->16379/tcp
rancher-ha-cattle (x3) 172.22.1.210, 172.22.1.211, 172.22.1.212 zookeeper, redis
2 x go-machine-service
management_go-machine-service_{1,2} 172.22.1.210, 172.22.1.211 cattle 3306, 8080
3 x load-balancer
management_load-balancer_{1,2,3} 172.22.1.210, 172.22.1.211, 172.22.1.212 websocket-proxy, cattle 80, 443, 81, 444 0.0.0.0:80-81->80-81/tcp
0.0.0.0:443-444->443-444/tcp
3 x load-balancer-swarm
management_load-balancer-swarm_{1,2,3} 172.22.1.210, 172.22.1.211, 172.22.1.212 websocket-proxy-ssl 2376 0.0.0.0:2376->2376/tcp
2 x rancher-compose-executor
management_rancher-compose-executor_{1,2} 172.22.1.211, 172.22.1.212 cattle
3 x redis
rancher-ha-redis 172.22.1.210, 172.22.1.211, 172.22.1.212 tunnel
36 x tunnel
rancher-ha-tunnel-redis-1 (x3) 172.22.1.210, 172.22.1.211, 172.22.1.212 redis 6379 0.0.0.0:16379->127.0.0.1:6379/tcp
rancher-ha-tunnel-redis-2 (x3) 172.22.1.210, 172.22.1.211, 172.22.1.212 redis 6379 127.0.0.1:6380->172.22.1.211:6379/tcp
rancher-ha-tunnel-redis-3 (x3) 172.22.1.210, 172.22.1.211, 172.22.1.212 redis 6379 127.0.0.1:6381->172.22.1.212:6379/tcp
rancher-ha-tunnel-zk-client-1 (x3) 172.22.1.210, 172.22.1.211, 172.22.1.212 zookeeper 2181 0.0.0.0:12181->127.0.0.1:2181/tcp
rancher-ha-tunnel-zk-client-2 (x3) 172.22.1.210, 172.22.1.211, 172.22.1.212 zookeeper 2181 127.0.0.1:2182->172.22.1.211:2181/tcp
rancher-ha-tunnel-zk-client-3 (x3) 172.22.1.210, 172.22.1.211, 172.22.1.212 zookeeper 2181 127.0.0.1:2183->172.22.1.212:2181/tcp
rancher-ha-tunnel-zk-leader-1 (x3) 172.22.1.210, 172.22.1.211, 172.22.1.212 zookeeper 3888 0.0.0.0:13888->127.0.0.1:3888/tcp
rancher-ha-tunnel-zk-leader-2 (x3) 172.22.1.210, 172.22.1.211, 172.22.1.212 zookeeper 3888 127.0.0.1:3889->172.22.1.211:3888/tcp
rancher-ha-tunnel-zk-leader-3 (x3) 172.22.1.210, 172.22.1.211, 172.22.1.212 zookeeper 3888 127.0.0.1:3890->172.22.1.212:3888/tcp
rancher-ha-tunnel-zk-quorum-1 (x3) 172.22.1.210, 172.22.1.211, 172.22.1.212 zookeeper 2888 0.0.0.0:12888->127.0.0.1:2888/tcp
rancher-ha-tunnel-zk-quorum-2 (x3) 172.22.1.210, 172.22.1.211, 172.22.1.212 zookeeper 2888 127.0.0.1:2889->172.22.1.211:2888/tcp
rancher-ha-tunnel-zk-quorum-3 (x3) 172.22.1.210, 172.22.1.211, 172.22.1.212 zookeeper 2888 127.0.0.1:2890->172.22.1.212:2888/tcp
2 x websocket-proxy
management_websocket-proxy_{1,2} 172.22.1.210, 172.22.1.212 cattle
2 x websocket-proxy-ssl
management_websocket-proxy-ssl_{1,2} 172.22.1.210, 172.22.1.211 cattle
3 x zookeeper
rancher-ha-zk 172.22.1.210, 172.22.1.211, 172.22.1.212 tunnel
3 x rancher-ha (cluster-manager)
rancher-ha (x3) 172.22.1.210, 172.22.1.211, 172.22.1.212 host 80, 18080, 3306 172.22.1.x:x->172.22.1.26:3306
3 x NetworkAgent
NetworkAgent 172.22.1.210, 172.22.1.211, 172.22.1.212 all 500/udp, 4500/udp 0.0.0.0:500->500/udp
0.0.0.0:4500->4500/udp

(a) TCP, unless otherwise specified.
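
To see how these containers are actually laid out on any one of the server nodes, a quick docker ps with a custom format is handy, e.g.:

$ docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Ports}}'
$ docker ps -q | wc -l   # rough count of running containers on this node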



Setup Rancher HA on bare-metal

This section will show you how to set up Rancher in High Availability (HA) mode on bare-metal servers. We will also set up a Kubernetes cluster managed by Rancher.

Since a given version of Rancher requires specific versions of Docker and Kubernetes, we will use the following:

  • Hardware: 4 x bare-metal servers (rack-mounted):
    • rancher01.dev # Rancher HA Master #1 + Worker Node #1
    • rancher02.dev # Rancher HA Master #2 + Worker Node #2
    • rancher03.dev # Rancher HA Master #3 + Worker Node #3
    • rancher04.dev # Worker Node #4
  • OS and software:
    • CentOS 7.4
    • Rancher 1.6
    • Docker 17.03.x-ce
    • Kubernetes 1.8

Install and configure Docker

Note: Perform all of the actions in this section on all 4 bare-metal servers.

  • Install Docker 17.03 (CE):
$ sudo yum update
$ curl https://releases.rancher.com/install-docker/17.03.sh | sudo sh
$ sudo systemctl enable docker
$ sudo usermod -aG docker $(whoami)  # logout and then log back in
  • Check that Docker has been successfully installed:
$ docker --version
Docker version 17.03.2-ce, build f5ec1e2
$ docker run hello-world
...
This message shows that your installation appears to be working correctly.
... 
  • Prevent Docker from being upgraded (i.e., lock it to always use Docker 17.03):
$ sudo yum -y install yum-versionlock
$ sudo yum versionlock add docker-ce
$ sudo yum versionlock add docker-ce-selinux
$ yum versionlock list
Loaded plugins: fastestmirror, versionlock
0:docker-ce-17.03.2.ce-1.el7.centos.*
0:docker-ce-selinux-17.03.2.ce-1.el7.centos.*

Note: If you ever need to remove this version lock, you can run `sudo yum versionlock delete docker-ce-*`.

Install and configure Network Time Protocol (NTP)

See Network Time Protocol for details.

Note: Perform all of the actions in this section on all 4 bare-metal servers.

  • Install NTP:
$ sudo yum install ntp
$ sudo systemctl start ntpd
$ sudo systemctl enable ntpd
  • Configure NTP by editing /etc/ntp.conf and adding/updating the following lines (note: use the NTP server pool closest to your bare-metal servers' location):
$ sudo vi /etc/ntp.conf
restrict default nomodify notrap nopeer noquery kod limited
#...
server 0.north-america.pool.ntp.org iburst
server 1.north-america.pool.ntp.org iburst
server 2.north-america.pool.ntp.org iburst
server 3.north-america.pool.ntp.org iburst
  • Restart NTP and check status:
$ sudo systemctl restart ntpd
$ ntpq -p   # list NTP pools stats
$ ntpdc -l  # list NTP clients

Install and configure external database

Note: Perform all of the actions in this section on rancher04.dev (i.e., Worker Node #4) only. I will use MariaDB 5.5.x.

  • Install MariaDB Server:
$ sudo yum install -y mariadb-server
$ sudo systemctl start mariadb
$ sudo systemctl enable mariadb
  • Configure MariaDB Server:
$ sudo mysql_secure_installation  # Follow the recommendations
  • Edit /etc/my.cnf and add the following under the [mysqld] section:
max_allowed_packet=16M
  • Restart MariaDB Server:
$ sudo systemctl restart mariadb
  • Log into MariaDB Server and create database and user for Rancher:
$ mysql -u root -p
mysql> CREATE DATABASE IF NOT EXISTS <DB_NAME> COLLATE = 'utf8_general_ci' CHARACTER SET = 'utf8';
mysql> GRANT ALL ON <DB_NAME>.* TO '<DB_USER>'@'%' IDENTIFIED BY '<DB_PASSWD>';
mysql> GRANT ALL ON <DB_NAME>.* TO '<DB_USER>'@'localhost' IDENTIFIED BY '<DB_PASSWD>';

Replace <DB_NAME>, <DB_USER>, and <DB_PASSWD> with values of your choice.
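
It is worth a quick sanity check that the database and user actually work before moving on; for example (run from any host that can reach rancher04.dev on port 3306 and has the mariadb client installed; substitute the placeholders as before):

$ mysql -h <rancher04_private_ip> -P 3306 -u <DB_USER> -p <DB_NAME> -e 'SELECT 1;'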

Install and configure Rancher HA Master nodes

Note: Perform all of the actions in this section on all 3 x Rancher HA Master servers (do not perform any of these actions on rancher04.dev).

  • Make sure all of your Rancher HA Master servers have the following ports opened between themselves:
9345
8080
  • Make sure all of your Rancher HA Master servers can reach port 3306 on the server where MariaDB Server is running (i.e., rancher04.dev).
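  • On CentOS 7 with firewalld enabled, the ports above can be opened with something like the following (a sketch only; adjust for whatever firewall you actually run and restrict source addresses as appropriate; the nc check assumes the nmap-ncat package is installed):
$ sudo firewall-cmd --permanent --add-port=8080/tcp --add-port=9345/tcp
$ sudo firewall-cmd --reload
$ # quick check that MariaDB on rancher04.dev is reachable from this Master:
$ nc -zv <rancher04_private_ip> 3306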
  • Start Rancher on all three Rancher HA Master servers:
$ HOST_IP=10.x.x.x      # <- replace with the IP address of the host you are running these commands on
$ DB_HOST=10.x.x.x      # <- replace with the private IP address of the host where MariaDB is running
$ DB_PORT=3306
$ DB_NAME=<DB_NAME>     # <- replace with actual value
$ DB_USER=<DB_USER>     # <- replace with actual value
$ DB_PASSWD=<DB_PASSWD> # <- replace with actual value

$ docker run -d --restart=unless-stopped -p 8080:8080 -p 9345:9345 rancher/server \
 --db-host ${DB_HOST} --db-port ${DB_PORT} --db-user ${DB_USER} --db-pass ${DB_PASSWD} --db-name ${DB_NAME} \
 --advertise-address ${HOST_IP}
  • Check the logs for the container started by the above command:
$ docker logs -f <container_id>

Once you see the following message:

msg="Listening on :8090"

Rancher should now be set up in HA mode. You should be able to bring up the Rancher UI by pointing your browser at port 8080 on the public IP of any one of your Rancher HA Master nodes (e.g., http://1.2.3.4:8080).
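
From the command line, each Master can also be checked with curl against the /ping health-check endpoint that Rancher uses for load-balancer health checks (substitute each node's IP in turn); every node should answer with pong:

$ curl -s http://<master_ip>:8080/ping ; echo
pong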

Install and configure Rancher Worker nodes

Note: Perform all of the actions in this section on all 4 x bare-metal servers.

Since the Master nodes will also act as Worker nodes and the 4th node (rancher04.dev) is a Worker node only, the following needs to be done on all 4 servers.
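
The actual host-registration command has to be copied from the Rancher UI (Infrastructure > Hosts > Add Host), because it embeds a per-environment registration token. It looks roughly like the following; the agent version, server address, and token below are placeholders and will differ in your setup:

$ sudo docker run --rm --privileged \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v /var/lib/rancher:/var/lib/rancher \
      rancher/agent:<agent_version> \
      http://<rancher_server_ip>:8080/v1/scripts/<registration_token>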

External links