Rancher
Rancher is a container management platform that natively supports and manages all of your Cattle, Kubernetes, Mesos, and Swarm clusters.
Container management
- App Catalog
- Orchestration: Compose, Kubernetes, Marathon, etc.
- Scheduling: Swarm, Kubernetes, Mesos, etc.
- Monitoring: cAdvisor, Sysdig, Datadog, etc.
- Access Control: LDAP, AD, GitHub, etc.
- Registry: DockerHub, Quay.io, etc.
- Engine: Docker, Rkt, etc.
- Security: Notary, Vault, etc.
- Network: VXLAN, IPSEC, HAProxy, etc.
- Storage: Ceph, Gluster, Swift, etc.
- Distributed DB: Etcd, Consul, MongoDB, etc.
Setup Rancher HA with AWS
For my Rancher HA with AWS setup, I will use the following:
Virtual Private Cloud (VPC)
- Virtual Private Cloud (VPC): rancher-vpc (w/3 subnets)
- VPC CIDR: 172.22.0.0/16
- Rancher management subnet: 172.22.1.0/24 (us-west-2a)
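If you are building this VPC from scratch, the equivalent AWS CLI calls look roughly like the sketch below; the VPC ID shown (vpc-0abc1234) is a placeholder you would take from the `create-vpc` output:

```
# Create the VPC and tag it (the returned VpcId below is a placeholder)
$ aws --profile dev ec2 create-vpc --cidr-block 172.22.0.0/16
$ aws --profile dev ec2 create-tags --resources vpc-0abc1234 --tags Key=Name,Value=rancher-vpc

# Create the Rancher management subnet in us-west-2a
$ aws --profile dev ec2 create-subnet --vpc-id vpc-0abc1234 \
    --cidr-block 172.22.1.0/24 --availability-zone us-west-2a
```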
Rancher management server nodes (EC2 instances)
- Rancher management server nodes (EC2 instances running CentOS 7):
- mgmt-host-1 (172.22.1.210)
- mgmt-host-2 (172.22.1.211)
- mgmt-host-3 (172.22.1.212)
Each of the Rancher management server nodes (referred to as "server nodes" from now on) will have Docker 1.10.3 installed and running.
Each of the server nodes will have the following security group inbound rules:
| Type  | Protocol | Port  | Source        | Purpose              |
|-------|----------|-------|---------------|----------------------|
| SSH   | TCP      | 22    | 0.0.0.0/0     | ssh                  |
| HTTP  | TCP      | 80    | 0.0.0.0/0     | http                 |
| HTTPS | TCP      | 443   | 0.0.0.0/0     | https                |
| TCP   | TCP      | 81    | 0.0.0.0/0     | proxy_to_http        |
| TCP   | TCP      | 444   | 0.0.0.0/0     | proxy_to_https       |
| TCP   | TCP      | 6379  | 172.22.1.0/24 | redis                |
| TCP   | TCP      | 2376  | 172.22.1.0/24 | swarm                |
| TCP   | TCP      | 2181  | 0.0.0.0/0     | zookeeper_client     |
| TCP   | TCP      | 2888  | 172.22.1.0/24 | zookeeper_quorum     |
| TCP   | TCP      | 3888  | 172.22.1.0/24 | zookeeper_leader     |
| TCP   | TCP      | 3306  | 172.22.1.0/24 | mysql (RDS)          |
| TCP   | TCP      | 8080  | 0.0.0.0/0     |                      |
| TCP   | TCP      | 18080 | 0.0.0.0/0     | (optional)           |
| UDP   | UDP      | 500   | 172.22.1.0/24 | access between nodes |
| UDP   | UDP      | 4500  | 172.22.1.0/24 | access between nodes |
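These rules can be added in the AWS console or with the AWS CLI. As a minimal sketch (the security group ID below is a placeholder for the group attached to the server nodes), the ssh, zookeeper_quorum, and UDP 500 rules would look like:

```
$ SG_ID=sg-0123456789abcdef0   # placeholder: security group attached to the server nodes

$ aws --profile dev ec2 authorize-security-group-ingress \
    --group-id ${SG_ID} --protocol tcp --port 22 --cidr 0.0.0.0/0
$ aws --profile dev ec2 authorize-security-group-ingress \
    --group-id ${SG_ID} --protocol tcp --port 2888 --cidr 172.22.1.0/24
$ aws --profile dev ec2 authorize-security-group-ingress \
    --group-id ${SG_ID} --protocol udp --port 500 --cidr 172.22.1.0/24
```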
External database (RDS)
The external database (DB) will run on an AWS Relational Database Service (RDS) instance named "rancher-ext-db". It will run MariaDB 10.0.24, listen on port 3306 at 172.22.1.26, and reside in the "rancher-vpc" VPC.
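For reference, creating a comparable RDS instance from the AWS CLI might look like the sketch below; the instance class, storage size, credentials, subnet group, and security group ID are placeholder assumptions, not values from this setup:

```
# Instance class, storage, credentials, subnet group, and security group below are placeholders.
$ aws --profile dev rds create-db-instance \
    --db-instance-identifier rancher-ext-db \
    --engine mariadb \
    --engine-version 10.0.24 \
    --db-instance-class db.m3.medium \
    --allocated-storage 20 \
    --port 3306 \
    --master-username cattle \
    --master-user-password 'changeme' \
    --db-subnet-group-name rancher-vpc-db-subnets \
    --vpc-security-group-ids sg-0123456789abcdef0
```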
External load balancer (ELB)
The external load balancer (LB) will be an AWS Elastic Load Balancer (ELB) named "rancher-ext-lb". It will reside in the "rancher-vpc" VPC and have the following listeners configured:
| Load Balancer Protocol | Load Balancer Port | Instance Protocol | Instance Port | Cipher | SSL Certificate |
|------------------------|--------------------|-------------------|---------------|--------|-----------------|
| TCP                    | 80                 | TCP               | 81            | N/A    | N/A             |
| TCP                    | 443                | TCP               | 444           | N/A    | N/A             |
| HTTP                   | 8080               | HTTP              | 8080          | N/A    | N/A             |
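If the ELB is created from the CLI instead of the console, those listeners map onto a `create-load-balancer` call roughly like this (the subnet, security group, and instance IDs are placeholders):

```
# Create the classic ELB with the three listeners from the table above
$ aws --profile dev elb create-load-balancer \
    --load-balancer-name rancher-ext-lb \
    --listeners "Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=81" \
                "Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=444" \
                "Protocol=HTTP,LoadBalancerPort=8080,InstanceProtocol=HTTP,InstancePort=8080" \
    --subnets subnet-0123456789abcdef0 \
    --security-groups sg-0123456789abcdef0

# Register the three management server nodes behind the ELB (instance IDs are placeholders)
$ aws --profile dev elb register-instances-with-load-balancer \
    --load-balancer-name rancher-ext-lb \
    --instances i-0aaa1111 i-0bbb2222 i-0ccc3333
```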
- Create the ELB ProxyProtocol policy and apply it to the backend instance ports (81 and 444):
```
$ AWS_PROFILE=dev
$ LB_NAME=rancher-ext-lb
$ POLICY_NAME=rancher-ext-lb-ProxyProtocol-policy

$ aws --profile ${AWS_PROFILE} elb create-load-balancer-policy \
    --load-balancer-name ${LB_NAME} \
    --policy-name ${POLICY_NAME} \
    --policy-type-name ProxyProtocolPolicyType \
    --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true

$ aws --profile ${AWS_PROFILE} elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name ${LB_NAME} \
    --instance-port 81 \
    --policy-names ${POLICY_NAME}

$ aws --profile ${AWS_PROFILE} elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name ${LB_NAME} \
    --instance-port 444 \
    --policy-names ${POLICY_NAME}
```
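A quick way to double-check that the policy actually got attached to backend ports 81 and 444 is to describe the load balancer again (reusing the variables from the step above):

```
$ aws --profile ${AWS_PROFILE} elb describe-load-balancer-policies \
    --load-balancer-name ${LB_NAME}

$ aws --profile ${AWS_PROFILE} elb describe-load-balancers \
    --load-balancer-names ${LB_NAME} \
    --query 'LoadBalancerDescriptions[].BackendServerDescriptions'
```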
Rancher HA management stack
A fully functioning Rancher HA setup will have the following Docker containers running:
| Service | Containers | IPs | Traffic to | Ports ᵃ | Traffic flow |
|---|---|---|---|---|---|
| 6 x cattle | rancher-ha-parent (x3) | 172.22.1.210, 172.22.1.211, 172.22.1.212 | zookeeper, redis | 3306/tcp, 0.0.0.0:18080->8080/tcp, 0.0.0.0:2181->12181/tcp, 0.0.0.0:2888->12888/tcp, 0.0.0.0:3888->13888/tcp, 0.0.0.0:6379->16379/tcp | |
| | rancher-ha-cattle (x3) | 172.22.1.210, 172.22.1.211, 172.22.1.212 | zookeeper, redis | | |
| 2 x go-machine-service | management_go-machine-service_{1,2} | 172.22.1.210, 172.22.1.211 | cattle | 3306, 8080 | |
| 3 x load-balancer | management_load-balancer_{1,2,3} | 172.22.1.210, 172.22.1.211, 172.22.1.212 | websocket-proxy, cattle | 80, 443, 81, 444 | 0.0.0.0:80-81->80-81/tcp, 0.0.0.0:443-444->443-444/tcp |
| 3 x load-balancer-swarm | management_load-balancer-swarm_{1,2,3} | 172.22.1.210, 172.22.1.211, 172.22.1.212 | websocket-proxy-ssl | 2376 | 0.0.0.0:2376->2376/tcp |
| 2 x rancher-compose-executor | management_rancher-compose-executor_{1,2} | 172.22.1.211, 172.22.1.212 | cattle | | |
| 3 x redis | rancher-ha-redis | 172.22.1.210, 172.22.1.211, 172.22.1.212 | tunnel | | |
| 36 x tunnel | rancher-ha-tunnel-redis-1 (x3) | 172.22.1.210, 172.22.1.211, 172.22.1.212 | redis | 6379 | 0.0.0.0:16379->127.0.0.1:6379/tcp |
| | rancher-ha-tunnel-redis-2 (x3) | 172.22.1.210, 172.22.1.211, 172.22.1.212 | redis | 6379 | 127.0.0.1:6380->172.22.1.211:6379/tcp |
| | rancher-ha-tunnel-redis-3 (x3) | 172.22.1.210, 172.22.1.211, 172.22.1.212 | redis | 6379 | 127.0.0.1:6381->172.22.1.212:6379/tcp |
| | rancher-ha-tunnel-zk-client-1 (x3) | 172.22.1.210, 172.22.1.211, 172.22.1.212 | zookeeper | 2181 | 0.0.0.0:12181->127.0.0.1:2181/tcp |
| | rancher-ha-tunnel-zk-client-2 (x3) | 172.22.1.210, 172.22.1.211, 172.22.1.212 | zookeeper | 2181 | 127.0.0.1:2182->172.22.1.211:2181/tcp |
| | rancher-ha-tunnel-zk-client-3 (x3) | 172.22.1.210, 172.22.1.211, 172.22.1.212 | zookeeper | 2181 | 127.0.0.1:2183->172.22.1.212:2181/tcp |
| | rancher-ha-tunnel-zk-leader-1 (x3) | 172.22.1.210, 172.22.1.211, 172.22.1.212 | zookeeper | 3888 | 0.0.0.0:13888->127.0.0.1:3888/tcp |
| | rancher-ha-tunnel-zk-leader-2 (x3) | 172.22.1.210, 172.22.1.211, 172.22.1.212 | zookeeper | 3888 | 127.0.0.1:3889->172.22.1.211:3888/tcp |
| | rancher-ha-tunnel-zk-leader-3 (x3) | 172.22.1.210, 172.22.1.211, 172.22.1.212 | zookeeper | 3888 | 127.0.0.1:3890->172.22.1.212:3888/tcp |
| | rancher-ha-tunnel-zk-quorum-1 (x3) | 172.22.1.210, 172.22.1.211, 172.22.1.212 | zookeeper | 2888 | 0.0.0.0:12888->127.0.0.1:2888/tcp |
| | rancher-ha-tunnel-zk-quorum-2 (x3) | 172.22.1.210, 172.22.1.211, 172.22.1.212 | zookeeper | 2888 | 127.0.0.1:2889->172.22.1.211:2888/tcp |
| | rancher-ha-tunnel-zk-quorum-3 (x3) | 172.22.1.210, 172.22.1.211, 172.22.1.212 | zookeeper | 2888 | 127.0.0.1:2890->172.22.1.212:2888/tcp |
| 2 x websocket-proxy | management_websocket-proxy_{1,2} | 172.22.1.210, 172.22.1.212 | cattle | | |
| 2 x websocket-proxy-ssl | management_websocket-proxy-ssl_{1,2} | 172.22.1.210, 172.22.1.211 | cattle | | |
| 3 x zookeeper | rancher-ha-zk | 172.22.1.210, 172.22.1.211, 172.22.1.212 | tunnel | | |
| 3 x rancher-ha (cluster-manager) | rancher-ha (x3) | 172.22.1.210, 172.22.1.211, 172.22.1.212 | host | 80, 18080, 3306 | 172.22.1.x:x->172.22.1.26:3306 |
| 3 x NetworkAgent | NetworkAgent | 172.22.1.210, 172.22.1.211, 172.22.1.212 | all | 500/udp, 4500/udp | 0.0.0.0:500->500/udp, 0.0.0.0:4500->4500/udp |
ᵃ TCP, unless otherwise specified.
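Once the stack has settled, a quick sanity check on each server node is to list the running containers and compare their names and published ports against the table above; a minimal check looks like:

```
# On each management server node
$ docker ps --format '{{.Names}}\t{{.Ports}}' | sort
$ docker ps -q | wc -l    # should roughly match the per-node container counts above
```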
Setup Rancher HA on bare-metal
This section will show you how to set up Rancher in High Availability (HA) mode on bare-metal servers. We will also set up a Kubernetes cluster managed by Rancher.
Since a given version of Rancher requires specific versions of Docker and Kubernetes, we will use the following:
- Hardware: 4 x bare-metal servers (rack-mounted)
- OS and software:
- CentOS 7.4
- Rancher 1.6
- Docker 17.03.x-ce
- Kubernetes 1.8
- Install Docker 17.03 (CE):
```
$ sudo yum update
$ curl https://releases.rancher.com/install-docker/17.03.sh | sudo sh
$ sudo systemctl enable docker
$ sudo usermod -aG docker $(whoami)   # log out and then log back in
```
- Check that Docker has been successfully installed:
```
$ docker --version
Docker version 17.03.2-ce, build f5ec1e2

$ docker run hello-world
...
This message shows that your installation appears to be working correctly.
...
```
- Prevent Docker from being upgraded (i.e., lock it to always use Docker 17.03):
```
$ sudo yum -y install yum-versionlock
$ sudo yum versionlock add docker-ce
$ sudo yum versionlock add docker-ce-selinux

$ yum versionlock list
Loaded plugins: fastestmirror, versionlock
0:docker-ce-17.03.2.ce-1.el7.centos.*
0:docker-ce-selinux-17.03.2.ce-1.el7.centos.*
```
Note: If you ever need to remove this version lock, you can run `sudo yum versionlock delete docker-ce-*`.
- Install and configure Network Time Protocol (NTP):
```
$ sudo yum install ntp
$ sudo systemctl start ntpd
$ sudo systemctl enable ntpd
```
- Configure NTP by editing `/etc/ntp.conf` and adding/updating the following lines (note: use the NTP pool servers closest to your bare-metal server's location):
```
$ sudo vi /etc/ntp.conf

restrict default nomodify notrap nopeer noquery kod limited
#...
server 0.north-america.pool.ntp.org iburst
server 1.north-america.pool.ntp.org iburst
server 2.north-america.pool.ntp.org iburst
server 3.north-america.pool.ntp.org iburst
```
- Restart NTP and check status:
```
$ sudo systemctl restart ntpd
$ ntpq -p     # list NTP pool/peer stats
$ ntpdc -l    # list NTP peers
```
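- Optionally, confirm that the clock is actually being synchronized (it can take a few minutes after restarting ntpd); on CentOS 7 the `NTP synchronized` field of `timedatectl` should eventually read `yes`:

```
$ timedatectl | grep -i ntp    # expect "NTP enabled: yes" and "NTP synchronized: yes"
```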