Istio

Istio is an open-source tool that allows you to connect, secure, control, and observe services. It is commonly used as a service mesh in Kubernetes.

In software architecture, a service mesh is a dedicated infrastructure layer for facilitating service-to-service communications between microservices, often using a sidecar proxy.

Having such a dedicated communication layer provides a number of benefits, such as observability into service-to-service traffic, secure connections, and automatic retries and backoff for failed requests.
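For example, per-route retries can be declared on a VirtualService rather than implemented in application code. The following is a minimal sketch (the Service name my-service and the retry values are illustrative, not part of this wiki's setup):

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx,connect-failure
EOF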

Istio architecture

NOTE: This section will cover Istio 1.10.x (as of 16 July 2021).
Control Plane

All Pods in the Control Plane run in the istio-system namespace.

  • istiod (Istio daemon) is the Pod that runs the service-mesh control plane (its functionality was previously split across separate components, such as Pilot).
Data Plane

All other Pods running in your system will have Proxies injected into them (if enabled on a given namespace). These Proxies are collectively called the "Data Plane" in Istio.
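To see which namespaces have injection enabled, and to confirm that a given Pod actually received the istio-proxy sidecar, something like the following can be used (replace <pod_name> with one of your Pods):

$ kubectl get namespaces -L istio-injection
$ kubectl get pod <pod_name> -o jsonpath='{.spec.containers[*].name}'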

Old Istio (pre-1.8)

NOTE: The following was true as of October 2019. Istio has since been refactored, and some of the details below have changed.

Istio is made up of the following components:

Envoy (L7 proxy)
  • Dynamic service discovery
  • Load balancing
  • Health checks
  • Staged rollouts
  • Fault injection
Control Plane API
Pilot (pushes configuration to the proxies)
  • Routing policies
  • Service discovery
  • Intelligent routing
  • Resiliency
Citadel
  • User authentication
  • Credential management
  • Certificate management
  • Traffic encryption
Mixer
  • Access control
  • Usage policies
  • Telemetry data
Misc
  • Galley
Istio policies
  • Uses Mixer

Install Istio

Minikube method

  • Install minikube and check its version:
$ minikube version
minikube version: v1.22.0
  • Start up minikube cluster (w/4GB of RAM):
$ minikube start --memory 4096
  • Install Custom Resource Definitions (CRDs) for Istio:
$ kubectl apply -f 1-istio-init.yaml 
namespace/istio-system created
customresourcedefinition.apiextensions.k8s.io/authorizationpolicies.security.istio.io created
customresourcedefinition.apiextensions.k8s.io/destinationrules.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/envoyfilters.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/gateways.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/istiooperators.install.istio.io created
customresourcedefinition.apiextensions.k8s.io/peerauthentications.security.istio.io created
customresourcedefinition.apiextensions.k8s.io/requestauthentications.security.istio.io created
customresourcedefinition.apiextensions.k8s.io/serviceentries.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/sidecars.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/telemetries.telemetry.istio.io created
customresourcedefinition.apiextensions.k8s.io/virtualservices.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/workloadentries.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/workloadgroups.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/monitoringdashboards.monitoring.kiali.io created
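Optionally, confirm the CRDs were registered (the exact count depends on the Istio release):
$ kubectl get crd | grep -c 'istio.io'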
  • Install Istio:
$ grep ^kind 2-istio-minikube.yaml | wc -l
78
$ kubectl apply -f 2-istio-minikube.yaml
$ kubectl -n istio-system get po
NAME                                   READY   STATUS    RESTARTS   AGE
grafana-7bdcf77687-5s5gd               1/1     Running   0          5m30s
istio-egressgateway-5547fcc8fc-tkgln   1/1     Running   0          5m31s
istio-ingressgateway-8f568d595-2xvm8   1/1     Running   0          5m31s
istiod-6659979bdf-z7qbk                1/1     Running   0          5m31s
jaeger-5c7c5c8d87-ks2bv                1/1     Running   0          5m30s
kiali-7fd9f6f484-dnrwn                 1/1     Running   0          5m29s
prometheus-f5f544b59-z5c7s             2/2     Running   0          5m30s
  • Create Kiali secret:
$ kubectl apply -f 3-kiali-secret.yaml
  • Enable Istio sidecar injection:
$ kubectl label namespace default istio-injection=enabled
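To confirm the label was applied (the ISTIO-INJECTION column should show enabled):
$ kubectl get namespace default -L istio-injection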
  • Deploy sample app:
$ kubectl apply -f 4-application-full-stack.yaml
$ kubectl get po
NAME                                  READY   STATUS    RESTARTS   AGE
api-gateway-5cd5c547c6-jgksc          2/2     Running   0          5m28s
photo-service-7c79458679-822mw        2/2     Running   0          5m28s
position-simulator-6c7b7949f8-fb227   2/2     Running   0          5m28s
position-tracker-cbbc8b7f6-rhz9s      2/2     Running   0          5m28s
staff-service-6597879677-n5qwh        2/2     Running   0          5m28s
vehicle-telemetry-c8fcb46c6-qf6hq     2/2     Running   0          5m28s
webapp-85fd946885-kwzsr               2/2     Running   0          5m28s
  • Get cluster IP from minikube:
$ minikube ip
192.168.49.2
  • Get port for webapp:
$ kubectl get svc | grep fleetman-webapp
fleetman-webapp              NodePort    10.107.155.240   <none>        80:30080/TCP   6m38s

Put the IP and port into your browser (i.e., 192.168.49.2:30080).
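Alternatively, minikube can print the full URL for the NodePort service directly (using the fleetman-webapp Service name from the output above):

$ minikube service fleetman-webapp --url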

  • Get Kiali and Jaeger (aka "tracing") ports:
$ kubectl -n istio-system get service kiali --output jsonpath={.spec.ports[*].nodePort}
31000 30479
$ kubectl -n istio-system get service tracing --output jsonpath={.spec.ports[*].nodePort}
31001
  • Kiali (31000):
$ curl -sIL $(minikube ip):31000/ | grep ^HTTP
HTTP/1.1 302 Found
HTTP/1.1 200 OK
  • Jaeger (31001):
$ curl -sIL $(minikube ip):31001/ | grep ^HTTP
HTTP/1.1 200 OK

Docker method

  • Add current user to docker group:
sudo usermod -aG docker $(whoami)
  • Install docker-compose and make it executable:
COMPOSE_VERSION=1.23.2
sudo curl -L "https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-Linux-x86_64" \
  -o /usr/local/bin/docker-compose  
sudo chmod +x /usr/local/bin/docker-compose
  • Download Istio and unpack it:
ISTIO_VERSION=1.0.6
wget https://github.com/istio/istio/releases/download/${ISTIO_VERSION}/istio-${ISTIO_VERSION}-linux-amd64.tar.gz
tar -xvf istio-1.0.6-linux-amd64.tar.gz
chmod +x istio-1.0.6/bin/istioctl && mv istio-1.0.6/bin/istioctl /usr/local/bin/
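To confirm istioctl is on the PATH (depending on the version, this may also try to contact a cluster for the server-side version):
istioctl version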
  • Preconfigure kubectl for pilot:
kubectl config set-context istio --cluster=istio
kubectl config set-cluster istio --server=http://localhost:8080
kubectl config use-context istio
  • Create a DOCKER_GATEWAY environment variable:
export DOCKER_GATEWAY=172.28.0.1:  # <- don't forget the colon
  • Bring up Istio's control plane (this command may need to be repeated to ensure the pilot container starts):
docker-compose -f install/consul/istio.yaml up -d
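To check whether the pilot container actually came up (and re-run the command above if it did not):
docker-compose -f install/consul/istio.yaml ps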
  • Change bookinfo.yaml from using port 9081 to port 30080:
sed -i 's/9081/30080/' ./istio-1.0.6/samples/bookinfo/platform/consul/bookinfo.yaml
  • Bring up the application:
docker-compose -f ./istio-1.0.6/samples/bookinfo/platform/consul/bookinfo.yaml up -d
  • Bring up the sidecars:
docker-compose -f ./istio-1.0.6/samples/bookinfo/platform/consul/bookinfo.sidecars.yaml up -d
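As a quick sanity check once the containers are up (assuming the port change above means productpage is now published on 30080):
curl -sI http://localhost:30080/productpage | head -n1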

Kubernetes method

  • Get the Istio installation package onto the Kube Master and unpack it:
$ wget https://github.com/istio/istio/releases/download/1.0.6/istio-1.0.6-linux.tar.gz
$ tar -xvf istio-1.0.6-linux.tar.gz
  • Add istioctl to our path:
$ export PATH=<path_to_istio_bin>:$PATH
  • Set Istio to NodePort at port 30080:
$ sed -i 's/LoadBalancer/NodePort/;s/31380/30080/' ./istio-1.0.6/install/kubernetes/istio-demo.yaml
  • Bring up the Istio control plane:
$ kubectl apply -f ./istio-1.0.6/install/kubernetes/istio-demo.yaml
  • Verify that the control plane is running:
$ kubectl -n istio-system get pods

When all of the Pods are up and running (which we can verify by running that command again) we can move on.
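Alternatively, instead of re-running the command, watch the namespace until everything settles (Ctrl-C to stop):

$ kubectl -n istio-system get pods --watch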

  • Install the "bookinfo" application with manual sidecar injection:
$ kubectl apply -f $(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)
  • Verify that the application is running and that there are 2 containers per Pod:
$ kubectl get pods
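One way to confirm the sidecar was injected is to list the containers of a single Pod; for example, for productpage (the app=productpage label comes from the bookinfo manifests), the output should include both the application container and istio-proxy:
$ kubectl get pods -l app=productpage -o jsonpath='{.items[0].spec.containers[*].name}'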
  • Once everything is running, create an Ingress and virtual service for the application:
$ kubectl apply -f istio-1.0.6/samples/bookinfo/networking/bookinfo-gateway.yaml
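Since istio-demo.yaml was switched to a NodePort on 30080 above, the ingress gateway ports can be double-checked with:
$ kubectl -n istio-system get svc istio-ingressgateway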

Verify the page loads at the URI http://<kn1_IP_ADDRESS>:30080/productpage

Verify that routing rules work by routing the application to v1, then v2, of the reviews backend service
  • Set the default destination rules:
$ kubectl apply -f istio-1.0.6/samples/bookinfo/networking/destination-rule-all.yaml
  • Route all traffic to version 1 of the application and verify that it is working:
$ kubectl apply -f istio-1.0.6/samples/bookinfo/networking/virtual-service-all-v1.yaml
  • Update the virtual service file to point to version 2 of the service and verify that it is working. Edit istio-1.0.6/samples/bookinfo/networking/virtual-service-all-v1.yaml (using whatever text editor you like) and change this:
    - destination:
        host: reviews
        subset: v1

to this:

    - destination:
        host: reviews
        subset: v2
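After saving the change, re-apply the edited file so the route actually switches to v2:

$ kubectl apply -f istio-1.0.6/samples/bookinfo/networking/virtual-service-all-v1.yaml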

Prometheus and Grafana

In this section, we use Prometheus and Grafana to gain insight into the behaviour of the traffic inside the Istio mesh. To access these services from a browser, Nginx is used as a reverse proxy. This is the Nginx configuration that was used (located at /etc/nginx/sites-enabled/default):

server {
  listen 80 default_server;
  listen [::]:80 default_server;
  root /var/www/html;
  index index.html index.htm index.nginx-debian.html;
  server_name _;  
  location / {proxy_pass  http://127.0.0.1:9090;} # Prometheus
  #location / {proxy_pass  http://127.0.0.1:3000;}  # Grafana
}
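After editing the file, test the configuration and reload Nginx (assuming a systemd-based host):

sudo nginx -t
sudo systemctl reload nginx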

There are also two commands used to forward the ports.

  • The command to forward the ports for Prometheus:
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090 &
  • The command to forward the port for Grafana:
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
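With the port-forwards in place, a quick check that both UIs answer locally (each should return a 2xx or 3xx status):

curl -sI http://localhost:9090/graph | head -n1
curl -sI http://localhost:3000/ | head -n1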

See also

External links