Kubernetes/AWS
From Christoph's Personal Wiki
This article covers topics related to running Kubernetes on AWS, whether on EKS, stand-alone EC2 instances, etc.
Enable ELB Access Logs via Kubernetes Service
- Setup details
- Kubernetes v1.17.3
- kubectl v1.17.3
- 1 x EC2 instance (Ubuntu 16.04) => k8s master+worker node
- Initial steps
- First, set up some environment variables:
$ MY_ELB_LOGS_BUCKET=my-elb-logs
$ ELB_ACCOUNT_ID=797873946194  # <- us-west-2
You can find the appropriate ${ELB_ACCOUNT_ID} for your region in AWS's "Enable Access Logs for Your Classic Load Balancer" documentation (see Related links below).
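- For reference, two commonly used values (these are from memory of the AWS docs; double-check the documentation for your region before relying on them):
$ ELB_ACCOUNT_ID=127311923021  # <- us-east-1
$ ELB_ACCOUNT_ID=797873946194  # <- us-west-2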
- Create an S3 bucket in which to host your ELB logs:
$ aws s3 mb s3://${MY_ELB_LOGS_BUCKET}
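- Optionally, confirm the bucket landed in the expected region (the ELB can only deliver logs to a bucket in its own region):
$ aws s3api get-bucket-location --bucket ${MY_ELB_LOGS_BUCKET}
{
    "LocationConstraint": "us-west-2"
}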
- Make sure this S3 bucket has the following bucket policy (set under its Permissions tab):
$ cat <<EOF >policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::${ELB_ACCOUNT_ID}:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::${MY_ELB_LOGS_BUCKET}/*"
    }
  ]
}
EOF
$ aws s3api put-bucket-policy --bucket ${MY_ELB_LOGS_BUCKET} --policy file://policy.json
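- Verify the policy was applied (a quick sanity check; prints the policy document as JSON):
$ aws s3api get-bucket-policy --bucket ${MY_ELB_LOGS_BUCKET} --query Policy --output text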
- Kubernetes setup
- Create a test Nginx Deployment:
$ cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.9
        ports:
        - containerPort: 80
EOF
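- Wait for the Deployment to finish rolling out and confirm the Pod is running:
$ kubectl rollout status deployment/nginx-deployment
$ kubectl get pods -l app=nginx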
- Create a Service to put in front of the above Deployment:
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: frontdoor-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
    # The interval for publishing the access logs (can be 5 or 60 minutes).
    service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "5"
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "${MY_ELB_LOGS_BUCKET}"
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "logs/frontdoor"
  labels:
    app: frontdoor
spec:
  type: LoadBalancer
  ports:
  - name: frontdoorport
    port: 30010
    targetPort: 30010
  selector:
    app: nginx
EOF
- Get information on the Service we just created:
$ kubectl get svc frontdoor-service
NAME                TYPE           CLUSTER-IP     EXTERNAL-IP                                                               PORT(S)           AGE
frontdoor-service   LoadBalancer   10.43.184.39   a371dfd887b56468fa65e126e0d03500-527425434.us-west-2.elb.amazonaws.com    30010:30526/TCP   62m
$ kubectl describe svc frontdoor-service
Name:                     frontdoor-service
Namespace:                default
Labels:                   app=frontdoor
Annotations:              service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: 5
                          service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: true
                          service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: my-elb-logs
                          service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: logs/frontdoor
...
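- The ELB's name (used in the AWS CLI commands below) is the first dash-separated segment of the EXTERNAL-IP hostname; one way to extract it into a variable (a sketch):
$ ELB_NAME=$(kubectl get svc frontdoor-service \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | cut -d- -f1)
$ echo ${ELB_NAME}
a371dfd887b56468fa65e126e0d03500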
- AWS details
- Describe the AWS load balancer (Classic ELB) that Kubernetes automatically created for us:
$ aws elb describe-load-balancer-attributes \
    --profile default \
    --region us-west-2 \
    --load-balancer-name a371dfd887b56468fa65e126e0d03500
{
    "LoadBalancerAttributes": {
        "ConnectionDraining": {
            "Enabled": false,
            "Timeout": 300
        },
        "CrossZoneLoadBalancing": {
            "Enabled": false
        },
        "ConnectionSettings": {
            "IdleTimeout": 60
        },
        "AccessLog": {
            "S3BucketPrefix": "logs/frontdoor",
            "EmitInterval": 5,
            "Enabled": true,
            "S3BucketName": "my-elb-logs"
        }
    }
}
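- These attributes are managed from the Service annotations, so prefer changing them there; the cloud provider may overwrite a manual change on its next sync. That said, a sketch of changing them directly (e.g., bumping the emit interval to 60 minutes):
$ aws elb modify-load-balancer-attributes \
    --profile default \
    --region us-west-2 \
    --load-balancer-name a371dfd887b56468fa65e126e0d03500 \
    --load-balancer-attributes "{\"AccessLog\":{\"Enabled\":true,\"S3BucketName\":\"${MY_ELB_LOGS_BUCKET}\",\"S3BucketPrefix\":\"logs/frontdoor\",\"EmitInterval\":60}}"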
- Interact with the ELB's DNS name to generate some traffic for our access logs:
$ ab -c100 -n20000 http://a371dfd887b56468fa65e126e0d03500-527425434.us-west-2.elb.amazonaws.com:30010/
$ for i in $(seq 1 100); do
    curl -sI http://a371dfd887b56468fa65e126e0d03500-527425434.us-west-2.elb.amazonaws.com:30010/ | grep ^HTTP
  done
- Check that the S3 bucket has ELB access logs:
$ aws s3 ls \
    --profile default \
    --recursive \
    s3://${MY_ELB_LOGS_BUCKET}/logs/frontdoor/
2020-03-04 16:05:12         86 logs/frontdoor/AWSLogs/<redacted>/ELBAccessLogTestFile
2020-03-04 16:25:16        156 logs/frontdoor/AWSLogs/<redacted>/elasticloadbalancing/us-west-2/2020/03/05/<redacted>_elasticloadbalancing_us-west-2_a371dfd887b56468fa65e126e0d03500_20200305T0025Z_54.39.161.151_4jmuxnr9.log
2020-03-04 16:25:31      15434 logs/frontdoor/AWSLogs/<redacted>/elasticloadbalancing/us-west-2/2020/03/05/<redacted>_elasticloadbalancing_us-west-2_a371dfd887b56468fa65e126e0d03500_20200305T0025Z_52.216.39.65_2tv1rd8u.log
- View the contents of one of those access logs:
$ aws --profile default s3 cp \
    s3://${MY_ELB_LOGS_BUCKET}/logs/frontdoor/AWSLogs/<redacted>/elasticloadbalancing/us-west-2/2020/03/05/<redacted>_elasticloadbalancing_us-west-2_a371dfd887b56468fa65e126e0d03500_20200305T0025Z_52.216.39.65_2tv1rd8u.log - | head -3
2020-03-05T00:22:25.152094Z a371dfd887b56468fa65e126e0d03500 70.104.137.198:35200 10.10.0.167:30526 0.000432 0.000006 0.000015 - - 141 238 "- - - " "-" - -
2020-03-05T00:22:25.243193Z a371dfd887b56468fa65e126e0d03500 70.104.137.198:22800 10.10.0.167:30526 0.000518 0.000007 0.000016 - - 141 238 "- - - " "-" - -
2020-03-05T00:22:25.282568Z a371dfd887b56468fa65e126e0d03500 70.104.137.198:22801 10.10.0.167:30526 0.000422 0.000005 0.000014 - - 141 238 "- - - " "-" - -
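- Classic ELB access log fields are space-delimited (timestamp, ELB name, client:port, backend:port, timings, bytes, request, ...), so standard shell tools work for quick summaries. For example, a count of requests per client IP for one log file (`<log-file>` is a placeholder for one of the keys listed above):
$ aws --profile default s3 cp \
    s3://${MY_ELB_LOGS_BUCKET}/logs/frontdoor/AWSLogs/<redacted>/<log-file> - | \
    awk '{print $3}' | cut -d: -f1 | sort | uniq -c | sort -rn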
Related links
- Kubernetes Cloud Providers - AWS
- Enable Access Logs for Your Classic Load Balancer
- Rancher - Setting up Cloud Providers
Enable TLS Termination via Kubernetes Service
- Variables:
$ AWS_PROFILE=default
$ AWS_REGION=us-west-2
$ AWS_ACM_ARN=arn:aws:acm:us-west-2:000000000000:certificate/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
$ K8S_CLUSTER_ID=c-abc1d
Note: If your k8s cluster was created by Rancher, the ${K8S_CLUSTER_ID} will look something like "c-abc1d" and you can find it in the Rancher UI.
- First, tag all AWS VPC Subnets used by your Kubernetes cluster with the following tags:
KubernetesCluster: ${K8S_CLUSTER_ID}
kubernetes.io/role/elb: 1
kubernetes.io/cluster/${K8S_CLUSTER_ID}: owned  # <- only needed if cluster was created by Rancher
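- A sketch of applying those tags via the AWS CLI (the subnet IDs below are placeholders; substitute the subnets your cluster actually uses):
$ for SUBNET_ID in subnet-00000000000000000 subnet-11111111111111111; do
    aws --profile ${AWS_PROFILE} --region ${AWS_REGION} \
      ec2 create-tags \
      --resources ${SUBNET_ID} \
      --tags "Key=KubernetesCluster,Value=${K8S_CLUSTER_ID}" \
             "Key=kubernetes.io/role/elb,Value=1" \
             "Key=kubernetes.io/cluster/${K8S_CLUSTER_ID},Value=owned"
  done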
- Describe the subnet to validate correct tagging:
$ aws --profile ${AWS_PROFILE} \
    --region ${AWS_REGION} \
    ec2 describe-subnets \
    --subnet-ids subnet-00000000000000000
{
    "Subnets": [
        {
            "Tags": [
                {
                    "Value": "1",
                    "Key": "kubernetes.io/role/elb"
                },
                {
                    "Value": "c-abc1d",
                    "Key": "KubernetesCluster"
                },
                {
                    "Value": "owned",
                    "Key": "kubernetes.io/cluster/c-abc1d"
                }
            ],
            ...
        }
    ]
}
- Create Kubernetes Service and associated Deployment (using sample Nginx container):
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-tls-term
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "${AWS_ACM_ARN}"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    external-dns.alpha.kubernetes.io/hostname: "nginx.example.com"
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-tls-term
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
- Describe Kubernetes Service (i.e., the one created above):
$ kubectl get services
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                                                                PORT(S)         AGE
nginx-tls-term   LoadBalancer   10.43.236.159   aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-0000000000.us-west-2.elb.amazonaws.com    443:30253/TCP   16h
$ kubectl describe service nginx-tls-term
Name:                     nginx-tls-term
Namespace:                default
Labels:                   <none>
Annotations:              external-dns.alpha.kubernetes.io/hostname: nginx.example.com
                          field.cattle.io/publicEndpoints:
                            [{"addresses":["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-0000000000.us-west-2.elb.amazonaws.com"],"port":443,"protocol":"TCP","serviceName":"defau...
                          kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"external-dns.alpha.kubernetes.io/hostname":"nginx.example.com","service....
                          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
                          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:000000000000:certificate/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
                          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
...
- Get headers from nginx.example.com:
$ curl -Iv https://nginx.example.com
...
* Server certificate:
*  subject: CN=*.example.com
*  start date: Mar 19 00:00:00 2020 GMT
*  expire date: Apr 19 12:00:00 2021 GMT
*  subjectAltName: host "nginx.example.com" matched cert's "*.example.com"
*  issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
*  SSL certificate verify ok.
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 612
Content-Type: text/html
Date: Fri, 20 Mar 2020 17:07:57 GMT
ETag: "5e5e6a8f-264"
Last-Modified: Tue, 03 Mar 2020 14:32:47 GMT
Server: nginx/1.17.9
Connection: keep-alive
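- You can also inspect the certificate the ELB presents with openssl, independent of curl (the `echo |` closes stdin so s_client exits immediately):
$ echo | openssl s_client -connect nginx.example.com:443 -servername nginx.example.com 2>/dev/null | \
    openssl x509 -noout -subject -dates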
- If using AWS Route53 for your domain name (FQDN => nginx.example.com):
$ aws --profile ${AWS_PROFILE} \
    --region ${AWS_REGION} \
    route53 test-dns-answer \
    --hosted-zone-id ZAAAAAAAAAAAA \
    --record-name nginx.example.com \
    --record-type A
{
    "Protocol": "UDP",
    "RecordType": "A",
    "RecordName": "nginx.example.com",
    "Nameserver": "ns-167.awsdns-20.com",
    "RecordData": [
        "35.164.250.42",
        "52.41.247.10"
    ],
    "ResponseCode": "NOERROR"
}
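- To see the record that external-dns created for the Service, you can also list the zone's record sets filtered by name; a sketch (the hosted-zone ID is the placeholder from above):
$ aws --profile ${AWS_PROFILE} \
    route53 list-resource-record-sets \
    --hosted-zone-id ZAAAAAAAAAAAA \
    --query "ResourceRecordSets[?Name=='nginx.example.com.']"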
- If you used AWS's ACM service to create your TLS certificate (a wildcard cert, in my case):
$ aws --profile ${AWS_PROFILE} \
    acm describe-certificate \
    --certificate-arn ${AWS_ACM_ARN}
{
    "Certificate": {
        ...
        "DomainValidationOptions": [
            {
                "ValidationStatus": "SUCCESS",
                "ResourceRecord": {
                    "Type": "CNAME",
                    "Name": "_00000000000000000000000000000000.example.com.",
                    "Value": "_00000000000000000000000000000000.aaaaaaaaaa.acm-validations.aws."
                },
                "ValidationDomain": "*.example.com",
                "ValidationMethod": "DNS",
                "DomainName": "*.example.com"
            }
        ],
        "Subject": "CN=*.example.com"
    }
}
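- If you don't have the certificate ARN handy, it can be looked up by domain name; a sketch (using the example wildcard domain from above):
$ aws --profile ${AWS_PROFILE} --region ${AWS_REGION} \
    acm list-certificates \
    --query "CertificateSummaryList[?DomainName=='*.example.com'].CertificateArn" \
    --output text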