Terraform

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It was created by HashiCorp and first released in 2014. Terraform can manage existing and popular service providers as well as custom in-house solutions. It is a popular tool in DevOps.

Introduction

  • Infrastructure as Code
  • Used for the automation of your infrastructure
  • It keeps your infrastructure in a certain state (compliant)
    • E.g., 2 web instances, 2 volumes, and 1 load balancer (a minimal sketch follows this list)
  • It makes your infrastructure auditable
    • That is, you can keep your infrastructure change history in a version control system (e.g., git)
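
For example, the "2 web instances" case above boils down to a declarative resource definition. A minimal sketch (the AMI ID is a hypothetical placeholder):

resource "aws_instance" "web" {
  count         = 2               # Terraform converges on exactly two instances
  ami           = "ami-12345678"  # hypothetical placeholder AMI ID
  instance_type = "t2.micro"
}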

A high-level difference, and a reason to use Terraform over CAPS (Chef, Ansible, Puppet, Salt), is that those tools focus on automating the installation and configuration of software (i.e., keeping the machines in compliance, in a certain state). Terraform, however, automates the provisioning of the infrastructure itself (e.g., in AWS or Google Cloud). One can, of course, do the same with, say, Ansible; however, Terraform really shines at infrastructure management and automation.

Examples

Basic example #1

The following is a super simple example of how to use Terraform to spin up a single AWS EC2 instance.

  • Create a working directory for your Terraform project:
$ mkdir ~/dev/terraform
  • Create a Terraform file describing the AWS EC2 instance to create:
$ cat << 'EOF' > instance.tf
provider "aws" {
  access_key = "<REDACTED>"
  secret_key = "<REDACTED>"
  region     = "us-west-2"
}

resource "aws_instance" "xtof-terraform" {
  ami           = "ami-a042f4d8"  # CentOS 7.4
  instance_type = "t2.micro"
}
EOF
  • Initialize your Terraform working directory:
$ terraform init
  • Create your EC2 instance:
$ terraform plan
$ terraform apply

Note: A better method to use is:

$ terraform plan -out myinstance.terraform
$ terraform apply myinstance.terraform

By using the two separate commands above, Terraform first shows you which changes it will make without actually making them, and the saved plan file ensures that only the changes you saw on screen are applied. If you were to just run terraform apply, additional changes could slip in, because the remote infrastructure may have changed or files may have been edited (e.g., by someone else on your team) in the meantime. In short, always use the plan/apply-from-file method.

  • Destroy the above instance:
$ terraform destroy
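
Note: The plan/apply-from-file method works for destroys as well. terraform plan -destroy writes out a plan that, when applied, destroys the managed resources:

$ terraform plan -destroy -out myinstance.terraform
$ terraform apply myinstance.terraform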

Basic example #2

The following builds upon what we did in "Basic example #1", this time taking a more "best practices" approach. We will continue to build on these examples.

  • Create a working directory (aws.create_ec2_instance) with the following files:
aws.create_ec2_instance/
├── .gitignore
├── instance.tf
├── provider.tf
├── terraform.tfvars
└── vars.tf
$ cat << EOF > .gitignore
# Compiled files
*.tfstate
*.tfstate.backup

# Variables files with secrets
*.tfvars

# Plan files
*.plan

# Certificate files
*.pem
*.pfx
*.crt
*.key
EOF

The contents of each of the above files should look like the following:

$ cat << 'EOF' > instance.tf
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
}
EOF

$ cat << 'EOF' > provider.tf
provider "aws" {
  access_key = "${var.AWS_ACCESS_KEY}"
  secret_key = "${var.AWS_SECRET_KEY}"
  region     = "${var.AWS_REGION}"
}
EOF

$ cat << 'EOF' > terraform.tfvars
AWS_ACCESS_KEY = "<REDACTED>"
AWS_SECRET_KEY = "<REDACTED>"
EOF

$ cat << 'EOF' > vars.tf
variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}
variable "AWS_REGION" {
  default = "us-west-2"
}
variable "AMIS" {
  type = "map"
  default = {
    us-west-2 = "ami-b2d463d2"
    us-east-1 = "ami-13be557e"
    eu-west-1 = "ami-0d729a60"
  }
}
EOF
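  • Note: any variable declared in vars.tf can also be set (or overridden) on the command line with -var, or via environment variables prefixed with TF_VAR_:
$ terraform plan -var 'AWS_REGION=us-east-1'
$ # or, equivalently:
$ export TF_VAR_AWS_REGION=us-east-1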
  • Initialize the Terraform working directory:
$ terraform init
  • Now, "plan" your execution with:
$ terraform plan -out myinstance.plan
...
+ aws_instance.example
    ami:                         "ami-b2d463d2"
    associate_public_ip_address: "<computed>"
    availability_zone:           "<computed>"
    ebs_block_device.#:          "<computed>"
    ephemeral_block_device.#:    "<computed>"
    instance_state:              "<computed>"
    instance_type:               "t2.micro"
    key_name:                    "<computed>"
    network_interface_id:        "<computed>"
    placement_group:             "<computed>"
    private_dns:                 "<computed>"
    private_ip:                  "<computed>"
    public_dns:                  "<computed>"
    public_ip:                   "<computed>"
    root_block_device.#:         "<computed>"
    security_groups.#:           "<computed>"
    source_dest_check:           "true"
    subnet_id:                   "<computed>"
    tenancy:                     "<computed>"
    vpc_security_group_ids.#:    "<computed>"

Plan: 1 to add, 0 to change, 0 to destroy.
  • Now, "apply" (or actually create the EC2 instance):
$ terraform apply myinstance.plan
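  • Once the apply finishes, the attributes of the new instance (ID, IP addresses, etc.) can be inspected in the local state with:
$ terraform show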

Basic example #3

Pull down a Docker image

This example will create a very simple Terraform file that will pull down an image (ghost) from Docker Hub.

  • Set up the environment:
$ mkdir -p terraform/ghost && cd terraform/ghost
  • Create a Terraform script:
$ cat << EOF > main.tf
# Download the latest Ghost image
resource "docker_image" "image_id" {
  name = "ghost:latest"
}
EOF
  • Initialize Terraform:
$ terraform init
  • Validate the Terraform file:
$ terraform validate
  • List providers in the folder:
$ ls .terraform/plugins/linux_amd64/
  • List providers used in the configuration:
$ terraform providers
.
└── provider.docker
  • Terraform Plan:
$ terraform plan -out=project.plan
  • Terraform Apply:
$ terraform apply "project.plan"
  • List Docker ghost image:
$ docker image ls | grep ^ghost
ghost   latest   ebaf3206b9da   5 days ago   380MB
  • Terraform Show:
$ terraform show
docker_image.image_id:
  id = sha256:ebaf3206b9da09b0999b9d2db7c84bb6f78586b7b9f8595d046b7eca571a07f5ghost:latest
  latest = sha256:ebaf3206b9da09b0999b9d2db7c84bb6f78586b7b9f8595d046b7eca571a07f5
  name = ghost:latest
  • Destroy Terraform project (i.e., do the reverse of the above):
$ terraform destroy
  • Verify Docker image has been removed:
$ docker image ls | grep ^ghost
$ terraform show

Both of the above commands should return nothing.

Deploy a Docker container

In this section, we will expand upon what we did above (pull down a Docker image) by creating a container.

  • Start up a container running the Ghost Blog:
$ cat << 'EOF' > main.tf
# Download the latest Ghost image
resource "docker_image" "image_id" {
  name = "ghost:latest"
}

# Start the Container
resource "docker_container" "container_id" {
  name  = "ghost_blog"
  image = "${docker_image.image_id.latest}"
  ports {
    internal = "2368"
    external = "80"
  }
}
EOF

$ terraform validate
$ terraform plan -out=project.plan
$ terraform apply project.plan
  • Verify that the Ghost blog is running:
$ docker container ls
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS                  NAMES
babe7db87d51   ebaf3206b9da   "docker-entrypoint.s…"   7 seconds ago   Up 4 seconds   0.0.0.0:80->2368/tcp   ghost_blog

$ curl -I localhost
HTTP/1.1 200 OK
X-Powered-By: Express
Cache-Control: public, max-age=0
Content-Type: text/html; charset=utf-8
Content-Length: 21694
ETag: W/"54be-JQLstl8ocjMgh3/fswe5SP78jTg"
Vary: Accept-Encoding
Date: Wed, 18 Sep 2019 22:03:04 GMT
Connection: keep-alive

$ curl -s localhost | grep -E "<title>"
    <title>Ghost</title>
  • Cleanup:
$ terraform destroy

Concepts

Provisioners

File uploads
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"

  provisioner "file" {
    source      = "app.conf"
    destination = "/etc/myapp.conf"
  }
}
Connection
# Copies the file as the instance_username user using SSH
provisioner "file" {
  source      = "conf/myapp.conf"
  destination = "/etc/myapp.conf"

  connection {
    type     = "ssh"
    user     = "${var.instance_username}"
    password = "${var.instance_password}"
  }
}
  • Copy a script to the instance and execute it:
resource "aws_key_pair" "mykey" {
  key_name   = "christoph-aws-key"
  #public_key = "ssh-rsa my-public-key"
  public_key = "${file("${var.PATH_TO_PUBLIC_KEY}")}"
}

resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
  key_name      = "${aws_key_pair.mykey.key_name}"

  provisioner "file" {
    source      = "src/script.sh"
    destination = "/tmp/script.sh"
  }
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/script.sh",
      "sudo /tmp/script.sh"
    ]
  }

  connection {
    type        = "ssh"
    user        = "${var.instance_username}"
    private_key = "${file("${var.PATH_TO_PRIVATE_KEY}")}"
  }
}

Outputs

Outputs define values that are highlighted to the user when Terraform applies a configuration; these values can also be queried easily using the terraform output command.

resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
}

output "ip" {
  value = "${aws_instance.example.public_ip}"
}
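
After a terraform apply, the value can then be retrieved at any time (the IP address shown is just an example):

$ terraform output ip
1.2.3.4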

You can refer to any attribute by combining the following elements:

  • The resource type (e.g., aws_instance)
  • The resource name (e.g., example)
  • The attribute name (e.g., public_ip)

See the Terraform AWS provider documentation for a complete list of attributes for AWS EC2 instances.

  • You can also use these attributes in a script:
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"

  provisioner "local-exec" {
    command = "echo ${aws_instance.example.private_ip} >> private_ips.txt"
  }
}

Terraform state

  • Terraform keeps the remote state of the infrastructure
  • It stores it in a file called terraform.tfstate
  • There is also a backup of the previous state in terraform.tfstate.backup
  • When you execute terraform apply, a new terraform.tfstate and backup is created
  • This is how Terraform keeps track of the remote state
    • If the remote state changes and you run terraform apply again, Terraform will make changes to bring the infrastructure back to the correct state.
    • E.g., if you manually terminate an instance that is managed by Terraform, it will be re-created the next time you run terraform apply.
  • You can keep the terraform.tfstate in version control (e.g., git).
    • This will give you a history of your terraform.tfstate file (which is just a big JSON file)
    • This allows you to collaborate with other team members (however, you can get conflicts when two or more people make changes at the same time)
  • Local state works well with simple setups. However, if your project involves multiple team members working on a larger setup, it is better to store your state remotely
    • The Terraform state can be saved remotely, using the backend functionality in Terraform.
    • Using a remote store for the Terraform state will ensure that you always have the latest version of the state.
    • It avoids having to commit and push the terraform.tfstate file to version control.
    • However, make sure the Terraform remote store you choose supports locking! (note: both s3 and consul support locking)
  • The default state is a local backend (the local Terraform state file)
  • Other backends include:
    • AWS S3 (with a locking mechanism using DynamoDB)
    • Consul (with locking)
    • Terraform Enterprise (the commercial solution)
  • Using the backend functionality has definite benefits:
    • Working in a team, it allows for collaboration (the remote state will always be available for the whole team)
    • The state file is not stored locally and possible sensitive information is only stored in the remote state
    • Some backends will enable remote operations. The terraform apply will then run completely remotely. These are called enhanced backends.
  • There are two steps to configure a remote state:
    1. Add the backend code to a .tf file
    2. Run the initialization process
Consul backend
  • To configure a Consul remote store, you can add a file (backend.tf) with the following contents:
terraform {
  backend "consul" {
    address = "demo.consul.io"  # hostname of consul cluster
    path    = "terraform/myproject"
  }
}
S3 backend
  • Create a backend.tf file with (note: you cannot use Terraform variables in your backend .tf file):
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "terraform/myproject.json"
    region = "us-west-2"
  }
}
  • Then initialize with:
$ terraform init
  • The state is now stored remotely in S3 (as terraform/myproject.json). After downloading that file, you can query it with jq (e.g., for the instance's public IP):
$ cat myproject.json | jq -crM '.modules[].resources."aws_instance.example".primary.attributes.public_ip'
1.2.3.4
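
Note: S3 by itself does not lock the state. To get the locking mentioned above, point the backend at a DynamoDB table via the dynamodb_table argument (the table name below is hypothetical; the table must already exist and have a primary key named LockID):

terraform {
  backend "s3" {
    bucket         = "mybucket"
    key            = "terraform/myproject.json"
    region         = "us-west-2"
    dynamodb_table = "terraform-locks"  # hypothetical table used for state locking
  }
}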
  • Configure a read-only remote store directly in the .tf file (note: this is actually a "datasource"):
data "terraform_remote_state" "aws-state" {
  backend = "s3"
  config {
    bucket     = "mybucket"
    key        = "terraform.tfstate"
    access_key = "${var.AWS_ACCESS_KEY}"
    secret_key = "${var.AWS_SECRET_KEY}"
    region     = "${var.AWS_REGION}"
  }
}
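
With this datasource in place, outputs defined in the remote project can be referenced like any other attribute. A minimal sketch, assuming the remote state defines an output named "ip" (as in the Outputs section above):

output "remote-instance-ip" {
  value = "${data.terraform_remote_state.aws-state.ip}"
}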

Datasources

  • For certain providers (e.g., AWS), Terraform provides "datasources"
  • Datasources provide you with dynamic information
    • A lot of data is available from AWS in a structured format using their API (e.g., list of AMIs, list of availability zones, etc.)
    • Terraform also exposes this information using datasources
  • Another example is a datasource that provides you with a list of all IP addresses in use by AWS (useful if you want to filter traffic based on an AWS region)
    • E.g., Allow all traffic from AWS EC2 instances in Europe
  • Filtering traffic in AWS can also be done using security groups
    • Incoming and outgoing traffic can be filtered by protocol, IP range, and port
  • Example datasource:
data "aws_ip_ranges" "european_ec2" {
  regions  = ["eu-west-1", "eu-central-1"]
  services = ["ec2"]
}

resource "aws_security_group" "from_europe" {
  name = "from_europe"

  ingress {
    from_port = "443"
    to_port   = "443"
    protocol  = "tcp"
    cidr_blocks = ["${data.aws_ip_ranges.european_ec2.cidr_blocks}"]
  }
  tags {
    CreateDate = "${data.aws_ip_ranges.european_ec2.create_date}"
    SyncToken  = "${data.aws_ip_ranges.european_ec2.sync_token}"
  }
}
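
Another commonly used datasource is aws_ami, which looks an AMI ID up dynamically instead of hard-coding it. A sketch (the owner account ID and name filter are illustrative):

data "aws_ami" "centos" {
  most_recent = true
  owners      = ["679593333241"]  # illustrative owner account ID

  filter {
    name   = "name"
    values = ["CentOS Linux 7*"]
  }
}

resource "aws_instance" "example" {
  ami           = "${data.aws_ami.centos.id}"
  instance_type = "t2.micro"
}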

Template provider

  • The template provider can help with creating customized configuration files
  • You can build templates based on variables from Terraform resource attributes (e.g., a public IP address)
  • The result is a string, which can be used as a variable in Terraform
    • The string contains a template (e.g., a configuration file)
  • Can be used to create generic templates or cloud init configs
  • In AWS, you can pass commands that need to be executed when the instance starts for the first time (called "user-data")
    • If you want to pass user-data that depends on other information in Terraform (e.g., IP addresses), you can use the template provider
Example template provider
  • First, create a template file:
$ cat << 'EOF' > templates/init.tpl
#!/bin/bash
echo "database-ip = ${myip}" >> /etc/myapp.config
EOF
  • Then, create a template_file datasource that will read the template file and replace ${myip} with the IP address of an AWS instance created by Terraform:
data "template_file" "my-template" {
  template = "${file("templates/init.tpl")}"

  vars {
    myip = "${aws_instance.database1.private_ip}"
  }
}
  • Finally, use the "my-template" datasource when creating a new instance:
resource "aws_instance" "web" {
  ...
  user_data = "${data.template_file.my-template.rendered}"
  ...
}

When Terraform runs, it will see that it first needs to spin up the database1 instance, then generate the template, and only then spin up the web instance.

The web instance will have the template injected in the user-data, and when it launches, the user-data will create a file (/etc/myapp.config) with the IP address of the database.

Modules

  • You can use modules to make your Terraform project more organized
  • You can use third-party modules (e.g., modules from GitHub)
  • You can re-use parts of your code (e.g., to set up a network in AWS -> VPC)
  • Example of using a module from GitHub:
module "module-example" {
  source = "github.com/foobar/terraform-module-example"
}
  • Use a module from a local folder:
module "module-example" {
  source = "./module-example"
}
  • Pass arguments to a module:
module "module-example" {
  source = "./module-example"
  region = "us-west-2"
  ip-range = "10.0.0.0/8"
  cluster-size = "3"
}
  • Inside the module folder (e.g., module-example), you just have the normal Terraform files:
$ cat module-example/vars.tf
# the module input parameters
variable "region" {}
variable "ip-range" {}
variable "cluster-size {}

$ cat module-example/cluster.tf
# variables can be used here
resource "aws_instance" "instance-1" {}
...

$ cat module-example/output.tf
output "aws-cluster" {
  value = "${aws_instance.instance-1.public_ip},${aws_instance.instance-2.public_ip},...
}
  • Use the output from the module in the main part of your code:
output "some-output" {
  value = "${module.module-example.aws-cluster}"
}
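
After adding or changing a module source, fetch the module code before planning:

$ terraform get    # downloads/updates the modules used in the configuration
$ terraform plan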

Bash completion

$ cat << 'EOF' | sudo tee /etc/bash_completion.d/terraform
_terraform()
{
   local cmds cur colonprefixes
   cmds="apply destroy fmt get graph import init \
      output plan push refresh remote show taint \
      untaint validate version state"

   COMPREPLY=()
   cur=${COMP_WORDS[COMP_CWORD]}
   # Work-around bash_completion issue where bash interprets a colon
   # as a separator.
   # Work-around borrowed from the darcs work-around for the same
   # issue.
   colonprefixes=${cur%"${cur##*:}"}
   COMPREPLY=( $(compgen -W '$cmds'  -- $cur))
   local i=${#COMPREPLY[*]}
   while [ $((--i)) -ge 0 ]; do
      COMPREPLY[$i]=${COMPREPLY[$i]#"$colonprefixes"}
   done

   return 0
} &&
complete -F _terraform terraform
EOF
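
Source the file (or start a new shell) and tab completion should then work:

$ source /etc/bash_completion.d/terraform
$ terraform p<TAB>
plan  push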
