Docker
Docker is an open-source project that automates the deployment of applications inside software containers. A quote describing its features, from the Docker web page:
- Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.[1]
Introduction
Note: The following is based on content found on the official Docker website, Wikipedia, and various other locations.
A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment. Containers isolate software from its surroundings, for example differences between development and staging environments and help reduce conflicts between teams running different software on the same infrastructure.
- Lightweight
- Docker containers running on a single machine share that machine's operating system kernel; they start instantly and use less compute and RAM. Images are constructed from filesystem layers and share common files. This minimizes disk usage and image downloads are much faster.
- Standard
- Docker containers are based on open standards and run on all major Linux distributions, Microsoft Windows, and on any infrastructure including VMs, bare-metal and in the cloud.
- Secure
- Docker containers isolate applications from one another and from the underlying infrastructure. Docker provides the strongest default isolation to limit app issues to a single container instead of the entire machine.
As actions are done to a Docker base image, union file-system layers are created and documented, such that each layer fully describes how to recreate an action. This strategy enables Docker's lightweight images, as only layer updates need to be propagated (compared to full VMs, for example).
Building on top of facilities provided by the Linux kernel (primarily cgroups and namespaces), a Docker container, unlike a virtual machine, does not require or include a separate operating system. Instead, it relies on the kernel's functionality and uses resource isolation for CPU and memory, and separate namespaces to isolate the application's view of the operating system. Docker accesses the Linux kernel's virtualization features directly using the libcontainer library (written in the Go programming language).
Comparing Containers and Virtual Machines
Containers and virtual machines have similar resource isolation and allocation benefits, but function differently because containers virtualize the operating system instead of hardware. Containers are more portable and efficient.
- Virtual Machines
- Virtual machines (VMs) are an abstraction of physical hardware turning one server into many servers. The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, one or more apps, necessary binaries and libraries - taking up tens of GBs. VMs can also be slow to boot.
- Containers
- Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space. Containers take up less space than VMs (container images are typically tens of MBs in size), and start almost instantly.
Components
The Docker software as a service offering consists of three components:
- Software
- The Docker daemon, called "dockerd", is a persistent process that manages Docker containers and handles container objects. The daemon listens for requests sent via the Docker Engine API. The Docker client, invoked as "docker", allows users to interact with Docker through a CLI. It uses the Docker REST API to communicate with one or more Docker daemons.
- Objects
- Docker objects refer to different entities used to assemble an application in Docker. The main Docker objects are images, containers, and services.
- A Docker container is a standardized, encapsulated environment that runs applications. A container is managed using the Docker API or CLI.
- A Docker image is a read-only template used to build containers. Images are used to store and ship applications.
- A Docker service allows containers to be scaled across multiple Docker daemons. The result is known as a "swarm", cooperating daemons that communicate through the Docker API.
- Registries
- A Docker registry is a repository for Docker images. Docker clients connect to registries to download ("pull") images for use or upload ("push") images that they have built. Registries can be public or private. Two main public registries are Docker Hub and Docker Cloud. Docker Hub is the default registry where Docker looks for images.
Docker commands
I will provide detailed examples of all of the following commands throughout this article.
- Basics
The following are the most common Docker commands (i.e., the ones you will most likely use day-to-day); a quick end-to-end run-through is shown after this list:
- Show all running containers:
$ docker ps
- Show all containers (including stopped and failed ones):
$ docker ps -a
- Show all images in your local repository:
$ docker images
- Create an image based on the instructions in a Dockerfile:
$ docker build
- Start a container from an image (either from your local repository or from a remote repository {e.g., Docker Hub}):
$ docker run
- Remove/delete all stopped/failed containers (leaves running containers alone):
$ docker rm $(docker ps -a -q)
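As a quick end-to-end run-through of these basics (a sketch only; the nginx image from Docker Hub is used purely as an example):
$ docker pull nginx
$ docker run -d --name quick-test -p 8080:80 nginx
$ docker ps
$ docker stop quick-test
$ docker rm quick-test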
Container commands
- Container lifecycle
- Create a container but do not start it:
$ docker create
- Rename a container:
$ docker rename
- Create and start a container in one operation:
$ docker run
- Delete a container:
$ docker rm
- Update a container's resource limits:
$ docker update
- Starting and stopping containers
- Start a container:
$ docker start
- Stop a running container:
$ docker stop
- Stop and then start a container:
$ docker restart
- Pause a running container ("freeze" it in place):
$ docker pause
- Un-pause a paused container:
$ docker unpause
- Attach/connect to a running container:
$ docker attach
- Block until running container stops (and print exit code):
$ docker wait
- Send SIGKILL to a running container:
$ docker kill
- Information
- Show all running containers:
$ docker ps
- Get the logs for a given container:
$ docker logs
- Get all of the metadata about a container (e.g., IP address, etc.):
$ docker inspect
- Get real-time events from Docker Engine (e.g., start/stop containers, attach, create, etc.):
$ docker events
- Get the public-facing ports of a given container:
$ docker port
- Show running processes in a given container:
$ docker top
- Show a given container's resource usage statistics:
$ docker stats
- Show changed files in the container's filesystem (i.e., those changed from the original base image):
$ docker diff
- Miscellaneous
- Get the environment variables for a given container:
$ docker run ubuntu env
- IP address of host machine:
$ ip -4 -o addr show eth0
2: eth0    inet 10.0.0.166/23
- IP address of a container:
$ docker run ubuntu ip -4 -o addr show eth0
2: eth0    inet 172.17.0.2/16
Image commands
- Lifecycle
- Show all images in your local repository:
$ docker images
- Create an image from a tarball:
$ docker import
- Create an image from a Dockerfile:
$ docker build
- Create an image from a container (note: it will pause the container, if it is running, during the commit process):
$ docker commit
- Remove/delete an image:
$ docker rmi
- Load an image from a tarball or STDIN (including images and tags):
$ docker load
- Save an image to a tarball (streamed to STDOUT with all parent layers, tags, and versions):
$ docker save
- Info
- Show the history of an image:
$ docker history
- Tag an image:
$ docker tag
Dockerfile directives
USER
$ cat << EOF > Dockerfile
# Non-privileged user entry
FROM centos:latest
MAINTAINER xtof@example.com
RUN useradd -ms /bin/bash xtof
USER xtof
EOF
Note: The use of MAINTAINER has been deprecated in newer versions of Docker. You should use LABEL instead, as it is much more flexible and its key/values show up in docker inspect. From here forward, I will only use LABEL.
$ docker build -t centos7/nonroot:v1 .
$ docker exec -it <container_name> /bin/bash
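Note that docker exec only works against a running container, so in practice you would start one from the image first; a minimal sketch (the container name here is arbitrary):
$ docker run -dit --name nonroot-test centos7/nonroot:v1
$ docker exec -it nonroot-test /bin/bash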
We are user "xtof" and are unable to become root. The workaround (i.e., how to become root) is like so:
$ docker exec -u 0 -it <container_name> /bin/bash
NOTE: For the remainder of this section, I will omit the $ cat << EOF > Dockerfile part in the examples for brevity.
RUN
Notes on the order of execution
FROM centos:latest
LABEL maintainer="xtof@example.com"
RUN useradd -ms /bin/bash xtof
USER xtof
RUN echo "export PATH=/path/to/my/app:$PATH" >> /etc/bashrc
$ docker build -t centos7/config:v1 .
...
/bin/sh: /etc/bashrc: Permission denied
The order of execution matters! Prior to the USER xtof directive, the user was root. After that directive, the user is now xtof, who does not have super-user privileges. Move the RUN echo ... directive to before the USER xtof directive for a successful build.
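For reference, the corrected Dockerfile would look something like this:
FROM centos:latest
LABEL maintainer="xtof@example.com"
RUN useradd -ms /bin/bash xtof
# Still running as root here, so writing to /etc/bashrc succeeds
RUN echo "export PATH=/path/to/my/app:$PATH" >> /etc/bashrc
USER xtof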
ENV
Note: The following is a _terrible_ way of building a container. I am purposely doing it this way so I can show you a much better way later (see below).
- Build a CentOS 7 Docker image with Java 8 installed:
# SEE: https://gist.github.com/P7h/9741922 for various Java versions
FROM centos:latest
LABEL maintainer="xtof@example.com"
RUN yum update -y
RUN yum install -y net-tools wget
RUN echo "SETTING UP JAVA"
# The tarball method:
#RUN cd ~ && wget --no-cookies --no-check-certificate \
#    --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" \
#    "http://download.oracle.com/otn-pub/java/jdk/8u91-b14/jdk-8u91-linux-x64.tar.gz"
#RUN tar xzvf jdk-8u91-linux-x64.tar.gz
#RUN mv jdk1.8.0_91 /opt
#ENV JAVA_HOME /opt/jdk1.8.0_91/
# The rpm method:
RUN cd ~ && wget --no-cookies --no-check-certificate \
    --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" \
    "http://download.oracle.com/otn-pub/java/jdk/8u161-b12/2f38c3b165be4555a1fa6e98c45e0808/jdk-8u161-linux-x64.rpm"
RUN yum localinstall -y /root/jdk-8u161-linux-x64.rpm
RUN useradd -ms /bin/bash xtof
USER xtof
# User specific environment variable
RUN cd ~ && echo "export JAVA_HOME=/usr/java/jdk1.8.0_161/jre" >> ~/.bashrc
# Global (system-wide) environment variable
ENV JAVA_BIN /usr/java/jdk1.8.0_161/jre/bin
$ docker build -t centos7/java8:v1 .
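To sanity-check the ENV directive, you could print the container's environment; it should include a line like the one shown (illustrative):
$ docker run centos7/java8:v1 env | grep JAVA_BIN
JAVA_BIN=/usr/java/jdk1.8.0_161/jre/bin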
CMD vs. RUN
FROM centos:latest
LABEL maintainer="xtof@example.com"
RUN useradd -ms /bin/bash xtof
CMD ["echo", "Hello from within my container"]
The CMD directive only executes when the container is started, whereas the RUN directive is executed during the build of the image.
$ docker build -t centos7/echo:v1 .
$ docker run centos7/echo:v1
Hello from within my container
The container starts, echos out that message, then exits.
ENTRYPOINT
FROM centos:latest
LABEL maintainer="xtof@example.com"
RUN useradd -ms /bin/bash xtof
ENTRYPOINT "This command will display this message on EVERY container that is run from it"
$ docker build -t centos7/entry:v1 .
$ docker run centos7/entry:v1
This command will display this message on EVERY container that is run from it
$ docker run centos7/entry:v1 /bin/echo "Can you see me?"
This command will display this message on EVERY container that is run from it
$ docker run centos7/echo:v1 /bin/echo "Can you see me?"
Can you see me?
Note the difference: arguments passed to docker run replace a CMD, whereas the (shell-form) ENTRYPOINT ignores them and runs regardless.
EXPOSE
FROM centos:latest
LABEL maintainer="xtof@example.com"
RUN yum update -y
RUN yum install -y httpd net-tools
RUN echo "This is a custom index file built during the image creation" > /var/www/html/index.html
# BAD WAY TO DO THIS!
ENTRYPOINT apachectl -DFOREGROUND
$ docker build -t centos7/apache:v1 .
$ docker run -d --name webserver centos7/apache:v1
$ docker exec webserver /bin/cat /var/www/html/index.html
This is a custom index file built during the image creation
$ docker inspect webserver -f '{{.NetworkSettings.IPAddress}}'  # => 172.17.0.6
#~OR~
$ docker inspect webserver | jq -crM '.[] | .NetworkSettings.IPAddress'  # => 172.17.0.6
$ curl 172.17.0.6
This is a custom index file built during the image creation
$ curl -sI 172.17.0.6 | awk '/^HTTP|^Server/{print}'
HTTP/1.1 200 OK
Server: Apache/2.4.6 (CentOS)
$ time docker stop webserver
real    0m10.275s  # <- notice how long it took to stop the container
user    0m0.008s
sys     0m0.000s
$ docker rm webserver
It took ~10 seconds to stop the above container. This is because of the way we are (incorrectly) using ENTRYPOINT. The SIGTERM signal sent when running `docker stop webserver` effectively timed out instead of the process exiting gracefully. A much better method is shown below, which will exit gracefully and in less than 300 ms.
- Expose ports from the CLI
$ docker run -d --name webserver -p 8080:80 centos7/apache:v1
$ curl localhost:8080
This is a custom index file built during the image creation
$ docker stop webserver && docker rm webserver
- Explicitly expose a port in the Docker image:
FROM centos:latest
LABEL maintainer="xtof@example.com"
RUN yum update -y && \
    yum install -y httpd net-tools && \
    yum autoremove -y && \
    echo "This is a custom index file built during the image creation" > /var/www/html/index.html
EXPOSE 80
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
$ docker build -t centos7/apache:v1 .
$ docker run -d --rm --name webserver -P centos7/apache:v1
$ docker container ls --format '{{.Names}} {{.Ports}}'
webserver 0.0.0.0:32769->80/tcp
#~OR~
$ docker port webserver | cut -d: -f2
32769
#~OR~
$ docker inspect webserver | jq -crM '[.[] | .NetworkSettings.Ports."80/tcp"[] | .HostPort] | .[]'
32769
$ curl localhost:32769
This is a custom index file built during the image creation
$ time docker stop webserver
real    0m0.283s
user    0m0.004s
sys     0m0.008s
Note that I passed --rm to the `docker run` command so that the container will be removed when I stop it. Also note how much faster the container stopped (~300 ms vs. ~10 seconds above).
Container volume management
$ docker run -it --name voltest -v /mydata centos:latest /bin/bash
[root@bffdcb88c485 /]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
none                         213G  173G   30G  86% /
tmpfs                        7.8G     0  7.8G   0% /dev
tmpfs                        7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/ubuntu--vg-root  213G  173G   30G  86% /mydata
shm                           64M     0   64M   0% /dev/shm
tmpfs                        7.8G     0  7.8G   0% /sys/firmware
[root@bffdcb88c485 /]# echo "testing" >/mydata/mytext.txt

$ docker inspect voltest | jq -crM '.[] | .Mounts[].Source'
/var/lib/docker/volumes/2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b/_data
$ sudo cat /var/lib/docker/volumes/2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b/_data/mytext.txt
testing
$ sudo /bin/bash -c \
    "echo 'this is from the host OS' >/var/lib/docker/volumes/2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b/_data/host.txt"

[root@bffdcb88c485 /]# cat /mydata/host.txt
this is from the host OS
- Cleanup
$ docker rm voltest
$ docker volume rm 2a53fd295595690200a63def8a333b54682174923339130d560fb77ecbe41a3b
- Mount host's current working directory inside container:
$ echo "my config" >my.conf
$ echo "my message" >message.txt
$ echo "aerwr3adf" >app.bin
$ chmod +x app.bin
$ docker run -it --name voltest -v ${PWD}:/mydata centos:latest /bin/bash
[root@f5f34ccb54fb /]# ls -l /mydata/
total 24
-rwxrwxr-x 1 1000 1000 10 Mar  8 19:29 app.bin
-rw-rw-r-- 1 1000 1000 11 Mar  8 19:29 message.txt
-rw-rw-r-- 1 1000 1000 10 Mar  8 19:29 my.conf
[root@f5f34ccb54fb /]# touch /mydata/foobar

$ ls -l ${PWD}
total 24
-rwxrwxr-x 1 xtof xtof 10 Mar  8 11:29 app.bin
-rw-r--r-- 1 root root  0 Mar  8 11:36 foobar
-rw-rw-r-- 1 xtof xtof 11 Mar  8 11:29 message.txt
-rw-rw-r-- 1 xtof xtof 10 Mar  8 11:29 my.conf
$ docker rm voltest
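Named volumes avoid having to track the auto-generated volume hashes shown above; a minimal sketch (the volume and container names are arbitrary):
$ docker volume create mydata
$ docker volume ls
$ docker run -it --name voltest -v mydata:/mydata centos:latest /bin/bash
# ... use /mydata inside the container exactly as before ...
$ docker rm voltest
$ docker volume rm mydata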
Images
Saving and loading images
$ docker pull centos:latest
$ docker run -it centos:latest /bin/bash
[root@29fad368048c /]# yum update -y
[root@29fad368048c /]# echo xtof >/root/built_by.txt

$ docker commit reverent_elion centos:xtof
$ docker rm reverent_elion
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
centos       xtof     e0c8bd35ba50   3 seconds ago   463MB
centos       latest   980e0e4c79ec   1 minute ago    197MB
$ docker history centos:xtof
IMAGE          CREATED          CREATED BY                                      SIZE
e0c8bd35ba50   27 seconds ago   /bin/bash                                       266MB
980e0e4c79ec   18 months ago    /bin/sh -c #(nop) CMD ["/bin/bash"]             0B
<missing>      18 months ago    /bin/sh -c #(nop) LABEL name=CentOS Base ...    0B
<missing>      18 months ago    /bin/sh -c #(nop) ADD file:e336b45186086f7...   197MB
<missing>      18 months ago    /bin/sh -c #(nop) MAINTAINER https://gith...    0B
- Save the original centos:latest image we pulled from Docker Hub:
$ docker save --output centos-latest.tar centos:latest
Note that the above command essentially tars up the contents of the image found in the /var/lib/docker/image directory.
$ tar tvf centos-latest.tar
-rw-r--r-- 0/0        2309 2016-09-06 14:10 980e0e4c79ec933406e467a296ce3b86685e6b42eed2f873745e6a91d718e37a.json
drwxr-xr-x 0/0           0 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/
-rw-r--r-- 0/0           3 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/VERSION
-rw-r--r-- 0/0        1391 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/json
-rw-r--r-- 0/0   204305920 2016-09-06 14:10 ad96ed303040e4a7d1ee0596bb83db3175388259097dee50ac4aaae34e90c253/layer.tar
-rw-r--r-- 0/0         202 1969-12-31 16:00 manifest.json
-rw-r--r-- 0/0          89 1969-12-31 16:00 repositories
- Save space by compressing the tar file:
$ gzip centos-latest.tar # .tar -> 195M; .tar.gz -> 68M
- Delete the original centos:latest image:
$ docker rmi centos:latest
- Restore (or load) the image back to our local repository:
$ docker load --input centos-latest.tar.gz
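As an aside, the save-and-compress steps above can be combined into a single pipeline, since docker save streams to STDOUT when no --output is given:
$ docker save centos:latest | gzip > centos-latest.tar.gz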
Tagging images
- List our current images:
$ docker images
REPOSITORY   TAG    IMAGE ID       CREATED             SIZE
centos       xtof   e0c8bd35ba50   About an hour ago   463MB
- Tag the above image:
$ docker tag e0c8bd35ba50 xtof/centos:v1
$ docker images
REPOSITORY    TAG    IMAGE ID       CREATED             SIZE
centos        xtof   e0c8bd35ba50   About an hour ago   463MB
xtof/centos   v1     e0c8bd35ba50   About an hour ago   463MB
Note that we did not create a new image, we just created a new tag of the same/original centos:xtof image.
Note: The maximum number of characters in a tag is 128.
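Tags are also how you target a specific registry: prefix the repository with the registry host (and port). For example, using the private registry set up later in this article (illustrative only):
$ docker tag e0c8bd35ba50 docker.example.com:5043/centos:v1
$ docker push docker.example.com:5043/centos:v1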
Docker networking
Default networks
$ ip addr show docker0
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:c0:75:70:13 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:c0ff:fe75:7013/64 scope link
       valid_lft forever preferred_lft forever
#~OR~
$ ifconfig docker0
docker0   Link encap:Ethernet  HWaddr 02:42:c0:75:70:13
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:c0ff:fe75:7013/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:420654 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1162975 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:85851647 (85.8 MB)  TX bytes:1196235716 (1.1 GB)
$ docker network inspect bridge | jq '.[] | .IPAM.Config[].Subnet'
"172.17.0.0/16"
So, the usable range of IP addresses in our 172.17.0.0/16 subnet is: 172.17.0.1 - 172.17.255.254
$ docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
bf831059febc   bridge   bridge   local
266f6df5c44e   host     host     local
ce79e4043a20   none     null     local
$ docker ps -q | wc -l
#~OR~
$ docker container ls --format '{{.Names}}' | wc -l
4  # => 4 running containers
$ docker network inspect bridge | jq '.[] | .Containers[].IPv4Address'
"172.17.0.2/16"
"172.17.0.5/16"
"172.17.0.4/16"
"172.17.0.3/16"
The output from the last command shows the IP addresses of the 4 containers currently running on my host.
Custom networks
- Create a Docker network
$ man docker-network-create  # for details
$ docker network create --subnet 10.1.0.0/16 --gateway 10.1.0.1 --ip-range=10.1.4.0/24 \
    --driver=bridge --label=host4network br04
- Use the above network with a given container:
$ docker run -it --name net-test --net br04 centos:latest /bin/bash
- Assign a static IP to a given container in the above (user created) network:
$ docker run -it --name net-test --net br04 --ip 10.1.4.100 centos:latest /bin/bash
Note: You can only assign static IPs to containers on user-created networks (i.e., you cannot assign them on the default "bridge" network).
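To confirm which address a container received on a user-created network, you can inspect it; for example, for the net-test container on br04 above (output shown is what you would expect):
$ docker inspect net-test -f '{{.NetworkSettings.Networks.br04.IPAddress}}'
10.1.4.100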
Monitoring
$ docker top <container_name>
$ docker stats <container_name>
Logs
- Fetch logs of a given container:
$ docker logs <container_name>
- Fetch logs of a given container prefixed with timestamps (UTC format by default):
$ docker logs --timestamps <container_name>
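A couple of other docker logs flags that come in handy:
$ docker logs --follow <container_name>    # stream new log output (like tail -f)
$ docker logs --tail 100 <container_name>  # show only the last 100 lines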
Events
$ docker events
$ docker events --since '1h'
$ docker events --since '2018-03-08T16:00'
$ docker events --filter event=attach
$ docker events --filter event=destroy
$ docker events --filter event=attach --filter event=die --filter event=stop
Examples
Simple Nginx server
- Create an index.html file:
$ mkdir html
$ cat << EOF >html/index.html
Hello from Docker
EOF
- Create a Dockerfile:
FROM nginx
COPY html /usr/share/nginx/html
- Build the image:
$ docker build -t test-nginx .
- Start up container, using image built above:
$ docker run --name check-nginx -d -p 8080:80 test-nginx
- Check that it works:
$ curl http://localhost:8080
Hello from Docker
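When you are done with this example, tearing it down is simply:
$ docker stop check-nginx && docker rm check-nginx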
Connecting two containers
In this example, we will start up a Postgres container and then start up another container and make a connection to the original Postgres container:
$ docker pull postgres
$ docker run --name test-postgres -e POSTGRES_PASSWORD=mypassword -d postgres
$ docker run -it --rm --link test-postgres:postgres postgres psql -h postgres -U postgres
Password for user postgres:
psql (11.0 (Debian 11.0-1.pgdg90+2))
Type "help" for help.

postgres=# SELECT 1;
 ?column?
----------
        1
(1 row)

postgres=# \q
Connection was successful!
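The psql client container removes itself when you quit (because of --rm); to clean up the Postgres container as well:
$ docker stop test-postgres && docker rm test-postgres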
Web farm
Docker compose
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the official Compose documentation.
Using Compose is basically a three-step process:
- Define your app's environment with a Dockerfile so it can be reproduced anywhere.
- Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
- Run docker-compose up and Compose starts and runs your entire app.
Basic example
Note: This is based off of this article.
In this basic example, we will build a simple Python web application running on Docker Compose. The application uses the Flask framework and maintains a hit counter in Redis.
Note: This section assumes you already have Docker Engine and Docker Compose installed.
- Create a directory for the project:
$ mkdir compose-test && cd $_
- Create a file called app.py in your project directory and paste this in:
import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)


def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)


@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)


if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
In this example, redis is the hostname of the redis container on the application's network. We use the default port for Redis: 6379.
- Create another file called requirements.txt in your project directory and paste this in:
flask
redis
- Create a Dockerfile
- This Dockerfile will be used to build an image that contains all the dependencies the Python application requires, including Python itself.
FROM python:3.4-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
- Create a file called docker-compose.yml in your project directory and paste the following:
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
- Build and run this app with Docker Compose:
$ docker-compose up
Compose pulls a Redis image, builds an image for your code, and starts the services you defined. In this case, the code is statically copied into the image at build time.
- Test the application:
$ curl localhost:5000
Hello World! I have been seen 1 times.
$ for i in $(seq 1 10); do curl -s localhost:5000; done
Hello World! I have been seen 2 times.
Hello World! I have been seen 3 times.
Hello World! I have been seen 4 times.
Hello World! I have been seen 5 times.
Hello World! I have been seen 6 times.
Hello World! I have been seen 7 times.
Hello World! I have been seen 8 times.
Hello World! I have been seen 9 times.
Hello World! I have been seen 10 times.
Hello World! I have been seen 11 times.
- List containers:
$ docker-compose ps
        Name                      Command               State           Ports
-------------------------------------------------------------------------------------
compose-test_redis_1   docker-entrypoint.sh redis ...   Up      6379/tcp
compose-test_web_1     python app.py                    Up      0.0.0.0:5000->5000/tcp
- Display the running processes:
$ docker-compose top
compose-test_redis_1
  UID       PID     PPID   C   STIME   TTY     TIME         CMD
--------------------------------------------------------------------
systemd+   29401   29367   0   15:28   ?     00:00:00   redis-server

compose-test_web_1
 UID     PID     PPID   C   STIME   TTY     TIME                  CMD
--------------------------------------------------------------------------------
root    29407   29373   0   15:28   ?     00:00:00   python app.py
root    29545   29407   0   15:28   ?     00:00:00   /usr/local/bin/python app.py
- Shutdown app:
Ctrl+C
#~OR~
$ docker-compose down
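Note that `docker-compose up`, as run above, stays attached to the terminal. To run the stack in the background instead and still keep an eye on it, the standard detached-mode workflow looks like this:
$ docker-compose up -d
$ docker-compose logs -f web   # follow the web service's logs
$ docker-compose down          # tear everything down when finished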
Install docker
Debian-based distros
- Ubuntu 16.04 (Xenial Xerus)
Note: For this install, I will be using Ubuntu 16.04 LTS (Xenial Xerus). Docker requires a 64-bit version of Ubuntu as well as a kernel version equal to or greater than 3.10. My system satisfies both requirements.
- Setup the docker repo to install from:
$ sudo apt-get update -y
$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
$ echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list
$ sudo apt-get update -y
Make sure you are about to install from the Docker repo instead of the default Ubuntu 16.04 repo:
$ apt-cache policy docker-engine
The output of the above command should look something like the following:
docker-engine:
  Installed: (none)
  Candidate: 17.05.0~ce-0~ubuntu-xenial
  Version table:
     17.05.0~ce-0~ubuntu-xenial 500
        500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages
     17.04.0~ce-0~ubuntu-xenial 500
        500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages
     ...
- Install docker:
$ sudo apt-get install -y docker-engine
- Ubuntu 18.04 (Bionic Beaver)
$ sudo apt update
$ sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt update
$ apt-cache policy docker-ce
docker-ce:
  Installed: (none)
  Candidate: 5:18.09.0~3-0~ubuntu-bionic
  Version table:
     5:18.09.0~3-0~ubuntu-bionic 500
        500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
$ sudo apt install docker-ce -y
$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2018-12-04 13:40:36 PST; 4s ago
     Docs: https://docs.docker.com
 Main PID: 6134 (dockerd)
    Tasks: 16
   CGroup: /system.slice/docker.service
           └─6134 /usr/bin/dockerd -H unix://
Red Hat-based distros
Note: For this install, I will be using CentOS 7 (release 7.2.1511). Docker requires a 64-bit version of CentOS as well as a kernel version equal to or greater than 3.10. My system satisfies both requirements.
- Install Docker (the fast way):
$ sudo yum update -y
$ curl -fsSL https://get.docker.com/ | sh
- Install Docker (via a yum repo):
$ sudo yum update -y
$ sudo pip install docker-py
$ cat << EOF > /etc/yum.repos.d/docker.repo
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
$ sudo rpm -vv --import https://yum.dockerproject.org/gpg
$ sudo yum update -y
$ sudo yum install docker-engine -y
Post-installation steps
- Check on the status of docker:
$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2016-07-12 12:31:08 PDT; 6s ago
     Docs: https://docs.docker.com
 Main PID: 3392 (docker)
   CGroup: /system.slice/docker.service
           ├─3392 /usr/bin/docker daemon -H fd://
           └─3411 docker-containerd -l /var/run/docker/libcontainerd/docker-containerd.sock --runtime docker-runc --start-timeout 2m
- Make sure the docker service automatically starts after a machine reboot:
$ sudo systemctl enable docker
- Execute docker without `sudo`:
$ sudo usermod -aG docker $(whoami)
#~OR~
$ sudo usermod -aG docker $USER
Log out and log back in to use docker without `sudo`.
- Check version of Docker installed:
$ docker version
Client:
 Version:      17.05.0-ce
 API version:  1.29
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:10:54 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:10:54 2017
 OS/Arch:      linux/amd64
 Experimental: false
- Check that docker has been successfully installed and configured:
$ docker run hello-world
... This message shows that your installation appears to be working correctly. ...
As the above message shows, you now have a successful install of Docker on your machine and are ready to start building images and creating containers.
Install your own Docker private registry
Note: I will use CentOS 7 for this install and assume you already have docker and docker-compose installed (see above).
For this install, I will assume you have a domain name registered somewhere. I will use docker.example.com as my example domain. Replace anywhere you see that below with your actual domain name.
- Install dependencies:
$ yum install -y nginx        # used for the registry endpoint
$ yum install -y httpd-tools  # for the htpasswd utility
- Setup docker registry directory structure:
$ mkdir -p /opt/docker-registry/{data,nginx{/conf.d,/certs},log}
$ cd /opt/docker-registry
- Create a docker-compose file:
$ vim docker-compose.yml # and add the following:
nginx:
  image: "nginx:1.9"
  ports:
    - 5043:443
  links:
    - registry:registry
  volumes:
    - ./log/nginx/:/var/log/nginx:rw
    - ./nginx/conf.d:/etc/nginx/conf.d:ro
    - ./nginx/certs:/etc/nginx/certs:ro

registry:
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
  volumes:
    - ./data:/data
- Create an Nginx configuration file:
$ vim /opt/docker-registry/nginx/conf.d/registry.conf # and add the following:
upstream docker-registry {
  server registry:5000;
}

server {
  listen 443;
  server_name docker.example.com;

  # SSL
  ssl on;
  ssl_certificate /etc/nginx/certs/docker.example.com.crt;
  ssl_certificate_key /etc/nginx/certs/docker.example.com.key;

  # disable any limits to avoid HTTP 413 for large image uploads
  client_max_body_size 0;

  # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
  chunked_transfer_encoding on;

  location /v2/ {
    # Do not allow connections from docker 1.5 and earlier
    # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
    if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
      return 404;
    }

    proxy_pass                          http://docker-registry;
    proxy_set_header  Host              $http_host;   # required for docker client's sake
    proxy_set_header  X-Real-IP         $remote_addr; # pass on real client's IP
    proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header  X-Forwarded-Proto $scheme;
    proxy_read_timeout                  900;

    add_header 'Docker-Distribution-Api-Version:' 'registry/2.0' always;

    # To add basic authentication to v2 use auth_basic setting plus add_header
    auth_basic "Restricted access to Docker Registry";
    auth_basic_user_file /etc/nginx/conf.d/registry.htpasswd;
  }
}
$ cd /opt/docker-registry/nginx/conf.d
$ htpasswd -c registry.htpasswd <username>  # replace <username> with your actual username
$ htpasswd registry.htpasswd <username2>    # [optional] add a 2nd user
- Setup your own certificate signing authority (for use with SSL):
$ cd /opt/docker-registry/nginx/certs
- Generate a new root key:
$ openssl genrsa -out docker-registry-CA.key 2048
- Generate a root certificate (enter anything you like at the prompts):
$ openssl req -x509 -new -nodes -key docker-registry-CA.key -days 3650 -out docker-registry-CA.crt
Then generate a key for your server (this is the file referenced by ssl_certificate_key in the Nginx configuration above):
$ openssl genrsa -out docker.example.com.key 2048
Now we have to make a certificate signing request (CSR). After you type the following command, OpenSSL will prompt you to answer a few questions. Enter anything you like for the first few, however, when OpenSSL prompts you to enter the "Common Name", make sure to enter the domain or IP of your server.
$ openssl req -new -key docker.example.com.key -out docker.example.com.csr
- Sign the certificate request:
$ openssl x509 -req -in docker.example.com.csr -CA docker-registry-CA.crt -CAkey docker-registry-CA.key -CAcreateserial -out docker.example.com.crt -days 3650
- Force any clients that will use the certificate authority we created above to accept that it is a "legitimate" certificate. Run the following commands on the Docker registry server and on any hosts that will be communicating with the Docker registry server:
$ sudo cp /opt/docker-registry/nginx/certs/docker-registry-CA.crt /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust
Note: The above paths are for CentOS/RHEL hosts. On Debian/Ubuntu hosts, copy the certificate to /usr/local/share/ca-certificates/ and run `sudo update-ca-certificates` instead (as shown for the client machine below).
- Restart the Docker daemon in order for it to pick up the changes to the certificate store:
$ sudo systemctl restart docker.service
- Bring up the associated Docker containers:
$ docker-compose up -d
- Your Docker registry directory structure should look like the following:
$ cd /opt/docker-registry && tree .
.
├── data
├── docker-compose.yml
├── log
│   └── nginx
│       ├── access.log
│       └── error.log
└── nginx
    ├── certs
    │   ├── docker-registry-CA.crt
    │   ├── docker-registry-CA.key
    │   ├── docker-registry-CA.srl
    │   ├── docker.example.com.crt
    │   ├── docker.example.com.csr
    │   └── docker.example.com.key
    └── conf.d
        ├── registry.conf
        └── registry.htpasswd
- To access the private Docker registry from a client machine (any machine, really), first add the SSL certificate you created earlier to the client machine:
$ cat /opt/docker-registry/nginx/certs/docker-registry-CA.crt  # copy contents
# On client machine:
$ sudo vim /usr/local/share/ca-certificates/docker-registry-CA.crt  # paste contents
$ sudo update-ca-certificates  # You should see "1 added" in the output
- Restart Docker on the client machine to make sure it reloads the system's CA certificates:
$ sudo service docker restart
- Test that you can reach your private Docker registry:
$ curl -k https://USERNAME:PASSWORD@docker.example.com:5043/v2/
{}  # <- proper output
- Now, test that you can login with Docker:
$ docker login https://docker.example.com:5043
If that returns with "Login Succeeded", your private Docker registry is up and running!
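Once logged in, pushing to (and pulling from) the private registry works just like Docker Hub, with the registry host prefixed to the image name; an illustrative sketch:
$ docker tag centos:latest docker.example.com:5043/centos:latest
$ docker push docker.example.com:5043/centos:latest
$ docker pull docker.example.com:5043/centos:latest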
This section is incomplete. It will be updated when I have time.
Docker environment variables
Note: See the official Docker documentation for the most up-to-date list of environment variables.
The following list of environment variables are supported by the docker command line:
DOCKER_API_VERSION
- The API version to use (e.g., 1.19)
DOCKER_CONFIG
- The location of your client configuration files.
DOCKER_CERT_PATH
- The location of your authentication keys.
DOCKER_DRIVER
- The graph driver to use.
DOCKER_HOST
- Daemon socket to connect to.
DOCKER_NOWARN_KERNEL_VERSION
- Prevent warnings that your Linux kernel is unsuitable for Docker.
DOCKER_RAMDISK
- If set this will disable "pivot_root".
DOCKER_TLS_VERIFY
- When set Docker uses TLS and verifies the remote.
DOCKER_CONTENT_TRUST
- When set Docker uses notary to sign and verify images. Equates to --disable-content-trust=false for build, create, pull, push, run.
DOCKER_CONTENT_TRUST_SERVER
- The URL of the Notary server to use. This defaults to the same URL as the registry.
DOCKER_TMPDIR
- Location for temporary Docker files.
Because Docker is developed using "Go", one can also use any environment variables used by the "Go" runtime. In particular, the following might be useful:
HTTP_PROXY
HTTPS_PROXY
NO_PROXY
- Example usage:
$ export DOCKER_API_VERSION=1.19
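Another common use is pointing the docker client at a remote daemon via DOCKER_HOST (the address below is purely illustrative):
$ export DOCKER_HOST=tcp://remote-host:2375
$ docker ps  # now talks to the remote daemon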