Deploy application infrastructure
You are viewing docs for legacy standalone Docker Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users should use integrated Swarm mode; good places to start are Getting started with swarm mode, Swarm mode CLI commands, and the Get started with Docker walkthrough. Standalone Docker Swarm is not integrated into the Docker Engine API and CLI commands.
In this step, you create several Docker hosts to run your application stack on. Before you continue, make sure you have taken the time to learn the application architecture.
About these instructions
This example assumes you are running on a Mac or Windows system and enabling Docker Engine docker commands by provisioning local VirtualBox virtual machines using Docker Machine. For this evaluation installation, you need seven VirtualBox VMs.
While this example uses Docker Machine, this is only one example of an infrastructure you can use. You can create the environment design on whatever infrastructure you wish. For example, you could place the application on another public cloud platform such as Azure or DigitalOcean, on premises in your data center, or even in a test environment on your laptop.
Finally, these instructions use some common bash command substitution techniques to resolve values, for example:

$ eval $(docker-machine env keystore)

In a Windows environment, these substitutions fail. If you are running in Windows, replace the substitution $(docker-machine env keystore) with the actual value.
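For example, later in this walkthrough you test the keystore with curl $(docker-machine ip keystore):8500/v1/catalog/nodes. On Windows, you would first print the value and then paste it in by hand. A sketch, assuming the machine's address turns out to be 192.168.99.100 (yours will differ):

$ docker-machine ip keystore
192.168.99.100

# Paste the printed address in place of the substitution:
$ curl 192.168.99.100:8500/v1/catalog/nodes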
Task 1. Create the keystore server
To enable a Docker container network and Swarm discovery, you must deploy (or supply) a key-value store. As a discovery backend, the key-value store maintains an up-to-date list of cluster members and shares that list with the Swarm manager. The Swarm manager uses this list to assign tasks to the nodes.
An overlay network requires a key-value store. The key-value store holds information about the network state which includes discovery, networks, endpoints, IP addresses, and more.
Several different backends are supported. This example uses a Consul container.
1. Create a "machine" named keystore.

   $ docker-machine create -d virtualbox --virtualbox-memory "2000" \
     --engine-opt="label=com.function=consul" keystore

   You can set options for the Engine daemon with the --engine-opt flag. In this command, you use it to label this Engine instance.

2. Set your local shell to the keystore Docker host.

   $ eval $(docker-machine env keystore)

3. Run the consul container.

   $ docker run --restart=unless-stopped -d -p 8500:8500 -h consul progrium/consul -server -bootstrap

   The -p flag publishes port 8500 on the container, which is where the Consul server listens. The server also has several other ports exposed, which you can see by running docker ps.

   $ docker ps
   CONTAINER ID   IMAGE             ...   PORTS                                                                            NAMES
   372ffcbc96ed   progrium/consul   ...   53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp   dreamy_ptolemy

4. Use a curl command to test the server by listing the nodes.

   $ curl $(docker-machine ip keystore):8500/v1/catalog/nodes
   [{"Node":"consul","Address":"172.17.0.2"}]
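Consul also exposes a simple HTTP key-value API; this is the same store that later holds the network and discovery state. If you are curious, you can write and read a throwaway key. This check is optional, and the key name foo below is only an illustration:

# Write a test key; Consul replies "true" on success
$ curl -X PUT -d 'bar' $(docker-machine ip keystore):8500/v1/kv/foo

# Read it back; the value is returned base64-encoded inside a JSON document
$ curl $(docker-machine ip keystore):8500/v1/kv/foo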
Task 2. Create the Swarm manager
In this step, you create the Swarm manager and connect it to the keystore
instance. The Swarm manager container is the heart of your Swarm cluster.
It is responsible for receiving all Docker commands sent to the cluster, and for
scheduling resources against the cluster. In a real-world production deployment,
you should configure additional replica Swarm managers as secondaries for high
availability (HA).
Use the --engine-opt flag to set the cluster-store and cluster-advertise options to refer to the keystore server. These options support the container network you create later.
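If you were provisioning a Linux host yourself rather than with Docker Machine, the same settings would be passed straight to the Docker daemon. A rough sketch, not something you need to run here; the keystore address and the eth0 interface are assumptions you would replace with your own values:

# Hypothetical daemon invocation equivalent to the --engine-opt flags used below
# (on releases older than Docker 1.12, the binary is `docker daemon` rather than `dockerd`)
$ dockerd \
    --label com.function=manager \
    --cluster-store=consul://192.168.99.100:8500 \
    --cluster-advertise=eth0:2376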
1. Create the manager host.

   $ docker-machine create -d virtualbox --virtualbox-memory "2000" \
     --engine-opt="label=com.function=manager" \
     --engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
     --engine-opt="cluster-advertise=eth1:2376" manager

   You also give the daemon a manager label.

2. Set your local shell to the manager Docker host.

   $ eval $(docker-machine env manager)

3. Start the Swarm manager process.

   $ docker run --restart=unless-stopped -d -p 3376:2375 \
     -v /var/lib/boot2docker:/certs:ro \
     swarm manage --tlsverify \
     --tlscacert=/certs/ca.pem \
     --tlscert=/certs/server.pem \
     --tlskey=/certs/server-key.pem \
     consul://$(docker-machine ip keystore):8500

   This command uses the TLS certificates created for the boot2docker.iso image on the manager host. The manager needs these certificates when it connects to other machines in the cluster.

4. Test your work by displaying the Docker daemon logs from the host.

   $ docker-machine ssh manager
   <-- output snipped -->
   docker@manager:~$ tail /var/lib/boot2docker/docker.log
   time="2016-04-06T23:11:56.481947896Z" level=debug msg="Calling GET /v1.15/version"
   time="2016-04-06T23:11:56.481984742Z" level=debug msg="GET /v1.15/version"
   time="2016-04-06T23:12:13.070231761Z" level=debug msg="Watch triggered with 1 nodes" discovery=consul
   time="2016-04-06T23:12:33.069387215Z" level=debug msg="Watch triggered with 1 nodes" discovery=consul
   time="2016-04-06T23:12:53.069471308Z" level=debug msg="Watch triggered with 1 nodes" discovery=consul
   time="2016-04-06T23:13:13.069512320Z" level=debug msg="Watch triggered with 1 nodes" discovery=consul
   time="2016-04-06T23:13:33.070021418Z" level=debug msg="Watch triggered with 1 nodes" discovery=consul
   time="2016-04-06T23:13:53.069395005Z" level=debug msg="Watch triggered with 1 nodes" discovery=consul
   time="2016-04-06T23:14:13.071417551Z" level=debug msg="Watch triggered with 1 nodes" discovery=consul
   time="2016-04-06T23:14:33.069843647Z" level=debug msg="Watch triggered with 1 nodes" discovery=consul

   The output indicates that consul and the manager are communicating correctly.

5. Exit the Docker host.

   docker@manager:~$ exit
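The same TLS material is what your local docker client uses when you target these hosts. To see how it is wired in, you can print the environment that the eval command above consumed; the certificate path below is illustrative and depends on your user name and home directory:

$ docker-machine env manager
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.101:2376"
export DOCKER_CERT_PATH="/Users/<you>/.docker/machine/machines/manager"
export DOCKER_MACHINE_NAME="manager"
# Run this command to configure your shell:
# eval $(docker-machine env manager)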
Task 3. Add the load balancer
The application uses Interlock and Nginx as a load balancer. Before you build the load balancer host, create the configuration for Nginx.
1. On your local host, create a config directory.

2. Change directories to the config directory.

   $ cd config

3. Get the IP address of the Swarm manager host.

   For example:

   $ docker-machine ip manager
   192.168.99.101

4. Use your favorite editor to create a config.toml file and add this content to the file:

   ListenAddr = ":8080"
   DockerURL = "tcp://SWARM_MANAGER_IP:3376"
   TLSCACert = "/var/lib/boot2docker/ca.pem"
   TLSCert = "/var/lib/boot2docker/server.pem"
   TLSKey = "/var/lib/boot2docker/server-key.pem"

   [[Extensions]]
   Name = "nginx"
   ConfigPath = "/etc/nginx/nginx.conf"
   PidPath = "/var/run/nginx.pid"
   MaxConn = 1024
   Port = 80

5. In the configuration, replace SWARM_MANAGER_IP with the manager IP you got in Step 3.

   You use this value because the load balancer listens on the manager's event stream.

6. Save and close the config.toml file.

7. Create a machine for the load balancer.

   $ docker-machine create -d virtualbox --virtualbox-memory "2000" \
     --engine-opt="label=com.function=interlock" loadbalancer

8. Switch the environment to the loadbalancer host.

   $ eval $(docker-machine env loadbalancer)

9. Start an interlock container.

   $ docker run \
     -P \
     -d \
     -ti \
     -v nginx:/etc/conf \
     -v /var/lib/boot2docker:/var/lib/boot2docker:ro \
     -v /var/run/docker.sock:/var/run/docker.sock \
     -v $(pwd)/config.toml:/etc/config.toml \
     --name interlock \
     ehazlett/interlock:1.0.1 \
     -D run -c /etc/config.toml

   This command relies on the config.toml file being in the current directory. After running the command, confirm the container is running:

   $ docker ps
   CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS         PORTS                     NAMES
   d846b801a978   ehazlett/interlock:1.0.1   "/bin/interlock -D ru"   2 minutes ago   Up 2 minutes   0.0.0.0:32770->8080/tcp   interlock

   If you don't see the container running, use docker ps -a to list all containers and make sure the system attempted to start it. Then, get the logs to see why the container failed to start.

   $ docker logs interlock
   INFO[0000] interlock 1.0.1 (000291d)
   DEBU[0000] loading config from: /etc/config.toml
   FATA[0000] read /etc/config.toml: is a directory

   This error usually means you didn't start the docker run command from the config directory where the config.toml file is. If you run the command and get a Conflict error such as:

   docker: Error response from daemon: Conflict. The name "/interlock" is already in use by container d846b801a978c76979d46a839bb05c26d2ab949ff9f4f740b06b5e2564bae958. You have to remove (or rename) that container to reuse that name.

   Remove the existing interlock container with docker container rm interlock and try again.

10. Start an nginx container on the load balancer.

    $ docker run -ti -d \
      -p 80:80 \
      --label interlock.ext.name=nginx \
      --link=interlock:interlock \
      -v nginx:/etc/conf \
      --name nginx \
      nginx nginx -g "daemon off;" -c /etc/conf/nginx.conf
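With both containers started on the loadbalancer host, you may want to confirm they stayed up before moving on. This quick check is optional; the --format string simply trims the docker ps output to the columns of interest:

# Both interlock and nginx should report an "Up ..." status
$ docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"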
Task 4. Create the other Swarm nodes
A host in a Swarm cluster is called a node. You've already created the manager node. Here, the task is to create a virtual host for each of the remaining nodes. There are three commands required for each:
- create the host with Docker Machine
- point the local environment to the new host
- join the host to the Swarm cluster
If you were building this in a non-Mac/Windows environment, you’d only need to
run the join
command to add a node to the Swarm cluster and register it with
the Consul discovery service. When you create a node, you also give it a label,
for example:
--engine-opt="label=com.function=frontend01"
These labels are used later when starting application containers. In the commands below, notice the label you are applying to each node.
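For example, in standalone Swarm a scheduling constraint can key off these engine labels. A hypothetical illustration only (the nginx image and the container name are placeholders, not part of this walkthrough):

# Schedule a container only on the host labeled com.function==frontend01
$ docker run -d -e constraint:com.function==frontend01 --name example nginx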
1. Create the frontend01 host and add it to the Swarm cluster.

   $ docker-machine create -d virtualbox --virtualbox-memory "2000" \
     --engine-opt="label=com.function=frontend01" \
     --engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
     --engine-opt="cluster-advertise=eth1:2376" frontend01
   $ eval $(docker-machine env frontend01)
   $ docker run -d swarm join --addr=$(docker-machine ip frontend01):2376 consul://$(docker-machine ip keystore):8500

2. Create the frontend02 VM.

   $ docker-machine create -d virtualbox --virtualbox-memory "2000" \
     --engine-opt="label=com.function=frontend02" \
     --engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
     --engine-opt="cluster-advertise=eth1:2376" frontend02
   $ eval $(docker-machine env frontend02)
   $ docker run -d swarm join --addr=$(docker-machine ip frontend02):2376 consul://$(docker-machine ip keystore):8500

3. Create the worker01 VM.

   $ docker-machine create -d virtualbox --virtualbox-memory "2000" \
     --engine-opt="label=com.function=worker01" \
     --engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
     --engine-opt="cluster-advertise=eth1:2376" worker01
   $ eval $(docker-machine env worker01)
   $ docker run -d swarm join --addr=$(docker-machine ip worker01):2376 consul://$(docker-machine ip keystore):8500

4. Create the dbstore VM.

   $ docker-machine create -d virtualbox --virtualbox-memory "2000" \
     --engine-opt="label=com.function=dbstore" \
     --engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
     --engine-opt="cluster-advertise=eth1:2376" dbstore
   $ eval $(docker-machine env dbstore)
   $ docker run -d swarm join --addr=$(docker-machine ip dbstore):2376 consul://$(docker-machine ip keystore):8500

5. Check your work.

   At this point, you have deployed the infrastructure you need to run the application. Test this now by listing the running machines:

   $ docker-machine ls
   NAME           ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
   dbstore        -        virtualbox   Running   tcp://192.168.99.111:2376           v1.10.3
   frontend01     -        virtualbox   Running   tcp://192.168.99.108:2376           v1.10.3
   frontend02     -        virtualbox   Running   tcp://192.168.99.109:2376           v1.10.3
   keystore       -        virtualbox   Running   tcp://192.168.99.100:2376           v1.10.3
   loadbalancer   -        virtualbox   Running   tcp://192.168.99.107:2376           v1.10.3
   manager        -        virtualbox   Running   tcp://192.168.99.101:2376           v1.10.3
   worker01       *        virtualbox   Running   tcp://192.168.99.110:2376           v1.10.3
6. Make sure the Swarm manager sees all your nodes.

   $ docker -H $(docker-machine ip manager):3376 info
   Containers: 4
    Running: 4
    Paused: 0
    Stopped: 0
   Images: 3
   Server Version: swarm/1.1.3
   Role: primary
   Strategy: spread
   Filters: health, port, dependency, affinity, constraint
   Nodes: 4
    dbstore: 192.168.99.111:2376
     └ Status: Healthy
     └ Containers: 1
     └ Reserved CPUs: 0 / 1
     └ Reserved Memory: 0 B / 2.004 GiB
     └ Labels: com.function=dbstore, executiondriver=native-0.2, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.10.3 (TCL 6.4.1); master : 625117e - Thu Mar 10 22:09:02 UTC 2016, provider=virtualbox, storagedriver=aufs
     └ Error: (none)
     └ UpdatedAt: 2016-04-07T18:25:37Z
    frontend01: 192.168.99.108:2376
     └ Status: Healthy
     └ Containers: 1
     └ Reserved CPUs: 0 / 1
     └ Reserved Memory: 0 B / 2.004 GiB
     └ Labels: com.function=frontend01, executiondriver=native-0.2, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.10.3 (TCL 6.4.1); master : 625117e - Thu Mar 10 22:09:02 UTC 2016, provider=virtualbox, storagedriver=aufs
     └ Error: (none)
     └ UpdatedAt: 2016-04-07T18:26:10Z
    frontend02: 192.168.99.109:2376
     └ Status: Healthy
     └ Containers: 1
     └ Reserved CPUs: 0 / 1
     └ Reserved Memory: 0 B / 2.004 GiB
     └ Labels: com.function=frontend02, executiondriver=native-0.2, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.10.3 (TCL 6.4.1); master : 625117e - Thu Mar 10 22:09:02 UTC 2016, provider=virtualbox, storagedriver=aufs
     └ Error: (none)
     └ UpdatedAt: 2016-04-07T18:25:43Z
    worker01: 192.168.99.110:2376
     └ Status: Healthy
     └ Containers: 1
     └ Reserved CPUs: 0 / 1
     └ Reserved Memory: 0 B / 2.004 GiB
     └ Labels: com.function=worker01, executiondriver=native-0.2, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.10.3 (TCL 6.4.1); master : 625117e - Thu Mar 10 22:09:02 UTC 2016, provider=virtualbox, storagedriver=aufs
     └ Error: (none)
     └ UpdatedAt: 2016-04-07T18:25:56Z
   Plugins:
    Volume:
    Network:
   Kernel Version: 4.1.19-boot2docker
   Operating System: linux
   Architecture: amd64
   CPUs: 4
   Total Memory: 8.017 GiB
   Name: bb13b7cf80e8
The command is acting on the Swarm port, so it returns information about the entire cluster. As the output shows, you have a manager and four nodes.
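The four node-creation steps above repeat the same pattern, so if you tear down and rebuild this environment often, you could consolidate them into a small script. A sketch only, assuming the keystore machine already exists and using the same node names as above:

# Create and join the four worker hosts in one loop
for node in frontend01 frontend02 worker01 dbstore; do
  docker-machine create -d virtualbox --virtualbox-memory "2000" \
    --engine-opt="label=com.function=${node}" \
    --engine-opt="cluster-store=consul://$(docker-machine ip keystore):8500" \
    --engine-opt="cluster-advertise=eth1:2376" "${node}"
  eval $(docker-machine env "${node}")
  docker run -d swarm join --addr=$(docker-machine ip "${node}"):2376 \
    consul://$(docker-machine ip keystore):8500
done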
Next step
Your key-value store, load balancer, and Swarm cluster infrastructure are up. You are ready to build and run the voting application on it.