Get Started, Part 6: Deploy your app

Prerequisites

  • Complete the earlier parts of this tutorial.
  • Have the docker-compose.yml file you have been working with handy.

Introduction

You’ve been editing the same Compose file for this entire tutorial. Well, we have good news. That Compose file works just as well in production as it does on your machine. In this section, we will go through some options for running your Dockerized application.

Choose an option

Customers of Docker Enterprise Edition run a stable, commercially supported version of Docker Engine, and as an add-on they get our first-class management software, Docker Datacenter. You can manage every aspect of your application through the Universal Control Plane interface, run a private image registry with Docker Trusted Registry, integrate with your LDAP provider, sign production images with Docker Content Trust, and use many other features.

Bringing your own server to Docker Enterprise and setting up Docker Datacenter essentially involves two steps:

  1. Get Docker Enterprise for your server’s OS from Docker Hub.
  2. Follow the instructions to install Docker Enterprise on your own host.

Note: Running Windows containers? View our Windows Server setup guide.

Once you’re all set up and Docker Enterprise is running, you can deploy your Compose file directly from the UI.

[Screenshot: Deploy an app on Docker Enterprise]

After that, you can see it running, and can change any aspect of the application you choose, or even edit the Compose file itself.

[Screenshot: Managing an app on Docker Enterprise]

Install Docker Engine - Community

Find the install instructions for Docker Engine - Community on the platform of your choice.
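
For example, on a fresh Linux server one common option is Docker's convenience install script. This is a sketch, not the only supported path; check the install instructions for your distribution before running it:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh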

Create your swarm

Run docker swarm init to create a swarm on the node.
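
For example, assuming the manager node's private address is 172.31.20.217 (the address that also appears in the sample output later in this section; substitute your own server's IP), a minimal sketch looks like this:

docker swarm init --advertise-addr 172.31.20.217

The output includes a docker swarm join command; run that on any additional servers you want to add to the swarm as workers.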

Deploy your app

Run docker stack deploy -c docker-compose.yml getstartedlab to deploy the app on the cloud-hosted swarm.

docker stack deploy -c docker-compose.yml getstartedlab

Creating network getstartedlab_webnet
Creating service getstartedlab_web
Creating service getstartedlab_visualizer
Creating service getstartedlab_redis

Your app is now running on your cloud provider.

Run some swarm commands to verify the deployment

You can use the swarm command line, as you’ve done already, to browse and manage the swarm. Here are some examples that should look familiar by now:

  • Use docker node ls to list the nodes in your swarm.
[getstartedlab] ~ $ docker node ls
ID                            HOSTNAME                                      STATUS              AVAILABILITY        MANAGER STATUS
n2bsny0r2b8fey6013kwnom3m *   ip-172-31-20-217.us-west-1.compute.internal   Ready               Active              Leader
  • Use docker service ls to list services.
[getstartedlab] ~/sandbox/getstart $ docker service ls
ID                  NAME                       MODE                REPLICAS            IMAGE                             PORTS
ioipby1vcxzm        getstartedlab_redis        replicated          0/1                 redis:latest                      *:6379->6379/tcp
u5cxv7ppv5o0        getstartedlab_visualizer   replicated          0/1                 dockersamples/visualizer:stable   *:8080->8080/tcp
vy7n2piyqrtr        getstartedlab_web          replicated          5/5                 sam/getstarted:part6    *:80->80/tcp
  • Use docker service ps <service> to view tasks for a service.
[getstartedlab] ~/sandbox/getstart $ docker service ps vy7n2piyqrtr
ID                  NAME                  IMAGE                            NODE                                          DESIRED STATE       CURRENT STATE            ERROR               PORTS
qrcd4a9lvjel        getstartedlab_web.1   sam/getstarted:part6   ip-172-31-20-217.us-west-1.compute.internal   Running             Running 20 seconds ago                       
sknya8t4m51u        getstartedlab_web.2   sam/getstarted:part6   ip-172-31-20-217.us-west-1.compute.internal   Running             Running 17 seconds ago                       
ia730lfnrslg        getstartedlab_web.3   sam/getstarted:part6   ip-172-31-20-217.us-west-1.compute.internal   Running             Running 21 seconds ago                       
1edaa97h9u4k        getstartedlab_web.4   sam/getstarted:part6   ip-172-31-20-217.us-west-1.compute.internal   Running             Running 21 seconds ago                       
uh64ez6ahuew        getstartedlab_web.5   sam/getstarted:part6   ip-172-31-20-217.us-west-1.compute.internal   Running             Running 22 seconds ago        

Open ports to services on cloud provider machines

At this point, your app is deployed as a swarm on your cloud provider servers, as evidenced by the docker commands you just ran. But, you still need to open ports on your cloud servers in order to:

  • allow communication between the redis service and the web service if you are using multiple nodes

  • allow inbound traffic to the web service on any worker nodes so that Hello World and Visualizer are accessible from a web browser.

  • allow inbound SSH traffic on the server that is running the manager (this may be already set on your cloud provider)

These are the ports you need to expose for each service:

Service      Type    Protocol   Port
web          HTTP    TCP        80
visualizer   HTTP    TCP        8080
redis        TCP     TCP        6379

Methods for doing this vary depending on your cloud provider.

We use Amazon Web Services (AWS) as an example.
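
For instance, with the AWS CLI you would add inbound rules to the security group attached to your swarm nodes. The group ID below is a hypothetical placeholder, and restricting redis to the security group itself (rather than the open internet) is just one reasonable choice:

# sg-0123456789abcdef0 is a hypothetical ID; substitute your own security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8080 --cidr 0.0.0.0/0
# keep redis reachable only from nodes in the same security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 6379 --source-group sg-0123456789abcdef0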

What about the redis service to persist data?

To get the redis service working, you need to SSH into the cloud server where the manager is running and make a data/ directory in /home/docker/ before you run docker stack deploy. Another option is to change the data path in the Compose file (docker-compose.yml) to a pre-existing path on the manager server. This example does not include this step, so the redis service is not up in the example output.
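
A minimal sketch of that first option, assuming your cloud machine uses the docker user with a key file named docker-key.pem (the user, key name, and manager address are placeholders; adjust them for your provider):

# create the directory the redis service bind-mounts on the manager, then deploy as usual
ssh -i docker-key.pem docker@<manager-public-ip> "mkdir -p /home/docker/data"
docker stack deploy -c docker-compose.yml getstartedlab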

Iteration and cleanup

From here you can do everything you learned about in previous parts of the tutorial.

  • Scale the app by changing the docker-compose.yml file and redeploying on the fly with the docker stack deploy command, as shown in the sketch after this list.

  • Change the app behavior by editing code, then rebuild, and push the new image. (To do this, follow the same steps you took earlier to build the app and publish the image).

  • You can tear down the stack with docker stack rm. For example:

    docker stack rm getstartedlab
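
As a sketch of the scaling workflow mentioned above, you could bump the replica count for the web service in docker-compose.yml (5 is the value used earlier in this tutorial; 10 is just an illustrative new target):

    web:
      ...
      deploy:
        replicas: 10   # previously 5; the rest of the file is unchanged

Then run the same deploy command again and the swarm adjusts the running tasks to match:

    docker stack deploy -c docker-compose.yml getstartedlab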
    

Unlike the scenario where you were running the swarm on local Docker Machine VMs, your swarm and any apps deployed on it continue to run on cloud servers regardless of whether you shut down your local host.

Congratulations!

You’ve taken a full-stack, dev-to-deploy tour of the entire Docker platform.

There is much more to the Docker platform than what was covered here, but you have a good idea of the basics of containers, images, services, swarms, stacks, scaling, load-balancing, volumes, and placement constraints.

Want to go deeper? Here are some resources we recommend:

  • Samples: Our samples include multiple examples of popular software running in containers, and some good labs that teach best practices.
  • User Guide: The user guide has several examples that explain networking and storage in greater depth than was covered here.
  • Admin Guide: Covers how to manage a Dockerized production environment.
  • Training: Official Docker courses that offer in-person instruction and virtual classroom environments.
  • Blog: Covers what’s going on with Docker lately.