Exercise 3: Full stack Docker

Using Docker to run a single Python process is nice, but it’s not very realistic.

In this exercise, you will use Docker to run a more complex stack. After that, you will use a new Docker tool to seamlessly push your stack to the cloud.

Join the chat room for this exercise: https://gitter.im/atbaker/oscon-exercise-3

Warning

Remember, you will need to reference the Docker documentation to work through these exercises.

Meet our Django app

Django is another Python web framework, more extensive than Flask. Believe it or not, Django turned 10 this year!

In this exercise, we will work with a sample Django application I have already created. You should start by cloning the repository at https://github.com/atbaker/docker-django. You can do this with git clone if you have git installed, or by clicking the “Download ZIP” button on GitHub.

This Django application has more dependencies than our Flask app. It uses Gunicorn as its WSGI server, PostgreSQL as its database, and Memcached for caching.

When you get it all running, it looks like this:

[Screenshot: docker-django.png - the sample Django app running in a browser]

Introducing docker-compose

Docker-compose is a tool that makes it easier to run multiple Docker containers at once. When you use docker-compose, you create a single configuration file, docker-compose.yml, defining each service in your stack. When you run docker-compose up, docker-compose will create a new container for each service you specified.

Take a quick look at the docker-compose documentation (don’t worry about installation - it’s already installed on your machine).

Now check out the docker-compose.yml file in our repo. It defines three services - django, postgres, and memcached - one for each component in our stack.

Your goal is to complete this docker-compose configuration file. Here are some hints:

  • The postgres and memcached services are simple - they each just need an image setting.
    • The postgres service should run a container from the postgres:9.3 image (the :9.3 tells Docker we want that specific version of Postgres)
    • The memcached service should run a container from the atbaker/memcached-verbose image
  • The django service is a little more complicated:
    • You should include a build setting telling docker-compose to create an image for this service by building the Dockerfile in this directory. I have already completed the Dockerfile for you
    • You should specify two container links - one to the postgres service and one to the memcached service
    • You should expose port 8000 from the django service’s container to port 8000 on your host
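If you want to check your work, the hints above translate into a file roughly like this (it should end up very close to the solution branch linked below):

```
django:
  build: .          # build the image from the Dockerfile in this directory
  links:
    - postgres      # reachable from the django container as "postgres"
    - memcached
  ports:
    - "8000:8000"   # map host port 8000 to container port 8000

postgres:
  image: postgres:9.3

memcached:
  image: atbaker/memcached-verbose
```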

Note

You may also find the “docker-compose.yml reference” section of the docs useful: http://docs.docker.com/compose/compose-file/

When your docker-compose.yml file is complete, try running docker-compose up. If everything worked correctly, you should be able to open the app in your browser on port 8000 and see something like the screenshot above.

Note

If you get stuck, you can reference the solution branch of the docker-django repository on GitHub: https://github.com/atbaker/docker-django/blob/solution/docker-compose.yml

Working with docker-compose

docker-compose up is probably the command you’ll use most often with docker-compose, but there are some other features you should know about as well:

Detached mode

Just like docker run, you can pass a -d option to docker-compose up to regain control of your terminal session after docker-compose has started all your services.

Be careful, though, because your containers may fail as soon as docker-compose starts them. I usually run docker-compose ps after docker-compose up -d to make sure everything’s good.

Try starting your containers in detached mode with docker-compose now.
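Those two commands together look like this (the exact names in the ps output will depend on your project directory):

```
$ docker-compose up -d
$ docker-compose ps    # every service should show a State of "Up"
```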

Running one-off commands

Try clicking on the “Login” button on our Django site. You will get a 500 error because we haven’t migrated our database yet. To do so, we need to run the manage.py migrate command from our django service while it’s connected to the postgres service.

Fortunately, docker-compose makes these one-off commands easy. Use the docker-compose run command with the django service to run python django-example/manage.py migrate.
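Assuming your service is named django as in our docker-compose.yml, the full command looks like:

```
$ docker-compose run django python django-example/manage.py migrate
```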

If you succeed, you should be able to refresh your browser and see a nice, normal login screen.

Note

You should also check out the docker exec command, which is part of the core Docker client.

While docker-compose run creates a new container with all the links and options specified in your docker-compose.yml file, docker exec spawns a new process inside an existing container.

docker exec is handy for really sticky debugging situations, but most of the time I use docker-compose run.
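For example, to open an interactive shell inside the already-running django container (compose names containers after the project directory, so yours may be called something like dockerdjango_django_1 - check docker ps to be sure):

```
$ docker ps                                    # find your container's name
$ docker exec -it dockerdjango_django_1 bash   # interactive shell inside it
```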

Using volumes

When you want to share a filesystem directory between your host computer and your container, you use Docker volumes. This is an extremely handy feature for local development with Docker - without it, you would have to rebuild your Docker images after every code change.

First, let’s make a change to our Django base template. You can find it at django-example/templates/base.html. Add some extra text to the paragraph on line 63.

If we were running this Django project natively, we would see that change reflected in our browser as soon as we refreshed. But Docker is serving our site through a container which was created from an image we built when we first ran docker-compose up. Docker images are immutable, so our template change won’t show up unless we rebuild the image with docker-compose build and then run docker-compose up -d again.

To fix this, add a volumes entry to the django service in our docker-compose.yml. You want to map the current directory from the host (signified by a period, .) to the /usr/src/app directory in the container.
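With the new entry, the django service should look something like this (the other settings stay as you wrote them):

```
django:
  build: .
  links:
    - postgres
    - memcached
  ports:
    - "8000:8000"
  volumes:
    - .:/usr/src/app   # map the current host directory into the container
```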

Now run docker-compose up -d again to re-create our containers with the new volume settings. You should see our change appear immediately, and any changes you make after that will be shown as well.

Other fun commands

Before you move on to the next section, try to do the following with docker-compose:

  • Look at the logs for all your services, and also just the memcached service. Refresh your web browser while watching the memcached service logs
  • Manually re-build your django service
  • Pull down the latest versions of all your services
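If you get stuck, docker-compose help lists every subcommand - the three tasks above boil down to something like:

```
$ docker-compose logs              # interleaved logs from every service
$ docker-compose logs memcached    # just the memcached service
$ docker-compose build django      # manually rebuild the django image
$ docker-compose pull              # pull the latest image for each service
```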

Using docker-machine to deploy to the cloud

Now that we have Dockerized a full stack application, it’s time to finally taste that sweet payoff - effortlessly taking it to the cloud.

In this section, we’re going to use docker-machine, another Docker command line tool. Docker-machine helps you create and manage Docker hosts across environments.

Warning

Docker-machine is very much in beta. You might run into some bugs, but I chose to use it today because of its value and importance to Docker’s future.

Installation

First, install docker-machine by downloading the latest binary listed in the official documentation: https://docs.docker.com/machine/.

OS X and Linux users will need to drop it in someplace like /usr/local/bin and then rename it to docker-machine. You may also need to make it executable by running chmod +x docker-machine.

You can test that you have installed it correctly by running docker-machine help.

Carving out a slice of the cloud

Docker-machine’s primary purpose is to create Docker-ready servers for us to use with docker-compose. Let’s do just that, creating a new Amazon Web Services EC2 server.

The command below will use my credentials to give you a new server, but you must change the name at the end to something unique to you. I recommend something like your name:

docker-machine create --driver=amazonec2 --amazonec2-access-key=AKIAIZK7X5MLYA3MIPQA --amazonec2-secret-key=Zc9FTePGX1supCWN8bdho4glB1D3cNVP2a1EvWwd  --amazonec2-instance-type="t2.small" --amazonec2-region="us-east-1" --amazonec2-vpc-id="vpc-5aa6b93f" YOUR_NAME_HERE

Warning

If you are not participating in one of my in-person workshops, you will need to replace the AWS access key and secret key values with those from your own AWS account.

Note

Obviously, these AWS EC2 API keys could do a lot of damage. Please don’t do something mean like mine Bitcoins on my dollar during our session!

Docker-machine will take a few minutes to provision the server. When it’s done, you will see a message like:

Launching instance...
To see how to connect Docker to this machine, run: docker-machine env YOUR_NAME_HERE

That means your brand-new Ubuntu server with Docker is ready to go!

Tweaking our docker-compose.yml

While our new server is provisioning, we need to make a tweak to our docker-compose.yml file.

Our new server won’t have the source code from our GitHub repository, so we can’t use any docker-compose options that reference the current directory.

That means we need to make some changes to our django service. Comment out the build setting and add an image one. You should also comment out the volumes entry.

For the new image value, you can either use atbaker/docker-django or build, tag, and push your version of the Django app using the same workflow we did for our Flask app in the previous exercise.
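After those tweaks, the django service should look something like this (using my pre-built image - substitute your own if you pushed one):

```
django:
  # build: .
  image: atbaker/docker-django
  links:
    - postgres
    - memcached
  ports:
    - "8000:8000"
  # volumes:
  #   - .:/usr/src/app
```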

Don’t forget to save your changes to your docker-compose.yml file before moving to the next step!

Note

In a real business, our build process would automatically create and push these images to the Docker Hub for us.

That’s one small step for man...

By setting some environment variables in our terminal session, we can point our local Docker client to our new server’s Docker daemon. Do that by running eval $(docker-machine env YOUR_NAME_HERE) if you haven’t already.
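If you’re curious what that eval is doing, run docker-machine env by itself - it just prints export statements along these lines (the IP and paths will differ on your machine):

```
$ docker-machine env YOUR_NAME_HERE
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://<your server's IP>:2376"
export DOCKER_CERT_PATH="/home/you/.docker/machine/machines/YOUR_NAME_HERE"
export DOCKER_MACHINE_NAME="YOUR_NAME_HERE"
```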

Now’s the big moment - to stand up our Dockerized Django app on our new server, just run docker-compose up -d.

You will notice that Docker needs to pull down all the images we use - that’s because all that work is happening on our server, which doesn’t have any Docker images at all yet.

When docker-compose finishes standing up our containers, do a quick docker-compose ps to make sure all are up and running. Then run docker-machine ip YOUR_NAME_HERE to see what IP address you should use in your web browser.

Then just head to http://[YOUR IP ADDRESS]:8000 to see your production site in all its glory!

In the next section, you will try your hand at building your own Docker-powered Platform-as-a-Service. When you’re ready, head to Exercise 4: Your own PaaS!