Docker Meet WordPress: A production deploy for the average person

A lot of the WordPress Docker blog posts that I have encountered seem to skip over some important parts. I have seen posts that run both mysql and apache in a single container, as well as posts that skip volumes entirely, which could lead to data loss. The Docker ecosystem has evolved a lot in just the past few months and I figured it was worth writing a post showing a more robust way to deploy and manage WordPress with Docker.

This post makes a few sane assumptions, the first of which is that you care about your data. The second is that you may want to run other containers with various services on the same host, whether because you want to run multiple WordPress sites or other services entirely. Third, we assume you have some basic Linux administration experience. Finally, this guide assumes that you have either Docker for Mac or Docker Machine, and that you have gone through the basic tutorials here and here.

This guide also assumes that you are deploying on a single host for now.

One thing this guide does not assume is which provider you are deploying to. It should work whether you are deploying on a server at home, DigitalOcean, Linode, or AWS.

It is highly recommended that if you deploy to a provider where your host has a direct public IP address, you set up iptables to restrict access to the Traefik admin and Docker ports except from authorized IPs.
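A rough sketch of what that might look like; the ports and the trusted CIDR here are placeholders, so adjust them to your own admin port, Docker daemon port, and source network:

```bash
# allow only a trusted network to reach the Traefik admin UI and the Docker TLS port
iptables -A INPUT -p tcp --dport 8080 -s 203.0.113.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 2376 -s 203.0.113.0/24 -j ACCEPT
# drop everyone else
iptables -A INPUT -p tcp --dport 8080 -j DROP
iptables -A INPUT -p tcp --dport 2376 -j DROP
```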

You can see a complete example of the code used here.

Building our WordPress Container

The first thing we want to do is create a directory that will store the various configuration files that are required to manage our docker containers.

Next, we want to create a directory called wordpress. Within this directory we will create a Dockerfile with the following contents.
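Something along these lines works; the WordPress tag and the extension being installed are just placeholders, so swap in whatever you actually need:

```dockerfile
# pin to a specific tag of the official image rather than :latest
FROM wordpress:4.5-apache

# install any extra PHP extensions your plugins need
# (docker-php-ext-install ships with the official php/wordpress images)
RUN docker-php-ext-install exif
```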

The reason for the Dockerfile is so that we can install additional PHP extensions. We inherit from the official WordPress Docker image, so our Dockerfile is going to be pretty light. It's also important to pin yourself to a specific source image. You can read more about why here.

Next we need to create our docker-compose.yml file. This should be placed in the docker-wordpress directory. You can see documentation for the compose file and its features here.
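A minimal version of that compose file, matching what is described below (the service, volume, and network names are the ones referenced throughout this post; the version 2 schema is assumed):

```yaml
version: '2'

services:
  wordpress:
    # build from the Dockerfile we created in the previous step
    build: ./wordpress
    restart: always
    volumes:
      - wordpress-data:/var/www/html
    networks:
      - back-end

volumes:
  wordpress-data:

networks:
  back-end:
```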

The compose file format is pretty simple and easy to read. We have done several things in just a few lines of code.

  1. You can see that we have defined a new service container called wordpress.
  2. We are asking it to build using the Dockerfile inside the wordpress directory that we created in the previous step.
  3. We have created a volume called wordpress-data, and it's mounted in that container at /var/www/html.
  4. We have established a network for back-end communication between our wordpress container and the database.

Now let's create a docker-compose.override.yml file in the same directory as the docker-compose.yml file.
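Something like this (host port 8080 is the one used for local testing throughout this post):

```yaml
version: '2'

services:
  wordpress:
    ports:
      - "8080:80"
```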

We have defined a port mapping to map port 8080 on our host machine to 80 in the container. This is only for development (i.e., running on your local laptop), which is why we placed it in the override file. When invoking docker-compose commands, both docker-compose.yml and docker-compose.override.yml are merged by default.

We can now run our wordpress container by invoking $ docker-compose up -d .

If you run $ docker-compose ps you will see that the container exited and did not start up properly.

We can examine the logs to see why this container exited with $ docker-compose logs wordpress

This error makes sense; we haven't defined the environment variables that we need in order to point WordPress to our MySQL database (which we also haven't created). Move on to the next section and we can clear this up.

Building our MySQL Container

We can now update our docker-compose.yml file and include a mysql container.
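An updated compose file consistent with the list below might look like this; the database name, user, and passwords are placeholders, so change them:

```yaml
version: '2'

services:
  wordpress:
    build: ./wordpress
    restart: always
    links:
      - mysql
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: changeme
    volumes:
      - wordpress-data:/var/www/html
    networks:
      - back-end

  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: changeme
      MYSQL_ROOT_PASSWORD: changeme-too
    volumes:
      - mysql-data:/var/lib/mysql
    networks:
      - back-end

volumes:
  wordpress-data:
  mysql-data:

networks:
  back-end:
```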

We have made several additions to the compose file.

  1. We added a mysql container, and we can use the mysql:5.7 image from the docker hub.
  2. Similar to the wordpress container, we have created a data volume for /var/lib/mysql.
  3. We have associated our mysql container with our back-end network.
  4. We have configured several environment variables on the mysql container. This will cause the container to automatically create a database and a user for us.
  5. We have established a container link from wordpress to the mysql container.
  6. We added environment variables to the wordpress container to point it to mysql.
  7. You should also take the time to change any passwords in this file to be unique.

Testing everything locally

Run $ docker-compose up -d mysql  to start the mysql container first, as it may take a minute to initialize the database when running locally.

Now that MySQL is running, we can start the wordpress container, see that it's running, and tail its logs.
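For example:

```bash
$ docker-compose up -d wordpress
$ docker-compose ps
$ docker-compose logs wordpress
```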

If you browse to localhost:8080 (if running Docker for Mac/Windows or natively on Linux), or 192.168.99.100:8080 (or whatever $ docker-machine ip shows), you should see the setup screen.

(Screenshot: the WordPress installation screen)

If you walk through and finish the setup, we can demonstrate the ability to destroy containers and still have your data persist.

Create a test post after the setup and then run $ docker-compose down  and $ docker-compose up -d  to delete and then recreate the wordpress and mysql containers.

With your browser you can see that your site is still configured and your test post still exists.

Now we can move on to creating a production deployment.

Deploying it to a production instance

For this I am going to use a DigitalOcean droplet; however, I am going to use the docker-machine generic driver to demonstrate installation on any host.

This next bit assumes you have a server somewhere with a fresh install. Run the command below, replacing the IP address, username, ssh key, and machine name with your own. When you invoke this command it will do several things; I highly recommend reading about what it does here.
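The command looks something like this (the IP, user, key path, and machine name are all placeholders):

```bash
$ docker-machine create \
    --driver generic \
    --generic-ip-address=203.0.113.10 \
    --generic-ssh-user=root \
    --generic-ssh-key=$HOME/.ssh/id_rsa \
    wordpress-prod
```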

Invoke $ docker-machine ls  and you should see the remote host listed.

Now we can deploy our stack to the remote host.
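Point your local docker client at the remote machine and bring up mysql first. The machine name matches the create command above, and we skip the override file so the development port mapping is not applied:

```bash
$ eval "$(docker-machine env wordpress-prod)"
$ docker-compose -f docker-compose.yml up -d mysql
```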

Now that the mysql container is running, give it a minute or two and then deploy the wordpress container.
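Again skipping the override file:

```bash
$ docker-compose -f docker-compose.yml up -d wordpress
```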

If you run  $ docker ps  you will see that both containers are now running.

Both services are deployed; however, we cannot access them until we deploy Traefik to route requests to the containers.

Setting up Traefik

Traefik is an amazing lightweight service that allows you to route traffic to the appropriate containers based on simple labels in your docker-compose file. It is also incredibly simple to set up.

First, create a completely separate directory from the wordpress one above. I’ll call mine docker-traefik.

In this directory, create another directory called traefik and within that a Dockerfile with the following contents
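A two-line Dockerfile is enough here; the tag is a placeholder for whatever Traefik 1.x release you are targeting:

```dockerfile
FROM traefik:1.7
COPY traefik.toml /etc/traefik/traefik.toml
```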

In the same directory as the Dockerfile create a new file named traefik.toml with the following:
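A minimal Traefik 1.x configuration along these lines; the domain is a placeholder, and exposedbydefault is turned off so only explicitly labelled containers get proxied:

```toml
defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"

# the admin/dashboard interface on port 8080
[web]
address = ":8080"

# watch the local Docker daemon for containers to proxy
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "example.com"
watch = true
exposedbydefault = false
```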

Now in the root of the docker-traefik directory create a docker-compose.yml file with the following:
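Something like the following, which mounts the Docker socket so Traefik can watch for containers and joins the external traefik network that we create in the next step:

```yaml
version: '2'

services:
  traefik:
    build: ./traefik
    restart: always
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik

networks:
  traefik:
    external: true
```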

Now before we start this up, we need to create a special network by hand. If you look back at our wordpress compose file, we created a single network called back-end. That network is siloed and only joinable by containers within that project, so we need to create a network that all web-facing containers can join, allowing Traefik to communicate with them and proxy requests. You can read more about this here.

To create the network simply run $ docker network create traefik. (This network will only function on this host. If you need cross-host functionality, look into using the overlay driver instead of bridge, the default.)

We can see that the network was created.

We can now start our Traefik service.
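With your docker client pointed at the host where WordPress is running:

```bash
$ docker-compose up -d
```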


Connecting your application to Traefik

Now that Traefik is deployed, we just need to make a couple of small tweaks to our wordpress app to ensure that Traefik can communicate with our wordpress container.
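The relevant additions to the wordpress compose file look roughly like this; the hostname in the frontend rule is a placeholder for your own domain, and everything already in the file stays as it was:

```yaml
version: '2'

services:
  wordpress:
    # ...build, environment, volumes, etc. as before
    networks:
      - back-end
      - traefik
    labels:
      - "traefik.frontend.rule=Host:blog.example.com"
      - "traefik.docker.network=traefik"

  mysql:
    # ...image, environment, volumes, etc. as before
    labels:
      - "traefik.enable=false"

networks:
  back-end:
  traefik:
    external: true
```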

We made a few changes to this file:

  1. We created a new network at the bottom called traefik; the difference here is that we are referencing an external network which is also named traefik.
  2. We configured the wordpress instance to use this new network.
  3. We configured a few container labels.
    1. We disabled Traefik for the mysql instance as we wouldn't ever want to proxy http requests to it.
    2. We specified the URL that requests will originate from for routing to this container.
    3. We specified which network name Traefik should connect to the container on.

If we run $ docker-compose -f docker-compose.yml up -d it will recreate both the mysql and WordPress containers with the new labels.

And if we browse to the admin port for Traefik (8080):

(Screenshot: the Træfik dashboard)

We can see that the container was discovered and set up.


If you browse to the domain you specified in the docker-compose file, you should see the WordPress setup screen.

At this point you are all set up and good to start working with WordPress.

Adding PHP Libraries / Extensions

If you find out you need to install an additional PHP library, you can add it to the Dockerfile for the WordPress install and just run $ docker-compose build and then $ docker-compose -f docker-compose.yml up -d.
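For example, building on the Dockerfile sketched earlier and adding one more extension (bcmath is just an illustrative choice):

```dockerfile
FROM wordpress:4.5-apache

RUN docker-php-ext-install exif
# newly required extension
RUN docker-php-ext-install bcmath
```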

WordPress Upgrades

If you need to update WordPress you can safely follow the One-Click Upgrade Process as any file changes are persisted to the data volume.

Realtime (or not) Traffic Replication with Gor

As operations engineers, we all have various tools that we rely on heavily. Those could be for load testing, monitoring, code analysis or deployments. One tool that I wanted to touch on is gor. Gor is an amazing tool which allows you to replay captured traffic either in real time or from capture files. This is incredibly useful when rewriting applications or even testing refactored changes. You can spot bottlenecks before you push changes to production.


Realtime Replication

Grab the latest build here and deploy or copy it to the machine that you want to be the replayer. The replayer will be mainly CPU bound. Once on this machine, you can run it with the following:
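Roughly the following; the exact flag names differ between gor versions (in particular the flag used to override the Host header), so treat this as a sketch rather than a copy-paste command:

```bash
$ GOMAXPROCS=4 gor \
    --input-tcp :28020 \
    --output-http "http://staging.example.org|10"
# on newer goreplay releases a header override such as
#   --http-set-header "Host: example.org"
# can be added to force the Host header
```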

This will set up the replayer to listen on port 28020. Any incoming traffic to this port will have its host header rewritten to match example.org and it will then be sent to staging.example.org, limited to 10 requests/second. You could also change the --output-http line to "http://staging.example.org|10%" to send 10% of incoming requests. Remove |10 or |10% entirely to just mirror 100% of traffic. For performance reasons, I would set GOMAXPROCS equal to the number of CPUs on this server.

Deploy or copy the latest build to a server that you want to replicate traffic from.
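On the source machine, gor just needs to sniff port 80 and forward to the replayer's listen port (the replayer hostname here is a placeholder):

```bash
$ GOMAXPROCS=2 gor \
    --input-raw :80 \
    --output-tcp replayer.example.org:28020
```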

This will set up gor to capture traffic from port 80 and send it to the replayer which we started in the previous step.

Once these are both running (always ensure the replayer is running first) you will see requests start to hit your target.

Captured Replication

However, if you want to playback traffic faster than 100%, or for whatever reason don’t want to mirror live production traffic then you can create a capture file on the source machine.
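Capturing to a file looks something like this:

```bash
$ gor --input-raw :80 --output-file requests.gor
```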

Run this for however long you want. It will capture data from port 80 and dump it to requests.gor

Copy this file to our replayer and we can play this back.
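A sketch of the playback command; the |200% modifier on the input file is what controls the replay speed, and the target URL is a placeholder:

```bash
$ gor \
    --input-file "requests.gor|200%" \
    --output-http "http://staging.example.org"
```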

This is very similar to the first time we ran the replayer. However, this will use our requests.gor file as a source and replay it back at 200% speed. So if your production system saw 100 requests/second, this should replay it back at 200 requests/second.

Note that there are a lot of other things you can do with gor, and the readme has a lot of great documentation around its use. I have successfully used gor to replicate over 400k requests/minute from one service to a staging service. In that case, I actually had to run multiple replayers.


Docker for Developers Part 2

The Saga continues

This is a continuation of my previous post which was an introduction to docker geared for developers. In the previous post, we got somewhat familiar with the docker ecosystem and built a very simple hello world application. In this part we are going to get dynamodb running locally, run a one off script to preload data, build a sample api service that reads from dynamo, and finally a sample website that reads from that api service. As with the first part, you can retrieve a copy of the completed code base off github or directly download the assets for part 2 here.

Running DynamoDB

Before we can really start building our API server, we need a place to store data. As we learned in the previous part, we can use docker-compose files to orchestrate the various services that make up our application. This could be a postgres or mysql instance, however in this case the application is going to leverage DynamoDB for storage. When developing the application it doesn’t make sense to create an actual dynamo table on AWS as you would incur some costs for something that is only going to be used for development. There are some caveats and limitations to this however which you can read about here. Since we would never want a local dynamodb container running in any environment other than development, we want to go ahead and edit docker-compose.override.yml with the following:
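The override file gains a dynamodb service alongside the existing movie-api overrides. The image name here is an assumption (amazon/dynamodb-local is the one available today; the original post may have used a different community image), and the old v1 compose format is assumed throughout this series:

```yaml
movie-api:
  volumes:
    - ./movie-api:/code
  ports:
    - "8080:5000"

dynamodb:
  image: amazon/dynamodb-local
  restart: always
  expose:
    - "8000"
```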

Save that file, and run docker-compose up -d and you should see output that the dynamodb container is running alongside our movie-api instance

Now that DynamoDB is running, we need to be able to link our API container to it so that they can communicate. Our Compose file needs one small change.
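Since the dynamodb service only exists in the override file, the link is assumed to go there as well; add a links entry under movie-api:

```yaml
movie-api:
  volumes:
    - ./movie-api:/code
  ports:
    - "8080:5000"
  links:
    - dynamodb
```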

Rerun docker-compose up -d which will recreate the movie-api-1 container.

Loading data into DynamoDB

Now that we have dynamodb running, we can show how to run a one off command to invoke a script that will seed data into dynamo. First, we can create the directory structure for storing our sample data and a place for our seed scripts. Within the movie-api directory create two new directories, one called scripts and the other data. We can use an Amazon published sample data set for building this application. You can retrieve it here and extract it to the data directory.

The application directory structure should now look like this:

Now within the scripts directory let's write a simple script for seeding this data into our local dynamodb instance. Within the movie-api/scripts directory create a file called seed_movie_data.py and open it for editing.
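A sketch of such a script using boto (which the API also uses); the table name, key schema, data file name, and the dynamodb link alias are assumptions:

```python
#!/usr/bin/env python
"""Seed the local DynamoDB instance with the sample movie data."""
import json

from boto.dynamodb2.fields import HashKey, RangeKey
from boto.dynamodb2.layer1 import DynamoDBConnection
from boto.dynamodb2.table import Table
from boto.dynamodb2.types import NUMBER

TABLE_NAME = 'Movies'                # assumed table name
DATA_FILE = 'data/moviedata.json'    # assumed filename from the sample data set

# Talk to the linked dynamodb container instead of AWS. Credentials come from
# the environment variables we set in the override file.
conn = DynamoDBConnection(host='dynamodb', port=8000, is_secure=False)

# Create the table if it does not already exist.
if TABLE_NAME not in conn.list_tables().get('TableNames', []):
    Table.create(
        TABLE_NAME,
        schema=[HashKey('year', data_type=NUMBER), RangeKey('title')],
        throughput={'read': 5, 'write': 5},
        connection=conn)

table = Table(TABLE_NAME, connection=conn)

with open(DATA_FILE) as f:
    movies = json.load(f)

# Batch write every movie into the table.
with table.batch_write() as batch:
    for movie in movies:
        batch.put_item(data={
            'year': movie['year'],
            'title': movie['title'],
            'info': json.dumps(movie.get('info', {})),
        })

print('Seeded {0} movies'.format(len(movies)))
```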


This script is pretty straightforward: it will create the table within dynamodb if it doesn't exist, and then seed it with a little over 4,000 movies from the json file in our data directory.

Before we can run our script, we need to set a few environment variables on our movie-api instance. Go ahead and open up docker-compose.override.yml and adjust it to reflect the following:
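Something along these lines, with dummy credentials added under environment (the region variable is an extra assumption and may not be strictly necessary):

```yaml
movie-api:
  volumes:
    - ./movie-api:/code
  ports:
    - "8080:5000"
  links:
    - dynamodb
  environment:
    AWS_ACCESS_KEY_ID: foo
    AWS_SECRET_ACCESS_KEY: bar
    AWS_DEFAULT_REGION: us-east-1

dynamodb:
  image: amazon/dynamodb-local
  restart: always
  expose:
    - "8000"
```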

The AWS credentials do not need to be anything real; in fact, leave them as above (foo and bar). They just need to be set to prevent boto from barfing when connecting to the local dynamodb instance. In a production setting, we would leverage IAM roles on the instance to connect, which is far more secure than setting credentials via an environment variable.

Once you have created the script, and also set the environment variables we can run the following to recreate the container with the new environment variables and then run the script.
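For example (the service and script names match the ones used above):

```bash
$ docker-compose up -d movie-api
$ docker-compose run --rm movie-api python scripts/seed_movie_data.py
```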

As you can see, we ran the script that we created within the container. We do not need to have our python environment set up locally. This is great, because the container's environment is isolated from every other application we may be developing (and even other services within this application). This provides a high degree of certainty that if we deploy this image, it will function as we expect. Furthermore, we also know that if another developer pulls this code down and runs the project it will work.

Building our API

Now that we have the data in our datastore, we can build a simple lightweight API to expose this information to a client. To keep things simple, we are going to create a single endpoint which will return all the movies that were released in a specified year. Let's go ahead and open up movie-api/demo/services/api.py in our IDE and add a bit of code to make this happen.
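Building on the hello-world version from part 1, a sketch of the endpoint might look like this; the route path, table name, and the dynamodb link alias are assumptions consistent with the seed script above:

```python
import json

from boto.dynamodb2.layer1 import DynamoDBConnection
from boto.dynamodb2.table import Table
from flask import Flask, jsonify

app = Flask(__name__)

# Connect to the linked local dynamodb container (credentials come from the
# environment variables we set in the override file).
conn = DynamoDBConnection(host='dynamodb', port=8000, is_secure=False)
movies_table = Table('Movies', connection=conn)


@app.route('/')
def index():
    return 'Hello World!'


@app.route('/movies/<int:year>')
def movies_by_year(year):
    """Return every movie released in the given year."""
    results = movies_table.query_2(year__eq=year)
    return jsonify(movies=[
        {'year': int(item['year']),
         'title': item['title'],
         'info': json.loads(item['info'])}
        for item in results])
```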

Save this and then we can try querying our service with a simple curl command:
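Assuming the /movies/<year> route sketched above and the 8080 port mapping from part 1:

```bash
$ curl http://192.168.99.100:8080/movies/1993
```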

Building our Website

Now that we have an API that returns the data we need, we can create a simple web interface with a form that will present this data in a nice fashion. There are a few ways to make this happen; normally I'd recommend using something like Angular, however to further demonstrate container linking I will use a separate flask app.

Within the movie-web directory we need to create our app skeleton. To simplify things, copy the Dockerfile, app.py and requirements.txt files from movie-api to movie-web. Besides the copying of those 3 files, go ahead and create the following directory structure and empty files so that the output of tree matches the below.
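The resulting layout is assumed to look roughly like this:

```text
movie-web
├── Dockerfile
├── app.py
├── requirements.txt
└── demo
    ├── __init__.py
    └── services
        ├── __init__.py
        ├── site.py
        └── templates
            ├── index.html
            └── results.html
```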

In requirements.txt remove the boto reference and add in requests.

Open up app.py and edit the third line to reflect the below
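Assuming the app.py sketched in part 1, the import should now point at the site module instead of the api module:

```python
from demo.services.site import app
```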

Go ahead and create demo/services/site.py and open it up within your IDE.
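For now a minimal placeholder is enough so that the container has something to serve; we flesh it out further down:

```python
from flask import Flask

app = Flask(__name__)


@app.route('/')
def index():
    return 'Hello from movie-web!'
```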

Now to get things running we just need to edit our docker-compose.yml and docker-compose.override.yml files.

docker-compose.yml:
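The base file gains a movie-web service linked to the api (again in the assumed v1 format):

```yaml
movie-api:
  build: ./movie-api
  restart: always
  expose:
    - "5000"

movie-web:
  build: ./movie-web
  restart: always
  expose:
    - "5000"
  links:
    - movie-api
```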

docker-compose.override.yml:
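And the override gets a movie-web entry alongside the existing movie-api and dynamodb entries, mounting the movie-web code and mapping it to host port 8081 (the port used later in this post):

```yaml
movie-web:
  volumes:
    - ./movie-web:/code
  ports:
    - "8081:5000"
```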

With those changes saved, we can run docker-compose up -d, which should launch our container. We can verify connectivity with curl.
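For example:

```bash
$ docker-compose up -d
$ curl http://192.168.99.100:8081/
```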

Now let's create a couple of templates and build out the endpoints on our webapp so that we can perform a simple search.

Open up demo/services/templates/index.html:
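A simple form that posts a year to a /search endpoint (the endpoint name is an assumption that matches the site.py sketch below):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Movie Search</title>
  </head>
  <body>
    <h1>Find movies by year</h1>
    <form action="/search" method="post">
      <input type="text" name="year" placeholder="e.g. 1993">
      <input type="submit" value="Search">
    </form>
  </body>
</html>
```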

Open up demo/services/templates/results.html:
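And a results page that loops over whatever the API returned:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Results for {{ year }}</title>
  </head>
  <body>
    <h1>Movies released in {{ year }}</h1>
    <ul>
      {% for movie in movies %}
        <li>{{ movie.title }}</li>
      {% endfor %}
    </ul>
    <a href="/">Search again</a>
  </body>
</html>
```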

And finally edit demo/services/site.py:
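A sketch of the finished site.py; the movie-api hostname relies on the container link defined in the compose file, and the /movies/<year> path matches the API sketch from earlier:

```python
import requests
from flask import Flask, render_template, request

app = Flask(__name__)

# The api container is reachable via its link alias.
API_URL = 'http://movie-api:5000'


@app.route('/')
def index():
    return render_template('index.html')


@app.route('/search', methods=['POST'])
def search():
    year = request.form['year']
    resp = requests.get('{0}/movies/{1}'.format(API_URL, year))
    movies = resp.json().get('movies', [])
    return render_template('results.html', year=year, movies=movies)
```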

If you visit http://192.168.99.100:8081/ in your web browser, you should see the following:

(Screenshot: the movie search form)

Enter in 1993 and you should see the following results:

(Screenshot: search results for movies released in 1993)

At this point that completes our application and this tutorial on docker for developers. To recap, we installed and set up the docker toolbox on your machine. We then demonstrated how to use docker-machine, docker, and docker-compose to build a sample application that uses dynamodb, an api service, and finally a web application to view a sample dataset. You should be familiar now with creating a Dockerfile to build an image, and using compose to orchestrate and run your application. You have defined environment variables, container links, and ports, and even leveraged the ability to map a volume coupled with Flask's ability to reload on file changes to rapidly speed up development.

One thing you may have noticed is that we spent most of the time dealing with application code, and not a whole lot of time working with docker itself. That's kind of the point. One of the greatest strengths of docker is that it simply gets out of your way. It makes it incredibly easy to rapidly iterate and start working on your application code.

While this wraps up my 2 part series on docker for developers, I'll be writing additional posts centered around docker for QA and Operations Engineers. These will focus on testing, CI/CD, production deployments, service discovery, and more.

Docker for Developers Part 1

Summary


Recently I started working on a few projects where docker seemed like a great fit to rapidly speed up development. In one case we wanted to build a prototype service that contained an API endpoint that utilized 4 microservices. The docker landscape is still young, with many of its toolsets less than a year old. While I feel the development side of things is great, the production deployment, auto scaling, and release management story is still lacking. One of the projects I have been following closely is Rancher, which seems to be on track to solve all of these things. This will be a series of posts initially focusing on development and the building of a fully featured sample application demonstrating the power of docker running locally. I will add posts documenting CI with Jenkins, through to a production deploy and management on AWS.

What will we do?

This tutorial is going to walk through the creation of a sample web application that utilizes a sample API service backed by dynamodb. Specifically we will:

  1. Layout the structure of our application and explain why things are laid out like they are.
  2. Build a sample hello world flask app that shows the initial power of docker and docker-compose.
  3. Run a dynamodb container locally for development.
  4. Load some data into that local dynamodb install.
  5. Build a sample flask API that reads from that local dynamodb instance.
  6. Build a sample flask website that reads from the API and returns some basic data.

Setup

1. So to dive right into it, you will need two things installed on your machine to work through this tutorial. Currently everything below assumes you are running on OSX however it should work just fine under Linux as well.

  • Install Virtualbox: https://www.virtualbox.org/wiki/Downloads
  • Install the docker toolbox: https://docs.docker.com/installation/mac/

2. Assuming that you have never used boot2docker before (and if you did, you should be prompted with instructions on how to convert to docker machine), run the following command to set up a default docker machine. This will be a virtual machine where all the various containers that you launch will run. More on that in a minute.
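The command is simply:

```bash
$ docker-machine create --driver virtualbox default
```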

3. You can now run $ eval "$(docker-machine env default)" to set the required environment variables. If you will be launching docker containers often, you might even elect to put this in your bashrc or zshrc file.

4. You should be able to run docker ps and see the following:
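With no containers running yet, you should just get the empty column headers back:

```text
$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
```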


Helpful Commands

This area serves as a quick reference for various commands that may be helpful as a developer working with Docker. A lot of these may not make sense just yet, and thats fine. You will learn more about them below and can always come back to this one spot for reference.
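A few commands worth keeping handy (this is a representative list, not necessarily the post's original one):

```bash
docker-machine ls                  # list your docker machines
docker-machine ip default          # show the IP of the default machine
docker ps -a                       # list running (and stopped) containers
docker images                      # list local images
docker logs -f <container>         # tail a container's logs
docker exec -it <container> bash   # get a shell inside a running container
docker-compose up -d               # build/start everything in the compose files
docker-compose ps                  # list compose-managed containers
docker-compose stop                # stop the compose-managed containers
```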


Building our application

So now that the tooling is set up, we can discuss what our project structure will look like. You can see a completely functional and finished copy of the project on github, and you can grab just the files we create here in part 1.

Initial Skeleton

  1. First create a directory called docker-movie-db-demo somewhere.
  2. Within that directory create two directories. One called movie-api and the other movie-web

It should look like this
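In other words:

```text
docker-movie-db-demo
├── movie-api
└── movie-web
```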

The two directories we created are going to house two separate applications. The first, movie-api, is going to be a simple flask API server that is backed by data in dynamodb. The second application, movie-web, is going to have a simple web interface with a form, and allow the user to list movies from a certain year.

Our first Flask App

Within movie-api go ahead and create a few empty files and directories so that your structure matches the below. We will go through and touch these files one by one.
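The target structure is assumed to look like this (the __init__.py files are empty and just make the directories importable Python packages):

```text
movie-api
├── Dockerfile
├── app.py
├── requirements.txt
└── demo
    ├── __init__.py
    └── services
        ├── __init__.py
        └── api.py
```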

app.py

Open app.py up and let's toss a few lines in.
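A sketch that matches the description below (debug=True is an assumption, and is what gives us Flask's reload-on-change behaviour later on):

```python
import sys

from demo.services.api import app

if __name__ == '__main__':
    # listen on all interfaces, on the port passed as the first argument
    app.run(host='0.0.0.0', port=int(sys.argv[1]), debug=True)
```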

This is pretty basic, but it will initialize a basic flask app from demo/services/api.py and listen on a port that is specified as the first argument when running python app.py 5000.

requirements.txt

Open requirements.txt and add in the following
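Two pinned dependencies are enough (the exact versions here are assumptions; the point is that they are pinned):

```text
flask==0.10.1
boto==2.38.0
```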

This is also pretty straightforward, but we are ensuring we install flask. Boto is installed for interfacing with dynamodb. I'll have to write a separate article on why it's important to pin versions and the headaches that doing so can prevent down the line.

demo/services/api.py

For now let's just add in the following:
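The hello-world version of the service:

```python
from flask import Flask

app = Flask(__name__)


@app.route('/')
def index():
    return 'Hello World!'
```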

We are adding a simple route for the index page of the api service that for now just returns the text “Hello World!”

Dockerfile

The Dockerfile is where the magic happens. Depending on how you are used to doing development, you might create a virtualenv somewhere, or possibly a vagrant image. This certainly works, however you often end up with a bunch of files scattered everywhere, mismatches between your virtualenv and someone else's (if you aren't careful), and/or multiple vagrant images floating around that slow down your machine.

Open the Dockerfile (Note that the Dockerfile should have a capital D) up and paste in the following:
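A sketch that lines up with the description below:

```dockerfile
FROM python:2.7

# copy the movie-api source into the image and make it the working directory
COPY . /code
WORKDIR /code

RUN apt-get update
RUN pip install -r requirements.txt

# run the app, listening on port 5000
CMD ["python", "app.py", "5000"]
```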

When running the build command, this tells the system how to build an image. In this case it will use a python 2.7 base image, copy the CWD (movie-api) to /code in the container, set the working directory to /code, run an apt-get update, pip install our requirements and then finally run our application. If you want more details you can read the dockerfile reference here, which explains what's going on in detail and what's possible.

At this point we have enough of a skeleton to actually build an image and run a container if we wanted to.
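For example (the image name and the 8080 host port are arbitrary choices):

```bash
$ docker build -t movie-api .
$ docker run -d --name movie-api -p 8080:5000 movie-api
$ curl $(docker-machine ip default):8080
Hello World!
$ docker stop movie-api
$ docker rm movie-api
```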

We just built an image, spun up a container based off that image, queried the service and got a response, stopped the service, and deleted the container. However, every time you make a code change you're going to have to rerun the build command and then relaunch your container. If you're doing quick iterative development this can get annoying quickly. There is a better way.


Introducing docker-compose

The docker-compose files are how we orchestrate the building and running of our containers in an easier fashion. We can combine how these files work with Flask's built-in reload on file change to enable rapid iterative development.

docker-compose.override.yml is special. When you run the docker-compose command, it will look for docker-compose.yml and docker-compose.override.yml. If present, it will go ahead and merge them and then perform actions based on that merged data.

We leverage this behavior to build our development environment. If we wanted a test environment for example, we would add a docker-compose.test.yml file, and when running docker-compose target that environment with docker-compose -f docker-compose.yml -f docker-compose.test.yml. However, this is generally only done by build systems, and so we use the override file for development as it keeps the command for developers simpler as they don’t need to specify -f.

docker-compose.yml

Within the root of our project directory (docker-movie-db-demo) lets create a file called docker-compose.yml and make it look like so:
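Matching the description below (v1 compose format assumed):

```yaml
movie-api:
  build: ./movie-api
  restart: always
  expose:
    - "5000"
```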

We have just defined a movie-api service that for now has no image, will always restart on failure, and exposes the container port 5000. You can see the full docker-compose file reference here

As mentioned above, the override file will allow us to override some things in the movie-api base compose definition to make development a little bit faster.

Create and edit a file called docker-compose.override.yml and make it look like so:
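Something like this, mounting the source over /code and publishing the container's port 5000 on host port 8080:

```yaml
movie-api:
  volumes:
    - ./movie-api:/code
  ports:
    - "8080:5000"
```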

If you remember, in our Dockerfile we copy the movie-api files into the image during its build. This is great when you want to make a container that you start shipping around to various environments such as test, staging, and production. However, when you just want to do local development, building that same container every time is time consuming and annoying. With our override file, we have made it so that we mount our code base within the running container. This allows us to use our favorite IDE locally to do development and immediately see those changes reflected. We also have exposed port 5000 on the container and mapped that to port 8080 on our docker-machine. This makes it a little bit easier to debug and test. In production you generally wouldn't do this, and I'll detail more in a separate article focusing on production deployment of this workflow.

Starting our movie-api app.

Now, from the root of our project directory (docker-movie-db-demo) run the following command:
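Just:

```bash
$ docker-compose up -d
```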


You can tail the logs of the container by running:
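For example (on newer compose versions add -f to keep following the output):

```bash
$ docker-compose logs movie-api
```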

So already you can see that starting up the container is simpler. However, things really shine when you start editing code. Fire up vim or your favorite IDE and edit movie-api/demo/services/api.py:
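Change the return string to anything you like, for example:

```python
@app.route('/')
def index():
    return 'Hello World! I was just edited.'
```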

If you kept tailing the logs you will see that it instantly reloaded, and if you run curl 192.168.99.100:8080 again you will see that the output changed.


Wrapping it up

This concludes part 1 of the tutorial. In summary, we laid out the structure for our project, went through how to set up a machine for docker, and built a sample flask application that returns a hello world message. We also walked through how to make changes to the application and test those changes in real time without having to rebuild the image and redeploy the container.

In the next post, I'll focus on adding a local dynamodb instance, how to run one-off data scripts to do things like load data, and building a sample web interface that interacts with the API.

Part 2 Here