Docker Meet WordPress: A production deploy for the average person

A lot of the WordPress Docker blog posts I have encountered skip over some important parts. I have seen posts that run both MySQL and Apache in a single container, as well as posts that ignore volumes entirely, which can lead to data loss. The Docker ecosystem has evolved a lot in just the past few months, and I figured it was worth writing a post showing a more robust way to deploy and manage WordPress with Docker.

This post makes a few sane assumptions, one of which is that you care about your data. The second is that you may want to run other containers with various services on the same host, whether because you want to run multiple WordPress sites or other services entirely. Third, we assume you have some basic Linux administration experience. Finally, this guide assumes that you have either Docker for Mac or Docker Machine, and that you have gone through the basic tutorials here and here.

This guide also assumes that you are deploying on a single host for now.

One thing this guide does not assume is what provider you might be deploying at. This guide should work if you are deploying on a server at home, DigitalOcean, Linode, or AWS.

It is highly recommended that if you deploy to a provider where your host has a direct public IP address, you set up iptables to restrict access to the Traefik admin and Docker ports except from authorized IPs.

You can see a complete example of the code used here.

Building our WordPress Container

The first thing we want to do is create a directory that will store the various configuration files that are required to manage our docker containers.
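For example, matching the docker-wordpress directory referenced later in this post:

    mkdir docker-wordpress
    cd docker-wordpress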

Next, we want to create a directory called wordpress. Within this directory we will create a Dockerfile with the following contents.
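Here is a minimal sketch of what that Dockerfile can look like; the pinned tag and the bcmath extension are only examples, so substitute whatever version and PHP extensions you actually need:

    # Pin to a specific upstream tag rather than :latest
    FROM wordpress:4.9-php7.2-apache

    # Install any additional PHP extensions your site needs
    RUN docker-php-ext-install bcmath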

The reason for the Dockerfile is so that we can install additional PHP extensions. We inherit from the official WordPress Docker image, so our Dockerfile is going to be pretty light. It's also important to pin yourself to a specific source container; you can read more about why here.

Next we need to create our docker-compose.yml file. This should be placed in the docker-wordpress directory. You can see documentation for the compose file and its features here.
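A minimal sketch of the file, matching the points described below:

    version: '2'

    services:
      wordpress:
        build: ./wordpress
        restart: always
        volumes:
          - wordpress-data:/var/www/html
        networks:
          - back-end

    volumes:
      wordpress-data:

    networks:
      back-end: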

The compose file format is pretty simple and easy to read. We have done several things in just a few lines of code.

  1. You can see that we have defined a new service container called wordpress.
  2. We are asking it to build using the Dockerfile inside the wordpress directory that we just created in the previous step.
  3. We have created a volume called wordpress-data, and it is mounted in the container at /var/www/html.
  4. We have established a network for back-end communication between our wordpress container and the database.

Now let's create a docker-compose.override.yml file in the same directory as the docker-compose.yml file.
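A sketch of the override:

    version: '2'

    services:
      wordpress:
        ports:
          - "8080:80"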

We have defined a port mapping from port 8080 on our host machine to port 80 in the container. This is only for development (i.e., running on your local laptop), which is why we placed it in the override file. When invoking docker-compose commands, both docker-compose.yml and docker-compose.override.yml are merged by default.

We can now run our wordpress container by invoking $ docker-compose up -d .

If you run $ docker-compose ps you will see that the container exited and did not start up properly.

We can examine the logs to see why this container exited with $ docker-compose logs wordpress

This error makes sense; we haven't defined the environment variables needed to point WordPress at our MySQL database (which we also haven't created yet). Move on to the next section and we can clear this up.

Building our MySQL Container

We can now update our docker-compose.yml file and include a mysql container.
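A sketch of the updated file; change the placeholder passwords to something unique, as noted below:

    version: '2'

    services:
      wordpress:
        build: ./wordpress
        restart: always
        links:
          - mysql
        environment:
          WORDPRESS_DB_HOST: mysql
          WORDPRESS_DB_NAME: wordpress
          WORDPRESS_DB_USER: wordpress
          WORDPRESS_DB_PASSWORD: changeme
        volumes:
          - wordpress-data:/var/www/html
        networks:
          - back-end

      mysql:
        image: mysql:5.7
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: changeme-root
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: changeme
        volumes:
          - mysql-data:/var/lib/mysql
        networks:
          - back-end

    volumes:
      wordpress-data:
      mysql-data:

    networks:
      back-end: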

We have made several additions to the compose file.

  1. We added a mysql container using the mysql:5.7 image from Docker Hub.
  2. Similar to the wordpress container, we have created a data volume for /var/lib/mysql.
  3. We have associated our mysql container with our back-end network.
  4. We have configured several environment variables on the mysql container. These will cause the container to automatically create a database and a user for us.
  5. We have established a container link from wordpress to the mysql container.
  6. We added environment variables to the wordpress container to point it to mysql.
  7. You should also take the time to change any passwords in this file to be unique.

Testing everything locally

Run $ docker-compose up -d mysql  to start the mysql container first, as it may take a minute to initialize the database when running locally.

Now that MySQL is running, we can start the wordpress container, see that it's running, and tail its logs.
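For example:

    docker-compose up -d wordpress
    docker-compose ps
    docker-compose logs wordpress    # add -f on newer Compose versions to follow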

Using your web browser, browse to localhost:8080 (if running Docker for Mac/Windows or on Linux) or 192.168.99.100:8080 (or whatever $ docker-machine ip shows), and you should see the setup screen.

[Screenshot: WordPress installation screen]

If you walk through and finish the setup we can demonstrate the ability to destroy containers and still have your data persist.

Create a test post after the setup and then run $ docker-compose down  and $ docker-compose up -d  to delete and then recreate the wordpress and mysql containers.

With your browser you can see that your site is still configured and your test post still exists.

Now we can move on to creating a production deployment.

Deploying it to a production instance

For this I am going to use a DigitalOcean droplet; however, I will use the docker-machine generic driver so the same steps work for installing on any host.

This next bit assumes you have a fresh server install somewhere. Run the command below, replacing the IP address, username, SSH key, and machine name with your own. When you invoke this command it will do several things; I highly recommend reading about what it does here.
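A sketch of the command; the IP address, SSH user, key path, and machine name are placeholders to replace with your own:

    docker-machine create \
      --driver generic \
      --generic-ip-address 203.0.113.10 \
      --generic-ssh-user root \
      --generic-ssh-key ~/.ssh/id_rsa \
      wordpress-prod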

Invoke $ docker-machine ls  and you should see the remote host listed.

Now we can deploy our stack to the remote host.
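For example, pointing our shell at the new machine (using the placeholder name from the previous step) and skipping the development override file:

    eval "$(docker-machine env wordpress-prod)"
    docker-compose -f docker-compose.yml up -d mysql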

Now that the mysql container is running, give it a minute or two and then deploy the wordpress container:
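    docker-compose -f docker-compose.yml up -d wordpress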

If you run  $ docker ps  you will see that both containers are now running.

Both services are deployed; however, we cannot access them until we deploy Traefik to route requests to the containers.

Setting up Traefik

Traefik is an amazing lightweight service that allows you to route traffic to the appropriate containers based on simple labels in your docker-compose file. It is also incredibly simple to setup.

First, create a completely separate directory from the wordpress one above. I’ll call mine docker-traefik.

In this directory, create another directory called traefik and within that a Dockerfile with the following contents
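A minimal sketch; the Traefik tag is just an example pin:

    FROM traefik:1.7-alpine

    COPY traefik.toml /etc/traefik/traefik.toml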

In the same directory as the Dockerfile create a new file named traefik.toml with the following:
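A sketch of a Traefik 1.x configuration that matches the rest of this post (HTTP on port 80, the dashboard on 8080, and the Docker provider watching the local socket); the domain value is a placeholder:

    defaultEntryPoints = ["http"]

    [entryPoints]
      [entryPoints.http]
      address = ":80"

    # Admin dashboard
    [web]
    address = ":8080"

    # Discover containers from the local Docker daemon
    [docker]
    endpoint = "unix:///var/run/docker.sock"
    domain = "example.com"
    watch = true
    exposedbydefault = true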

Now in the root of the docker-traefik directory create a docker-compose.yml file with the following:
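Something along these lines; the traefik network it references is created by hand in the next step:

    version: '2'

    services:
      traefik:
        build: ./traefik
        restart: always
        ports:
          - "80:80"
          - "8080:8080"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        networks:
          - traefik

    networks:
      traefik:
        external: true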

Now, before we start this up, we need to create a special network by hand. If you look back at our wordpress compose file, we created a single network called back-end. That network is siloed and only joinable by containers within that project, but we need a network that all web-based containers can join so that Traefik is able to communicate with them and proxy requests. You can read more about this here.

To create the network, simply run $ docker network create traefik. (This network will only function on this host. If you need cross-host functionality, look into using the overlay driver instead of bridge, the default.)
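For example:

    docker network create traefik
    docker network ls | grep traefik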

We can see that the network was created.

We can now start our traefik server.
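From the docker-traefik directory:

    docker-compose up -d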

 

Connecting your application to Traefik

Now that Traefik is deployed, we just need to make a couple of small tweaks to our wordpress app so that Traefik can communicate with our wordpress container.
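A sketch of the relevant changes; blog.example.com is a placeholder for your own hostname, and the labels are standard Traefik 1.x labels:

    version: '2'

    services:
      wordpress:
        # ...build, links, environment, and volumes as before...
        networks:
          - back-end
          - traefik
        labels:
          - "traefik.port=80"
          - "traefik.frontend.rule=Host:blog.example.com"
          - "traefik.docker.network=traefik"

      mysql:
        # ...image, environment, and volumes as before...
        networks:
          - back-end
        labels:
          - "traefik.enable=false"

    volumes:
      wordpress-data:
      mysql-data:

    networks:
      back-end:
      traefik:
        external: true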

We made a few changes to this file:

  1. We created a new network at the bottom called traefik; the difference here is that it references an external network, which is also named traefik.
  2. We configured the wordpress instance to use this new network.
  3. We configured a few container labels:
    1. We disabled Traefik for the mysql instance, as we wouldn't ever want to proxy HTTP requests to it.
    2. We specified the hostname that requests will arrive with for routing to this container.
    3. We specified which network Traefik should use to connect to the container.

If we run $ docker-compose -f docker-compose.yml up -d  it will recreate both mysql and the WordPress host with the new labels.

And if we browse to the admin port for traefik (8080)

[Screenshot: Traefik dashboard]

We can see that the container was discovered and setup.

 

If you browse to the domain you specified in the docker-compose file, you should see the WordPress setup screen.

At this point you are all set up and ready to start working with WordPress.

Adding PHP Libraries / Extensions

If you find you need to install an additional PHP library, you can add it to the Dockerfile for the WordPress image and then run $ docker-compose build followed by $ docker-compose -f docker-compose.yml up -d.

WordPress Upgrades

If you need to update WordPress you can safely follow the One-Click Upgrade Process as any file changes are persisted to the data volume.

Debugging Rails with pry within a Docker container

If you are running your Rails apps in production using Docker, you may be tempted to still run them locally outside of Docker. One of the big reasons you might hold back is the use of pry.

However, it's actually very simple to attach to a running Docker process, which makes using pry a breeze.

 

First, ensure that you have the following lines in your docker-compose file for the service you want to attach to:
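A sketch, assuming your Rails service is named web in the compose file; tty and stdin_open are the two options that matter:

    web:
      # keep STDIN open and allocate a TTY so `docker attach` gives an interactive pry session
      tty: true
      stdin_open: true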

Now, rebuild your container using docker-compose up -d, which should recreate the container if needed.

Now, use docker ps to find the container ID and then docker attach to attach to the running process.
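For example:

    docker ps                      # find the container id of your rails service
    docker attach <container_id>   # attach to the running process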

Now, you can insert a binding.pry in your code somewhere…

 

You can now perform your usual debug commands. When done debugging type exit to leave the pry debug session.

To detach from the container without exiting, press control + p and then control + q. Note that if you hit control + c instead of the escape sequence, the container process will exit.

Building a small form factor pfSense Router

About a year ago, I was looking around at building a pfSense server to replace my Netgear Nighthawk as I was bringing in some new hardware and wanted to create a couple of VLANs and setup some more advanced routing and such.

I did some research and stumbled across the apu1c4, which seemed like it would be perfect for my needs. PC Engines also carries all of the other components, including the case, to build a very small device with a lot of power. Note that the below guide walks through the setup specifically with a machine running OS X 10.10. Instructions for writing the pfSense image from other operating systems can be found here, while instructions for consoling to the device from other operating systems can be found here.

Components

Here is a list of all the components you would need to order:

  • One of either the apu1d (2GB of memory) or the apu1d4 (4GB of memory)
  • One power adapter (note that PC Engines makes adapters for the EU and UK)
  • One case (they do have other colors available; however, black is recommended by the manufacturer for heat reasons)
  • One mSATA SSD drive (16GB or larger depending on your use case, for example storing large amounts of logs)
  • One Null Modem cable
  • One USB-to-serial cable (this one I know works with OS X)

Building the server

  1. Follow this guide here which shows how to install the heat spreader and insert the board into the bottom of the case. (Ensure you first remove the hex screws on the serial port.)
  2. Install the mSATA drive by inserting it into the socket on the board.
  3. Close the case, and screw the hex nuts back in.

Installation

  1. Download a copy of pfSense here. When prompted choose AMD64 for the Architecture, Live CD with Installer (on USB Memstick) for the Platform, and Serial for the console.
  2. Using diskutil list find your usb device. In the below example, I have inserted a 16GB USB drive.
  3. In the above output, you can see that my USB drive is at /dev/disk4. We need to unmount disk4s1 before we can write to the device.
  4. We can now dd our pfSense install image to our thumb drive. Note that instead of /dev/disk4 we are using /dev/rdisk4; in short, /dev/rdisk allows more direct access to the USB device and thus much better performance when writing our image (see the consolidated sketch after this list).
  5. Plugin your USB to Serial Adapter, and connect the serial cable to the adapter and to the serial port on the pfSense box.
  6. Plugin your USB drive that has the pfSense image into the pfSense box.
  7. From terminal run ioreg -c IOSerialBSDClient | grep usb, which should show your USB-to-serial cable connected. If it doesn't, check that you don't need special drivers installed.
  8. The output from the above command should show you an IODialinDevice such as /dev/tty.usbserial
  9. In Terminal, run the following to attach to the console device (see the consolidated sketch after this list).
  10. Connect the power cord to the pfSense box.
  11. After a minute or two the device should boot and you can start configuring the device following this guide here.
  12. Port mapping is from left to right: re0, re1, re2 respectively.
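Here is a consolidated sketch of the disk and console steps above; the device path and image filename are placeholders, so adjust them to match your diskutil output and the file you downloaded:

    diskutil list                                     # find your USB drive
    diskutil unmount /dev/disk4s1                     # unmount it before writing
    sudo dd if=pfSense-memstick-serial-amd64.img of=/dev/rdisk4 bs=1m
    ioreg -c IOSerialBSDClient | grep usb             # confirm the USB-to-serial adapter shows up
    screen /dev/tty.usbserial 115200                  # attach to the console (baud rate may vary)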

Realtime (or not) Traffic Replication with Gor

As operations engineers, we all have various tools that we rely on heavily, whether for load testing, monitoring, code analysis, or deployments. One tool that I wanted to touch on is gor. Gor is an amazing tool which allows you to replay captured traffic either in real time or from capture files. This is incredibly useful when rewriting applications or testing refactored changes; you can spot bottlenecks before you push changes to production.


Realtime Replication

Grab the latest build here and deploy or copy it up to the machine that you want to be the replayer. The replayer will be mainly CPU bound. Once on this machine, you can run it with the following:
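A sketch of the replayer invocation; --input-tcp and --output-http are standard gor flags, but the host-header flag has changed names between releases, and the GOMAXPROCS value is a placeholder, so check your version's help output:

    GOMAXPROCS=4 ./gor \
      --input-tcp :28020 \
      --output-http "http://staging.example.org|10" \
      --output-http-header "Host: example.org"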

This will set up the replayer to listen on port 28020. Any incoming traffic to this port will have its host header rewritten to match example.org and it will then be sent to staging.example.org, limited to 10 requests/second. You could also change the --output-http value to "http://staging.example.org|10%" to send 10% of incoming requests, or remove |10 or |10% entirely to mirror 100% of traffic. For performance reasons, I would set GOMAXPROCS equal to the number of CPUs on this server.

Deploy or copy the latest build to a server that you want to replicate traffic from.
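For example, where replayer.example.org is a placeholder for the machine set up in the previous step:

    sudo ./gor --input-raw :80 --output-tcp replayer.example.org:28020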

This will set up gor to capture traffic from port 80 and send it to the replayer which we started in the previous step.

Once these are both running (always ensure the replayer is running first) you will see requests start to hit your target.

Captured Replication

However, if you want to playback traffic faster than 100%, or for whatever reason don’t want to mirror live production traffic then you can create a capture file on the source machine.
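Something along these lines on the source machine:

    sudo ./gor --input-raw :80 --output-file requests.gor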

Run this for however long you want. It will capture data from port 80 and dump it to requests.gor

Copy this file to our replayer and we can play this back.
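A sketch; the |200% suffix on the input file controls the replay speed:

    ./gor --input-file "requests.gor|200%" --output-http "http://staging.example.org"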

This is very similar to the first time we ran the replayer; however, this time it will use our requests.gor file as a source and replay it back at 200% speed. So if your production system saw 100 requests/second, this should replay it back at 200 requests/second.

Note that there are a lot of other things you can do with gor, and the readme has a lot of great documentation around its use. I have successfully used gor to replicate over 400k requests/minute from one service to a staging service. In that case, I actually had to run multiple replayers.

 

Docker for Developers Part 2

The Saga continues

This is a continuation of my previous post, which was an introduction to Docker geared towards developers. In the previous post, we got somewhat familiar with the Docker ecosystem and built a very simple hello world application. In this part we are going to get DynamoDB running locally, run a one-off script to preload data, build a sample API service that reads from DynamoDB, and finally a sample website that reads from that API service. As with the first part, you can retrieve a copy of the completed code base off GitHub, or directly download the assets for part 2 here.

Running DynamoDB

Before we can really start building our API server, we need a place to store data. As we learned in the previous part, we can use docker-compose files to orchestrate the various services that make up our application. This could be a Postgres or MySQL instance; however, in this case the application is going to leverage DynamoDB for storage. When developing the application it doesn't make sense to create an actual DynamoDB table on AWS, as you would incur costs for something that is only going to be used for development. There are some caveats and limitations to this, which you can read about here. Since we would never want a local DynamoDB container running in any environment other than development, we want to go ahead and edit docker-compose.override.yml with the following:
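A sketch of the addition; the official amazon/dynamodb-local image is used here for illustration, though the image originally used may have differed:

    dynamodb:
      image: amazon/dynamodb-local
      restart: always
      ports:
        - "8000:8000"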

Save that file and run docker-compose up -d, and you should see output showing the dynamodb container running alongside our movie-api instance.

Now that DynamoDB is running, we need to link our API container to it so that they can communicate. Our compose file needs one small change.
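The change is just a link from movie-api to the dynamodb service, for example:

    movie-api:
      # ...existing development overrides...
      links:
        - dynamodb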

Rerun docker-compose up -d which will recreate the movie-api-1 container.

Loading data into DynamoDB

Now that we have DynamoDB running, we can show how to run a one-off command to invoke a script that will seed data into it. First, we can create the directory structure for storing our sample data and our seed scripts. Within the movie-api directory create two new directories, one called scripts and the other data. We can use an Amazon-published sample data set for building this application. You can retrieve it here and extract it into the data directory.

The application directory structure should now look like this:

Now, within the movie-api/scripts directory, let's write a simple script for seeding this data into our local DynamoDB instance. Create a file called seed_movie_data.py and open it for editing.
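Here is a sketch of such a script. It uses boto3 rather than the boto version pinned in part 1, and the table name, key schema, and DYNAMODB_URL variable are assumptions for illustration:

    import decimal
    import json
    import os

    import boto3

    DYNAMODB_URL = os.environ.get('DYNAMODB_URL', 'http://dynamodb:8000')
    TABLE_NAME = 'movies'

    dynamodb = boto3.resource('dynamodb', endpoint_url=DYNAMODB_URL,
                              region_name='us-east-1')


    def ensure_table():
        # Create the table only if it does not already exist
        if TABLE_NAME in [t.name for t in dynamodb.tables.all()]:
            return dynamodb.Table(TABLE_NAME)
        table = dynamodb.create_table(
            TableName=TABLE_NAME,
            KeySchema=[
                {'AttributeName': 'year', 'KeyType': 'HASH'},
                {'AttributeName': 'title', 'KeyType': 'RANGE'},
            ],
            AttributeDefinitions=[
                {'AttributeName': 'year', 'AttributeType': 'N'},
                {'AttributeName': 'title', 'AttributeType': 'S'},
            ],
            ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5},
        )
        table.wait_until_exists()
        return table


    def seed(table):
        # Load the Amazon sample data and batch write it into the table
        with open('data/moviedata.json') as f:
            movies = json.load(f, parse_float=decimal.Decimal)
        with table.batch_writer() as batch:
            for movie in movies:
                batch.put_item(Item={'year': int(movie['year']),
                                     'title': movie['title'],
                                     'info': movie.get('info', {})})


    if __name__ == '__main__':
        seed(ensure_table())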

 

This script is pretty straightforward: it will create the table within DynamoDB if it doesn't exist, and then seed it with a little over 4,000 movies from the JSON file in our data directory.

Before we can run our script, we need to set a few environment variables on our movie-api instance. Go ahead and open up docker-compose.override.yml and adjust it to reflect the following:
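For example; DYNAMODB_URL is a variable name made up for the seed script sketch above, while foo and bar are the throwaway credentials discussed below:

    movie-api:
      # ...existing development overrides...
      environment:
        AWS_ACCESS_KEY_ID: foo
        AWS_SECRET_ACCESS_KEY: bar
        DYNAMODB_URL: http://dynamodb:8000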

The AWS credentials do not need to be real; in fact, leave them as above (foo and bar). They just need to be set to prevent boto from barfing when connecting to the local DynamoDB instance. In a production setting, we would leverage IAM roles on the instance instead, which is far more secure than setting credentials via environment variables.

Once you have created the script, and also set the environment variables we can run the following to recreate the container with the new environment variables and then run the script.
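For example:

    docker-compose up -d
    docker-compose run --rm movie-api python scripts/seed_movie_data.py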

As you can see, we ran the script that we created within the container; we do not need to have a Python environment set up locally. This is great, because the container's environment is isolated from every other application we may be developing (and even from other services within this application). This provides a high degree of certainty that if we deploy this image, it will function as we expect. Furthermore, we also know that if another developer pulls this code down and runs the project, it will work.

Building our API

Now that we have the data in our datastore, we can build a simple lightweight API to expose this information to a client. To keep things simple, we are going to create a single endpoint which returns all the movies that were released in a specified year. Let's go ahead and open up movie-api/demo/services/api.py in our IDE and add a bit of code to make this happen.
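A sketch of what that endpoint could look like, again using boto3; the /movies/<year> route and table name are assumptions that simply need to line up with the seed script above:

    import os

    import boto3
    from boto3.dynamodb.conditions import Key
    from flask import Flask, jsonify

    app = Flask(__name__)

    dynamodb = boto3.resource('dynamodb',
                              endpoint_url=os.environ.get('DYNAMODB_URL', 'http://dynamodb:8000'),
                              region_name='us-east-1')
    movies = dynamodb.Table('movies')


    @app.route('/')
    def index():
        return 'Hello World!'


    @app.route('/movies/<int:year>')
    def movies_by_year(year):
        # Query the hash key so we only get movies released in the requested year
        result = movies.query(KeyConditionExpression=Key('year').eq(year))
        return jsonify(movies=[item['title'] for item in result['Items']])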

Save this and then we can try querying our service with a simple curl command:
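For example, against the port mapping we set up in part 1:

    curl "http://$(docker-machine ip default):8080/movies/1993"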

Building our Website

Now that we have an API that returns the data we need, we can create a simple web interface with a form that will present this data in a nice fashion. There are a few ways to make this happen; normally I'd recommend using something like Angular, however to further demonstrate container linking I will use a separate Flask app.

Within the movie-web directory we need to create our app skeleton. To simplify things, copy the Dockerfile, app.py, and requirements.txt files from movie-api to movie-web. Besides copying those three files, go ahead and create the following directory structure and empty files so that the output of tree matches the below.

In requirements.txt remove the boto reference and add in requests.

Open up app.py and edit the third line to reflect the below

Go ahead and create demo/services/site.py and open it up within your IDE.

Now to get things running we just need to edit our docker-compose.yml and docker-compose.override.yml files.

docker-compose.yml:
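A sketch of the movie-web service to add alongside movie-api; the link makes the API reachable from the site at the hostname movie-api:

    movie-web:
      build: ./movie-web
      restart: always
      expose:
        - "5000"
      links:
        - movie-api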

docker-compose.override.yml:
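And the development overrides, mapping the site to port 8081 as referenced below:

    movie-web:
      volumes:
        - ./movie-web:/code
      ports:
        - "8081:5000"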

With those changes saved, we can run docker-compose up -d, which should launch our container. We can verify connectivity with curl.

Now let's create a couple of templates and build out the endpoints on our web app so that we can perform a simple search.

Open up demo/services/templates/index.html:
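A minimal sketch of the form; the /search action and the year field name are assumptions that match the site.py sketch further down:

    <html>
      <head><title>Movie DB Demo</title></head>
      <body>
        <h1>Search movies by year</h1>
        <form action="/search" method="post">
          <input type="text" name="year" placeholder="1993">
          <input type="submit" value="Search">
        </form>
      </body>
    </html>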

Open up demo/services/templates/results.html:
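And a matching results template that loops over whatever the API returned:

    <html>
      <head><title>Results</title></head>
      <body>
        <h1>Movies released in {{ year }}</h1>
        <ul>
          {% for title in movies %}
            <li>{{ title }}</li>
          {% endfor %}
        </ul>
        <a href="/">Search again</a>
      </body>
    </html>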

And finally edit demo/services/site.py:
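A sketch of the site service, using requests to call the API over the container link; the endpoint path matches the API sketch above, and MOVIE_API_URL is an assumption:

    import os

    import requests
    from flask import Flask, render_template, request

    app = Flask(__name__)

    # The container link makes the API reachable at the hostname movie-api
    MOVIE_API_URL = os.environ.get('MOVIE_API_URL', 'http://movie-api:5000')


    @app.route('/')
    def index():
        return render_template('index.html')


    @app.route('/search', methods=['POST'])
    def search():
        year = request.form['year']
        resp = requests.get('{0}/movies/{1}'.format(MOVIE_API_URL, year))
        return render_template('results.html', year=year,
                               movies=resp.json().get('movies', []))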

If you visit http://192.168.99.100:8081/ in your web browser, you should see the following:

[Screenshot: movie search index page]

Enter in 1993 and you should see the following results:

[Screenshot: movie search results for 1993]

At this point, that completes our application and this tutorial on Docker for developers. To recap, we installed and set up the Docker Toolbox on your machine. We then demonstrated how to use docker-machine, docker, and docker-compose to build a sample application that uses DynamoDB, an API service, and finally a web application to view a sample dataset. You should now be familiar with creating a Dockerfile to build an image, and with using compose to orchestrate and run your application. You have defined environment variables, container links, and ports, and even leveraged the ability to map a volume, coupled with Flask's ability to reload on file changes, to rapidly speed up development.

One thing you may have noticed is that we spent most of the time dealing with application code, and not a whole lot of time working with Docker itself. That's kind of the point. One of the greatest strengths of Docker is that it simply gets out of your way; it makes it incredibly easy to rapidly iterate on your application code.

While this wraps up my two-part series on Docker for developers, I'll be writing additional posts centered around Docker for QA and operations engineers. These will focus on testing, CI/CD, production deployments, service discovery, and more.

Docker for Developers Part 1

Summary

 

Recently I started working on a few projects where Docker seemed like a great fit to rapidly speed up development. In one case we wanted to build a prototype service with an API endpoint that utilized four microservices. The Docker landscape is still green, with many of the toolsets being less than a year old. While I feel the development side of things is great, the production deployment, auto scaling, and release management side is still lacking. One of the projects I have been following closely is Rancher, which seems to be on track to solve all of these things. This will be a series of posts, initially focusing on development and the building of a fully featured sample application demonstrating the power of Docker running locally. I will add posts documenting CI with Jenkins, through to a production deploy and management on AWS.

What will we do?

This tutorial is going to walk through the creation of a sample web application that utilizes a sample API service backed by dynamodb. Specifically we will:

  1. Layout the structure of our application and explain why things are laid out like they are.
  2. Build a sample hello world flask app that shows the initial power of docker and docker-compose.
  3. Run a DynamoDB container locally for development.
  4. Load some data into that local dynamodb install.
  5. Build a sample flask API that reads from that local dynamodb instance.
  6. Build a sample flask website that reads from the API and returns some basic data.

Setup

1. To dive right into it, you will need two things installed on your machine to work through this tutorial. Currently everything below assumes you are running on OS X, however it should work just fine under Linux as well.

  • Install Virtualbox: https://www.virtualbox.org/wiki/Downloads
  • Install the docker toolbox: https://docs.docker.com/installation/mac/

2. Assuming that you have never used boot2docker before (and if you have, you should be prompted with instructions on how to convert to docker-machine), run the following command to set up a default docker machine. This will be a virtual machine where all the various containers you launch will run. More on that in a minute.
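Something like the following, using the VirtualBox driver that ships with the toolbox:

    docker-machine create --driver virtualbox default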

3. You can now run $ eval "$(docker-machine env default)" to set the required environment variables. If you will be launching docker containers often, you might even elect to put this in your bashrc or zshrc file.

4. You should be able to run docker ps and see the following:

 

Helpful Commands

This area serves as a quick reference for various commands that may be helpful as a developer working with Docker. A lot of these may not make sense just yet, and that's fine. You will learn more about them below and can always come back to this one spot for reference.
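As a hedged starting point rather than an exhaustive list, here are the commands I reach for most often:

    docker ps                          # list running containers
    docker ps -a                       # list all containers, including stopped ones
    docker images                      # list local images
    docker logs -f <container>         # follow a container's logs
    docker exec -it <container> bash   # get a shell inside a running container
    docker-machine ip default          # IP address of the default docker machine
    docker-compose up -d               # build/start the services in the compose file
    docker-compose ps                  # status of compose-managed containers
    docker-compose down                # stop and remove compose-managed containers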

 

Building our application

So now that the tooling is set up, we can discuss what our project structure will look like. You can see a completely functional copy of the project below on GitHub, and you can grab just the files we create here in part 1.

Initial Skeleton

  1. First create a directory called docker-movie-db-demo somewhere.
  2. Within that directory create two directories: one called movie-api and the other movie-web.

It should look like this

We created two directories that are going to house two separate applications. The first, movie-api, is going to be a simple Flask API server backed by data in DynamoDB. The second application, movie-web, is going to have a simple web interface with a form and allow the user to list movies from a certain year.

Our first Flask App

Within movie-api go ahead and create a few empty files and directories so that your structure matches the below. We will go through and touch these files one by one.

app.py

Open app.py up and let's toss a few lines in.
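A sketch that matches the description below; debug=True also turns on Flask's reloader, which we lean on later:

    import sys

    from demo.services.api import app

    if __name__ == '__main__':
        # The port is passed as the first argument, e.g. `python app.py 5000`
        app.run(host='0.0.0.0', port=int(sys.argv[1]), debug=True)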

This is pretty basic, but it will initialize a basic flask app from demo/services/api.py and listen on a port that is specified as the first argument when running python app.py 5000.

requirements.txt

Open requirements.txt and add in the following
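Something along these lines; the exact pins are only examples:

    flask==0.10.1
    boto==2.38.0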

This is also pretty straightforward: we are ensuring we install Flask, and Boto is installed for interfacing with DynamoDB. I'll have to write a separate article on why it's important to pin versions and the headaches that can solve down the line.

demo/services/api.py

For now let's just add in the following:
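A minimal hello-world module exposing the app object that app.py imports:

    from flask import Flask

    app = Flask(__name__)


    @app.route('/')
    def index():
        return 'Hello World!'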

We are adding a simple route for the index page of the api service that for now just returns the text “Hello World!”

Dockerfile

The Dockerfile is where the magic happens. Depending on how you are used to doing development, you might create a virtualenv somewhere, or possibly a Vagrant image. This certainly works, however you often end up with a bunch of files scattered everywhere, mismatches between your virtualenv and someone else's (if you aren't careful), and/or multiple Vagrant images floating around that slow down your machine.

Open the Dockerfile (Note that the Dockerfile should have a capital D) up and paste in the following:
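A sketch matching the description that follows:

    FROM python:2.7

    # Copy the application code into the image and work from /code
    COPY . /code
    WORKDIR /code

    RUN apt-get update
    RUN pip install -r requirements.txt

    # Run the API on port 5000
    CMD ["python", "app.py", "5000"]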

When running the build command, this tells the system how to build an image. In this case it will use a Python 2.7 base image, copy the CWD (movie-api) into /code in the container, set the working directory to /code, run an apt-get update, pip install our requirements, and then finally run our application. If you want more details, you can read the Dockerfile reference here, which explains what's going on in detail and what's possible.

At this point we have enough of a skeleton to actually build an image and run a container if we wanted to.
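The manual loop looks roughly like this, run from inside the movie-api directory (the image and container names are placeholders):

    docker build -t movie-api .                               # build the image
    docker run -d --name movie-api -p 8080:5000 movie-api     # run a container from it
    curl "http://$(docker-machine ip default):8080"           # query the service
    docker stop movie-api                                     # stop it
    docker rm movie-api                                       # and remove the container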

We just built an image, spun up a container based off that image, queried the service and got a response, stopped the service, and deleted the container. However, every time you make a code change you're going to have to rerun the build command and then relaunch your container. If you're doing quick iterative development this can get annoying quickly. There is a better way.

 

Introducing docker-compose

The docker-compose files are how we orchestrate the building and running of our containers in an easier fashion. We can leverage how these files work, together with Flask's built-in reload-on-file-change behavior, to enable rapid iterative development.

docker-compose.override.yml is special. When you run the docker-compose command, it will look for docker-compose.yml and docker-compose.override.yml. If present, it will go ahead and merge them and then perform actions based on that merged data.

We leverage this behavior to build our development environment. If we wanted a test environment, for example, we would add a docker-compose.test.yml file, and when running docker-compose, target that environment with docker-compose -f docker-compose.yml -f docker-compose.test.yml. However, this is generally only done by build systems, so we use the override file for development as it keeps the command simpler for developers; they don't need to specify -f.

docker-compose.yml

Within the root of our project directory (docker-movie-db-demo) let's create a file called docker-compose.yml and make it look like so:
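A sketch in the v1 compose format that matches the description below:

    movie-api:
      build: ./movie-api
      restart: always
      expose:
        - "5000"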

We have just defined a movie-api service that, for now, has no image reference, will always restart on failure, and exposes container port 5000. You can see the full docker-compose file reference here.

As mentioned above, the override file will allow us to override some things in the movie-api base compose definition to make development a little bit faster.

Create and edit a file called docker-compose.override.yml and make it look like so:
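And the development overrides described below:

    movie-api:
      volumes:
        - ./movie-api:/code
      ports:
        - "8080:5000"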

If you remember, in our Dockerfile we copy the movie-api files into the image during its build. This is great when you want to make an image that you ship around to various environments such as test, staging, and production. However, when you just want to do local development, building that same image every time is time consuming and annoying. With our override file, we mount our code base within the running container. This allows us to use our favorite IDE locally to do development and immediately see those changes reflected. We also have exposed port 5000 on the container and mapped it to port 8080 on our docker-machine, which makes it a little bit easier to debug and test. In production you generally wouldn't do this, and I'll detail more in a separate article focusing on production deployment of this workflow.

Starting our movie-api app.

Now, from the root of our project directory (docker-movie-db-demo) run the following command:
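That command is simply:

    docker-compose up -d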

 

You can tail the logs of the container by running:
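For example:

    docker-compose logs movie-api    # add -f on newer Compose versions to follow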

So already you can see that starting up the container is simpler. However, things really shine when you start editing code. Fire up vim or your favorite IDE and edit movie-api/demo/services/api.py:

If you kept tailing the logs, you will see that it instantly reloaded, and if you run curl 192.168.99.100:8080 again you will see that the output changed.


Wrapping it up

This concludes part 1 of the tutorial. In summary, we laid out the structure for our project, went through how to set up a machine for Docker, and built a sample Flask application that returns a hello world message. We also walked through how to make changes to the application and test those changes in real time without having to rebuild the image and redeploy the container.

In the next post, i’ll focus on adding a local dynamodb instance, how to run one off data scripts to do things like load data and build a sample web interface that interacts with the API.

Part 2 Here

Software RAID 1 with Xenserver 5.5

If you happen to have the need to run a software RAID 1 setup with your XenServer install, below is a simple step-by-step guide to get this working.

This setup assumes that during the install, you elected not to setup any storage repositories. I generally do that separately.

In the below, pay attention to -- and ' as sometimes these characters get mangled during copy/pastes. Comments are inline.

Reboot, and within the BIOS set your machine to boot off /dev/sdb. Once XenServer starts up, verify that you have booted to the proper disk by running df -h.

You should see something like..

Once verified, run the following to re-add /dev/sda to the array.
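Roughly the following, heavily hedged: the exact partition numbers depend on how the drives were partitioned earlier, so adjust them to match your layout.

    # Copy the partition table from the good drive (sdb) back onto sda
    sfdisk -d /dev/sdb | sfdisk /dev/sda
    # Add the sda partitions back into the arrays (md0 = root, md1 = swap, as described below)
    mdadm --add /dev/md0 /dev/sda1
    mdadm --add /dev/md1 /dev/sda2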

Now if you run mdadm --detail /dev/md0 you can see that it's syncing data from /dev/sdb.

You can point your swap to /dev/md1. Note that ideally you would never use swap. If for whatever reason you regularly plan to use swap space, doing this will impact performance.

also don’t forget to edit /etc/fstab and change it to use /dev/md1

Watch mdadm --detail /dev/md0 until the state is clean, then reboot again and this time boot from sda.

This completes the setup, and you have properly tested that you can boot cleanly from either drive.

Connecting a chromecast to a wireless network that has a captive portal

In September I rented a large house in the Poconos that, to my surprise, required users to go through a captive portal before being able to access the internet. This is certainly common on hotel networks, and these days most consumer routers even offer this functionality, although in my experience it's rare to see it utilized; of the three or four dozen houses we have rented, I've seen it maybe two or three times. Regardless, one thing I love to always have in my backpack is a spare Chromecast, as it's great for streaming media from Plex or Netflix while on the go.

One thing that became immediately apparent was that the Chromecast lacks the ability for the user to provide any type of input, so if you run into a captive portal you're out of luck. However, there are a couple of ways to make this work. First, I would not recommend following the below steps to get your Chromecast working if you are on a hotel network. While it will work (although they may restrict certain kinds of traffic), you open your Chromecast up to every other user connected to the network; most likely someone would start playing something random and screw with you. For that scenario there is still a solution: look into getting a travel router. That would also easily work for the situation described below as well.

However, if you don't want to spend any coin and happen to have a *nix laptop handy, there's an easy way to make this work: temporarily spoof your Chromecast's MAC address on your laptop, auth with the captive portal, turn wifi off, reset your MAC back to its factory setting, and use your phone to join the Chromecast to the network.

I’ll show you how to do this for OS X.

First, record your current MAC address by typing the following:
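For example, assuming your wifi interface is en0:

    ifconfig en0 | grep ether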

Copy ether 3c:15:c2:b8:ad:be to your notepad

In terminal, type the following:
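Substituting the MAC address of your Chromecast, for example:

    sudo ifconfig en0 ether aa:bb:cc:dd:ee:ff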

This will change your MAC address. Then run the following commands
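That is, cycle the wifi interface so it reassociates using the new MAC address:

    networksetup -setairportpower en0 off
    networksetup -setairportpower en0 on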

Now, reconnect to the wifi network, and auth with the captive portal

Once that's done, turn off your wifi card again and use your phone to go through the normal setup to put the Chromecast on the network. This time it should properly connect.

Run the following to reset your MAC address to its original
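Using the original address you copied to your notepad earlier:

    sudo ifconfig en0 ether 3c:15:c2:b8:ad:be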

All done!

Ubuntu 14.04 mdadm raid failure to boot due to failure to mount /root

A few days ago I had a drive fail in my software RAID 1 array. Figuring this would be no big deal, I powered off the server, swapped out the bad drive for a good one, and powered it back on (making sure to tell the BIOS to boot off the second, good drive).

Initially things looked to be loading great; however, I was suddenly dropped into a busybox shell with the following message:

I attempted to mount the array manually but got an error that the device or resource was busy

I booted up with a live CD and could browse and access the data just fine.

Things that didn’t work but put here because it could help someone else:

From a live CD I tried running an fsck which reported back no errors. I forced one just to be safe however had no luck.

I mounted the array to /mnt, chrooted to /mnt, and ran update-initramfs.

I stumbled upon this bug which turned out to be the issue and the suggested solution worked great.

From my live cd and chrooted to the array.

Then rebooting was enough to get the OS to properly boot.

 

Redundant SFTP Servers with AWS: Route53

With AWS I recently ran into a case where I needed an SFTP service that was accessible to the internet but could survive up to two AWS availability zone outages. Sadly, Amazon only allows certain ports to be load balanced using Elastic Load Balancers. Since we could not use any port other than 22, we were forced to look at a few solutions such as HAProxy and some commercial offerings, the latter being very expensive from a licensing and time perspective.

In order to survive two availability zone outages any infrastructure that the SFTP process relies on also needs to be available. Below is a list of the various systems that are required for this whole process to run.

  • Shared Authentication system
  • Shared Filesystem
  • SFTP Server
  • Some way to automatically route traffic between healthy hosts.

I will touch upon each of these in separate blog posts, but for this one I want to discuss the overall architecture of the system.

For centralized authentication you have a few options. One of which would be some sort of master/slave LDAP system that spans availability zones. Another might be using SQLAuthTypes with an RDS instance.

One important thing is to ensure that your SFTP servers share a common file system. This can be accomplished in a myriad of ways, but if you want to go the cheaper route then I recommend GlusterFS. I won't dive into the setup of this too much, as that could be a whole article in itself. However, with AWS you would want three GlusterFS servers, one in each availability zone. You would configure a replica volume (data is replicated across all three GlusterFS servers) and this volume would be mounted on each SFTP server. In the event of a GlusterFS failure, the client would immediately start reading/writing from one of the other Gluster servers.

Another important thing to remember is that you want things to be transparent to the end user. You want to sync the SSH host keys in /etc/ssh across the servers. This way, if a user connects via SFTP and gets routed to a different SFTP server, they won't get any warnings from their client.

What ties all of this together is Route 53. Amazon recently introduced the ability to create health checks within the DNS service that they offer customers. The documentation for configuring health checks is certainly worth reading over.

First, health checks must be external, so let's assume you have three servers with Elastic IPs.

Hostname             Public IP
sftp-1.example.com   57.108.34.22
sftp-2.example.com   57.118.11.90
sftp-3.example.com   57.107.39.93

We first want to configure three TCP health checks for the above IPs that check for port 22.

[Screenshot: Route 53 TCP health checks for the three Elastic IPs]

We then want to add record sets inside of the zone file we have. Make sure that you have the TTL set to either 30 or 60 seconds; in the event an instance goes down, you want that record to be yanked out as quickly as possible. Depending on how you want to configure things (active/active or active/passive) you may want to adjust the Weighted value. If each record has the same weight, then a record will be returned at random. If you weight sftp-1 at 10 and sftp-2/3 at 20, then sftp-1 will always be returned unless it is unavailable, in which case either of the other two will be.

[Screenshot: Route 53 weighted record sets for sftp.example.com]

As you can see, the final configuration shows three servers that will respond for sftp.example.com.

[Screenshot: final Route 53 configuration for sftp.example.com]

Now that we have the above configured, if you run dig you will see that the TTL value is low, and if you stop SFTP on a host then that record will no longer be returned.
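For example:

    dig sftp.example.com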

In my next post in this series, I will discuss glusterFS.