DevOps

Disposable Infrastructure Part 1: Building Your Vagrant Control Node

In this initial portion of the Disposable Infrastructure series we will detail how to create a local control node for your infrastructure. Using Vagrant has the advantage of greatly simplifying the overhead of developer workstation maintenance. The Vagrant VM will be loaded with all the tools necessary for managing both the code and the infrastructure of the entire platform we are building, so the only dependencies required on the workstation are Vagrant itself and a code editor. When new dependencies are added, updating the source will update every user's system.

By the end of this article you will have built a system in which you can create and provision other environments, control your systems, and have a build environment for your various projects.


The Vagrant Project

Vagrant is an invaluable tool, used primarily by software developers for building development environments. While that will be one function of this project when we are done, we will also use it to develop the environment itself.

Installing Vagrant and its dependencies

In order to make this project accessible to a wider audience, we will use VirtualBox as the Vagrant provider in lieu of paid virtualization technologies. If you prefer, you can adjust the Vagrantfiles in this series to support your preferred provider.

Prerequisites

Any modern operating system should be able to manage this infrastructure. Though I will be using OS X, I will try to highlight the subtle differences where there may be problems on other host systems. If you have an issue on another platform such as Windows or Linux, please leave a comment and we will update accordingly if possible.

  1. Install VirtualBox
  2. Install Vagrant
  3. Install Git
  4. Install your favorite editor or IDE

That's it! You're ready to start coding.

Creating your Vagrant project

Now we are at a bit of a 'chicken and egg' scenario. As a best practice, EVERYTHING will be in source control, as far as code is concerned. But unless you're using a hosted source code repository such as github.com or gitlab.com, you will need your own git repository server.

While gitlab.com and github.com are both great platforms, they should NOT be part of your infrastructure if you're attempting to minimize external dependencies. So for the time being, we will keep all the sources we generate local. That's not to say that you cannot or should not use a hosted solution, but consider using it only as a mirror and base your infrastructure on your own infrastructure, so to speak.

We will now begin with initializing the vagrant project, just as the documentation states.

  1. Make a local directory for your infrastructure. For this tutorial we will use ~/infra_template as the project root.
  2. From inside that directory run vagrant init
  3. Initialize the git project in the same directory by running git init
  4. Add the Vagrantfile git add Vagrantfile
  5. Commit the changes git commit -m 'Initial commit'

Vagrantfile customization

We now have a default Vagrantfile with the basics to get started on our infrastructure. The first modification we will make is to simplify the Vagrant provider choice. At the top of the Vagrantfile add this line
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'virtualbox'

just above the
Vagrant.configure(2) do |config|

line. This may be a bit of an anti-pattern for Vagrant, but I find it simplifies things, especially when there are VirtualBox specifics hardwired into your Vagrantfiles. It also removes the need for the additional --provider=virtualbox flag when running vagrant up. Again, see the docs for details.

Defining your control node VM

It may be best to completely remove everything inside the configure block and apply our own customizations. We will start by defining a single VM, which gives us the flexibility to have Vagrant create additional VMs in the future with a single vagrant up command.

Add the following just inside the configure block:

#
# Create the dev VM
#
config.vm.define :dev do |dev|
    # Define the base box and operating system for our server
    dev.vm.box = "ubuntu/trusty64"
    dev.vm.box_url = "https://atlas.hashicorp.com/ubuntu/boxes/trusty64/versions/14.04/providers/virtualbox.box"

    # Configure the network
    dev.vm.network :private_network, ip: "192.168.55.123"
    dev.vm.hostname = "dev.local"
    dev.ssh.forward_agent = true
end

Let's break that down.
First we are telling Vagrant to define one virtual machine, named dev, based on the ubuntu/trusty64 box from the HashiCorp repository. Then we create a private network using a local IP of 192.168.55.123, which avoids any potential port conflicts on the host workstation. You may need to alter that IP if your host workstation is already on the 192.168.55.0 subnet; pick an address that is not in use. Finally, we set a hostname and tell Vagrant to forward our host's SSH agent into the VM. This will become important later, when we need to pull and update source code from a remote git repository.
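To illustrate the flexibility of the single-VM define, adding a second machine later would only take another define block alongside dev. The name, IP, and hostname below are hypothetical example values, not part of this build:

```ruby
# Hypothetical second VM; name, IP, and hostname are example values only
config.vm.define :ci do |ci|
  ci.vm.box = "ubuntu/trusty64"
  ci.vm.network :private_network, ip: "192.168.55.124"
  ci.vm.hostname = "ci.local"
end
```

With two machines defined, a bare vagrant up brings up both, while vagrant up dev targets just the one.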

Custom Virtualbox settings

You may find that the default VM created by Vagrant is rather underpowered and may not be adequate for extensive development, so we will bump up some defaults to give ourselves memory overhead. Also, VirtualBox by default does not allow symlinks in a shared folder; to avoid a future 'gotcha' we will enable that now. Add the following code inside the config.vm.define block that was added in the last section:

# Allow symlinks, set available ram to 2 GB
dev.vm.provider :virtualbox do |vb|
    vb.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/vagrant", "1"]
    vb.customize ["modifyvm", :id, "--memory", "2048"]
end

It will also be much easier to modify the sources inside the VM directly from your machine, so we will create a shared folder to do so. Add the following just below the block we previously added:

# Sync the /vagrant folder on the VM with this project directory
dev.vm.synced_folder ".", "/vagrant"

Initial provisioning of the VM

Remember, the premise of this project is to define our entire infrastructure in code and make everything completely disposable. Our vagrant VM is no exception: at any point we should be able to destroy it by running vagrant destroy and recreate it. This is a critical point. It ensures that any user who pulls down our system has exactly the same VM as we do, and it reduces the risk in major refactoring. We are free to do whatever the heck we want in our own VM as a test without affecting other users. Feel free to apt-get to your heart's content to try out your new thing, but remember to always update the code to make it permanent.

This is accomplished by way of a provisioning file. Personally, I like to keep the small amount of Vagrant-specific provisioning separate from the provisioning tasks of my remote infrastructure. This is done with a provisioning.sh shell script, which we will create in the project root now.

In provisioning.sh, we will script out some basic stuff for our vagrant. Add the following to get started:

#!/bin/bash
# -*- mode: bash -*-
# vi: set ft=bash :

sudo apt-get update

echo "Installing common items and ansible dependencies"
sudo apt-get install curl unzip software-properties-common python-dev python-setuptools -y
sudo easy_install pip

echo "Installing ansible"
sudo apt-add-repository ppa:ansible/ansible -y
sudo apt-get update
sudo apt-get install ansible -y

echo "Installing ansible dependencies"
/usr/local/bin/pip install git+https://github.com/openstack/python-novaclient
/usr/local/bin/pip install docker-py
/usr/local/bin/pip install pyrax

As you can see, the provisioning.sh file simply installs Ansible. We will now instruct Vagrant to use it during the provisioning phase.

In the Vagrantfile add the following at the end of the vm.define block.

    # Initial provisioning for the vagrant VM
    dev.vm.provision :shell, :path => "provisioning.sh"
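
With that final piece in place, the whole Vagrantfile, assembled from the snippets above, should look roughly like this:

```ruby
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'virtualbox'

Vagrant.configure(2) do |config|
  #
  # Create the dev VM
  #
  config.vm.define :dev do |dev|
    # Define the base box and operating system for our server
    dev.vm.box = "ubuntu/trusty64"
    dev.vm.box_url = "https://atlas.hashicorp.com/ubuntu/boxes/trusty64/versions/14.04/providers/virtualbox.box"

    # Configure the network
    dev.vm.network :private_network, ip: "192.168.55.123"
    dev.vm.hostname = "dev.local"
    dev.ssh.forward_agent = true

    # Allow symlinks, set available ram to 2 GB
    dev.vm.provider :virtualbox do |vb|
      vb.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/vagrant", "1"]
      vb.customize ["modifyvm", :id, "--memory", "2048"]
    end

    # Sync the /vagrant folder on the VM with this project directory
    dev.vm.synced_folder ".", "/vagrant"

    # Initial provisioning for the vagrant VM
    dev.vm.provision :shell, :path => "provisioning.sh"
  end
end
```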

Of course, we will need more than Ansible to build our infrastructure. Remember though, that provisioning.sh is just a bootstrap to get Ansible going. We won't need to install Ansible anywhere else (other than maybe on our CI server), so we keep this separated. All other provisioning tasks will be done using Ansible directly, which allows us to share the Ansible roles everywhere in our environment.

Testing the setup

Before we get too far, let's make sure this thing actually runs. From the project root, issue the vagrant up command to start the VM.

If all goes well, Vagrant will start the VM and kick off provisioning. Resolve any errors by fixing syntax mistakes in your Vagrantfile or provisioning.sh. Don't worry if there are errors: running vagrant destroy --force and then vagrant up again will completely wipe out the VM and start over, leaving you with a clean VM to work with. You may be doing this many, many times while building your environment.

Now attempt to log in to the vagrant by running vagrant ssh. You should be dropped into a bash shell inside the VM as the 'vagrant' user. Run ls -la /vagrant and make sure your project files are visible; right now, that's mainly the Vagrantfile and provisioning.sh. Let me know in the comments if you have issues.

At this point you'll want to commit your changes if you haven't already. Remember that everything in code also has a history that can be tracked to any point in time, which has the nice side effect of letting you see a picture of your environment as it was in the past. I'll stop mentioning the source management aspects of this build now, and leave it to you to determine where and when to commit your changes.


Ansible

Ansible was built as an answer to the overly complex and difficult-to-set-up provisioning tools of the past. While it's certainly possible to use Puppet, Salt, Chef, or any other ungoogleable provisioning library for this task, we are going to stick with Ansible here for its simplicity and ease of installation.

Setting up the ansible project

The next step in provisioning is running the Ansible tasks that install the necessary software. Before we get ahead of ourselves though, we need to set up our Ansible project. We will follow the Ansible best practices guide as closely as possible for this step, but you may notice a little deviation here and there to make things more consistent. Go ahead and stop now to read that page thoroughly. You'll want to bookmark it and refer back to it in the future when building out your own Ansible workflow.

We will begin by placing an ansible folder in our project and nesting our Ansible pieces within it. Run this command from your project root to stub out the initial folder structure:

mkdir -p ansible/{bin,etc/ansible,group_vars,inventory,library,playbooks,roles,tests}

I'll explain in detail the use for each folder in a minute.
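For anyone unfamiliar with brace expansion, the mkdir command above is simply shorthand for creating each path explicitly, and you can confirm the resulting layout with find:

```shell
# The brace expansion above is shorthand for creating each path explicitly
mkdir -p ansible/bin ansible/etc/ansible ansible/group_vars ansible/inventory \
         ansible/library ansible/playbooks ansible/roles ansible/tests

# Confirm the layout
find ansible -type d | sort
```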

Inventory files

Inside the ansible/inventory folder, make four empty files named localhost.yml, control.yml, staging.yml, and production.yml.

touch ansible/inventory/{localhost.yml,control.yml,staging.yml,production.yml}


These will be the four environments we manage with Ansible. Each environment may of course consist of multiple hosts, and even have slightly different configurations, but they will all be very similar.

The inventory files define variables specific to the hosts you're controlling and provide connection information for each host.

We will only be working with localhost.yml for now. This is the inventory file for your local vagrant VM; the others are for remote servers.

Make ansible/inventory/localhost.yml look like so (note that despite the .yml extension, this is Ansible's INI-style inventory format, which Ansible accepts regardless of file extension):

[localhost]
127.0.0.1 ansible_connection=local ansible_python_interpreter=python2

[localhost:vars]
env_shortname=localhost

This tells Ansible that the 'localhost' host is available at 127.0.0.1, and further, that the host is local to the Ansible install, so no remote connection is necessary (that's the ansible_connection=local part). And in case you also have Python 3 on your systems, always add ansible_python_interpreter=python2 to your inventory line. This avoids a future Python interpreter headache.

ansible.cfg and default hosts file

Now I would like to show you some tricks for simplifying the Ansible workflow when it's used in this manner. We are going to copy the ansible.cfg that was installed into the project tree so we can version it with everything else. From inside the vagrant ssh session, copy the config file:

cp /etc/ansible/ansible.cfg /vagrant/ansible/etc/ansible/ansible.cfg

You may be asking how Ansible knows to use this config. Ansible, by default, looks in /etc/ansible for its configuration, but this can be overridden on the command line. I'm lazy and can't remember to type that every time, so we are going to delete the original /etc/ansible folder and symlink it to /vagrant/ansible/etc/ansible, hence the long path name.

Add the following lines at the end of your provisioning.sh file:

echo "Removing the default /etc/ansible directory, and symlink it to the vagrant folder"
rm -rf /etc/ansible/
ln -s /vagrant/ansible/etc/ansible/ /etc/ansible

Now that our ansible.cfg is tracked in source (you have it checked in, right?), let's customize it to our install.

At the top of the file you'll find the [defaults] section. Adjust the inventory, library, and roles_path variables to point at the vagrant's paths in our source tree. They should look like this:

...
inventory      = /vagrant/ansible/inventory/
library        = /vagrant/ansible/library/
...
roles_path    = /vagrant/ansible/roles
...

After you have reprovisioned the vagrant, Ansible will use your versioned sources for its config. Let's do that now. On your host machine, from inside the project root, run:

vagrant destroy --force && vagrant up

Once the VM is back up, open a vagrant ssh session and test that Ansible is working.

In this setup we always run Ansible commands from the /vagrant/ansible directory, so change into that folder now and run ansible localhost -m ping. We are expecting a success like so:

127.0.0.1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

If not, check over what you've done and try again.

Your first ansible jobs

Now that we have a working Ansible, at least against our local vagrant, we are going to create a playbook and an Ansible role. Let's start by making a folder for our new role. From inside your vagrant ssh connection:

mkdir -p /vagrant/ansible/roles/standard-tools/tasks

This follows the Ansible best practices guide's folder structure for roles. Now create a main.yml file in the tasks folder you just made, with the following content.

---
- name: "Update apt cache"
  become: yes
  apt:
    update_cache: yes

- name: "Install vim"
  become: yes
  apt:
    name: "vim"
    state: present

- name: "Install curl"
  become: yes
  apt:
    name: "curl"
    state: present

- name: "Install htop"
  become: yes
  apt:
    name: "htop"
    state: present

- name: "Install man"
  become: yes
  apt:
    name: "man"
    state: present

- name: "Install unzip"
  become: yes
  apt:
    name: "unzip"
    state: present
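
The five install tasks above all follow the same pattern, and Ansible lets you collapse repetition like this with a loop. A compact equivalent (same packages, same behavior) would look something like this sketch:

```yaml
---
- name: "Update apt cache"
  become: yes
  apt:
    update_cache: yes

- name: "Install standard tools"
  become: yes
  apt:
    name: "{{ item }}"
    state: present
  with_items:
    - vim
    - curl
    - htop
    - man
    - unzip
```

Keeping the tasks separate, as above, gives clearer per-package output in the play run; the loop form is just terser. Either works.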

I would like all of these things installed on all of my servers. Now I have a role that I can use in all my playbooks to do that. Speaking of playbooks, we are going to need one to execute this role, so let's make that now.

Create a common-provisioning.yml file under the playbooks directory with the following:

#
# common-provisioning.yml
# Loads up common software on all hosts
#
- name: "Install common software"
  hosts: all # All hosts run this playbook
  roles:
    - standard-tools

Test it! From inside your vagrant ssh session (remember, you'll always run ansible from there), in the /vagrant/ansible folder, run:

ansible-playbook -i inventory/localhost.yml playbooks/common-provisioning.yml

If all is well, you'll see no errors, and you should be able to run, for example, htop, which wouldn't have been installed before.

This concludes Part 1 of our Disposable Infrastructure series. I hope you enjoyed this section and I promise you the next section will be a bit shorter.


Andrew Cope
DevOps Lead