Then, clone this repository somewhere and, in a terminal, cd into the directory you cloned the repo into. There:
$ vagrant plugin install vagrant-hosts
You need to do that step only once. On my mac, it failed at first because it didn't find some Ruby gem, but when I just tried again a few minutes later, it worked. No idea whether that was just bad luck.
Then, to build and start all the demo VMs:
$ vagrant up
That will take a bit of time, certainly the first time you run this. When everything is up and running, you have 3 VirtualBox VMs running on your machine: "control", "target1", and "target2". You can ssh to either of them like this:
$ vagrant ssh NAME
The directory you've been in on the host OS (the one with the Vagrantfile) is mapped into the guest VM at /vagrant.
When you're done playing and want to ditch the VMs, do this:
$ vagrant destroy
For anything beyond this, consult the Vagrant documentation.
These are the things I demonstrate in the talk.
$ vagrant ssh control
vagrant@demo-control:~$ sudo apt-get update
vagrant@demo-control:~$ sudo apt-get install -y ansible
Ubuntu 14.04's Ansible version is a bit outdated, but it's recent enough for us.
First, cd to /vagrant, which is the directory containing the cloned repository, mapped into the VM. Then
vagrant@demo-control:/vagrant$ ansible all -i inventory/demo -m ping
You will be asked to accept the host keys now. As usual with SSH, this happens only the first time you connect to each host.
Run a few other commands to get a feel for what's happening.
vagrant@demo-control:/vagrant$ ansible all -i inventory/demo -m shell -a "free -h"
vagrant@demo-control:/vagrant$ ansible target1 -i inventory/demo -m shell -a "free -h"
Ignore the -m and -i things for now, we'll get to them later.
Here's one that spews out a lot of info about target1:
vagrant@demo-control:/vagrant$ ansible target1 -i inventory/demo -m setup
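The file passed via -i is the inventory: a plain list of hosts, optionally organized into groups. The real file for this demo is inventory/demo in the repo; a minimal sketch of what such an inventory might look like (the exact contents here are an illustration, not a copy of the repo's file):

```
# Hypothetical inventory sketch; the actual file is inventory/demo in the repo.
[app_servers]
target1
target2

[load_balancer]
target1
```

Group names like these are what playbooks later refer to when deciding which hosts a play applies to.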
Since Ansible uses SSH under the hood, and you will very likely require root privileges on your target systems, it is important to be able to obtain root privileges one way or another. If you don't specify anything to that effect, Ansible simply tries to log in as the current user:
vagrant@demo-control:/vagrant$ ansible target1 -i inventory/demo -m shell -a "whoami"
There are two principal ways to become root with Ansible: either SSH as root to the target (-u root will do this), or log in as whichever user and use sudo on the target (--sudo). In both cases, you have to be able to do the respective thing without a password. There are options that let Ansible prompt you for a password, but that is really not recommended.
On our Vagrant boxes, passwordless SSH for the root user is not set up, but the 'vagrant' user can sudo without a password. So let's use sudo:
vagrant@demo-control:/vagrant$ ansible target1 -i inventory/demo -m shell -a "whoami" --sudo
Let's do something.
Say we want pip (a Python package manager) on one of the target machines. We tell Ansible that we want the apt package named "python-pip" present:
vagrant@demo-control:/vagrant$ ansible target1 --sudo -i inventory/demo -m apt -a "pkg=python-pip state=present"
Note that we didn't say "install python-pip", but rather "have python-pip installed". It is up to Ansible, more specifically the 'apt' module, to figure out whether anything has to be done or not. Go on, just run the same command again. All you get back now is "changed: false". So Ansible only does something if something needs to be done. That's why you specify what state you want in the end, not how to get there. That last bit is an implementation detail you usually don't concern yourself with!
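The same declarative style carries over into playbooks. A hypothetical task expressing the ad-hoc command above as it might appear in a playbook (the task name is my own wording, not from this repo):

```
# Hypothetical playbook task equivalent to the ad-hoc apt invocation above.
- name: have python-pip installed
  apt: pkg=python-pip state=present
```

Running the playbook twice would behave just like running the ad-hoc command twice: the second run reports no change.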
If you want to break the rules, however, nothing stops you from running "apt-get install -y python-pip" with the "shell" module.
4. On second "concepts" slide: re-usable descriptions of small aspects of the system that you can combine in endless ways
We have a minimal web app in the webapp directory. We want to deploy this to two app servers (target1 and target2), and put them behind a simple load balancer on target1.
The playbook in deploy_and_run_web_app.yml does all of this.
The playbook contains two 'plays', i.e., sequences of tasks and role invocations that are applied to a set of hosts. In the first play, the playbook sets up the web app instances on all hosts in the 'app_servers' group (as defined in the inventory file). For this, we have a role 'runs_web_app'. Its sequence of tasks is (per convention) in tasks/main.yml. Files, templates and other role-specific things can also be found in the role's directory. For this role, we just have one file in its 'files' subdirectory. Check out the Ansible documentation on roles for more details about what you can do with and inside roles.
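To make the role structure concrete, here is a hypothetical tasks/main.yml for a role like 'runs_web_app' (task names, file names and module arguments are illustrative assumptions, not the repo's actual contents):

```
# Hypothetical sketch of roles/runs_web_app/tasks/main.yml.
# File names and destinations are assumptions for illustration.
- name: have the web app code in place
  # src is resolved relative to the role's files/ subdirectory
  copy: src=webapp.py dest=/opt/webapp/webapp.py
```

The point is that everything the role needs (tasks, files, templates) lives under the role's own directory, which is what makes roles reusable across playbooks.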
The second play of the playbook sets up the load balancer on all hosts in the 'load_balancer' group. In our inventory, that group contains only one host. For running a simple load balancer, we have the role 'runs_simple_load_balancer'. This role can be configured a bit through variables that are passed into it on invocation: which hosts and ports the app to be load-balanced runs on. There is a template in the role (the nginx config file snippet) that uses these variables. Variables can also be used nearly anywhere else outside of templates; think of most of the playbook text content as template-able. Nearly everything is passed through the Jinja2 templating engine that comes bundled with Ansible.
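Passing variables into a role on invocation might look roughly like this in a playbook of that Ansible era (variable names and values here are assumptions for illustration, not the repo's actual playbook):

```
# Hypothetical play invoking the load balancer role with variables.
- hosts: load_balancer
  sudo: yes
  roles:
    - role: runs_simple_load_balancer
      app_hosts: "{{ groups['app_servers'] }}"
      app_port: 5000
```

Inside the role's template (e.g. an nginx upstream snippet), these variables would then appear via Jinja2 expressions such as `{{ app_port }}` or a `{% for %}` loop over app_hosts.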