Provisioning AWS EC2 with LinchPin

LinchPin can be used to provision compute instances on Amazon Web Services. If you need to familiarize yourself with EC2, see the AWS EC2 documentation. Now let’s step through the process of creating a new workspace for provisioning EC2.

Fetch

It is possible that you want to access a workspace that already exists. If that workspace exists online, linchpin fetch can be used to clone the repository. For example, the OpenShift on OpenStack example from release 1.7.2 in the linchpin repository can be cloned as follows:

$ linchpin fetch --root docs/source/examples/workspaces openshift-on-openstack --branch 1.7.2 --dest ./fetch-example https://github.com/CentOS-PaaS-SIG/linchpin

You can even choose to fetch only a certain component of the workspace. For example, if you only wish to fetch the topologies, you can add --type topologies, as shown below. If you were able to fetch a complete workspace, you can skip ahead to Up.
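For example, a topologies-only fetch of the same workspace could look like this (the flags are the same as above, with --type added):

$ linchpin fetch --root docs/source/examples/workspaces openshift-on-openstack --branch 1.7.2 --type topologies --dest ./fetch-example https://github.com/CentOS-PaaS-SIG/linchpin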

Initialization

Assuming you are creating a workspace from scratch, you can run linchpin init to initialize one. The following command will create a linchpin.conf, a dummy PinFile, and a README.rst in a directory called “simple”.

$ linchpin init simple

The PinFile contains a single target, called simple, which contains a topology but no layout. A target is a group of related provisioning tasks. Each target has a topology, which can contain many resource groups, and an optional layout. We’ll explain each of those in more detail below.
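Listing the new directory should show the files described above; the exact contents may vary slightly between LinchPin versions:

$ ls simple
linchpin.conf  PinFile  README.rst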

Creating a Topology

Now that we have a PinFile, it’s time to add the code for an AWS EC2 instance. Edit your PinFile so it looks like the one below.

---
simple:
  topology:
    topology_name: simple
    resource_groups:
      - resource_group_name: aws_simple
        resource_group_type: aws
        resource_definitions:
          - name: simple_ec2
            role: aws_ec2
            flavor: m1.small
            count: 1

There are a number of other fields available for the aws_ec2 role. Information about those fields, as well as the other AWS roles, can be found on the AWS provider page.

A resource group is a set of resources related to a single provider. In this example we have an AWS resource group that defines one type of AWS resource. We could also define an OpenStack resource group below it that provisions a handful of OpenStack Server nodes. A single resource group can contain many resource definitions.

A resource definition details the requirements for a specific resource. We could add another resource definition to this topology to create a security group for our EC2 nodes. Multiple resources can be provisioned from a single resource definition by setting the count field, but all non-unique properties of the resources will be identical. So the flavor will be the same, but each node will get a unique name of the form {{ name }}_0, {{ name }}_1, and so on, numbered from 0 up to count - 1. A sketch of both changes appears below.
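For illustration, here is a hedged sketch of how the topology above might be extended with a count of 2 and a second resource definition for a security group. The aws_sg role name and its fields are assumptions here; check the AWS provider page for the exact fields before using them.

---
simple:
  topology:
    topology_name: simple
    resource_groups:
      - resource_group_name: aws_simple
        resource_group_type: aws
        resource_definitions:
          # Two identical instances, named simple_ec2_0 and simple_ec2_1.
          - name: simple_ec2
            role: aws_ec2
            flavor: m1.small
            count: 2
          # Hypothetical security group definition; field names are
          # assumptions and should be verified against the AWS provider page.
          - name: simple_sg
            role: aws_sg
            description: Security group for the simple EC2 nodes
            region: us-east-1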

Credentials

Finally, we need to add credentials to the resource group. AWS offers several ways to supply credentials, and LinchPin supports a number of these methods for use with AWS resources.

One method of providing AWS credentials that LinchPin can load is the INI-style format used by the AWS CLI tool.

Credentials File

An example AWS credentials file might look like this:

$ cat aws.key
[default]
aws_access_key_id=ARYA4IS3THE3NO7FACEB
aws_secret_access_key=0Hy3x899u93G3xXRkeZK444MITtfl668Bobbygls

[herlo_aws1_herlo]
aws_access_key_id=JON6SNOW8HAS7A3WOLF8
aws_secret_access_key=Te4cUl24FtBELL4blowSx9odd0eFp2Aq30+7tHx9

See also

See the providers documentation for provider-specific credentials examples.

To use these credentials, the user must tell LinchPin two things. The first is which credentials to use. The second is where to find the credentials data.

Using Credentials

In the topology, a user can specify credentials. The credentials are described by specifying the filename, then the profile. As shown above, the filename is ‘aws.key’. The user could pick either profile in that file.

---
topology_name: ec2-new
resource_groups:
  - resource_group_name: "aws"
    resource_group_type: "aws"
    resource_definitions:
      - name: demo-day
        flavor: m1.small
        role: aws_ec2
        region: us-east-1
        image: ami-984189e2
        count: 1
    credentials:
      filename: aws.key
      profile: default

The important part of the above topology is the credentials section. Adding a credentials section like this tells LinchPin which file and profile to look up and use.

Credentials Location

By default, credential files are stored in the default_credentials_path, which is ~/.config/linchpin.

Hint

The default_credentials_path value uses the interpolated default_config_path value, and can be overridden in linchpin.conf.

The credentials path (or creds_path) can be overridden in two ways.

It can be passed in when running the linchpin command.

$ linchpin -vvv --creds-path /dir/to/creds up aws-ec2-new

Note

The aws.key file could be placed in the default_credentials_path. In that case passing --creds-path would be redundant.

Or it can be set as an environment variable.

$ export CREDS_PATH=/dir/to/creds
$ linchpin -v up aws-ec2-new

Creating a Layout

LinchPin can use layouts to describe what an Ansible inventory might look like after provisioning. Layouts can include information such as IP addresses, zones, and FQDNs. Under the simple key, put the following data:

---
layout:
  inventory_layout:
    vars:
      hostname: __IP__
    hosts:
      server:
        count: 1
        host_groups:
          - frontend
    host_groups:
      all:
        vars:
          ansible_user: root
        frontend:
          vars:
            ansible_ssh_common_args: -o StrictHostKeyChecking=no

After provisioning the hosts, LinchPin will iterate through each host type in the inventory_layout, pop count hosts off of the list, and add them to the relevant host groups. The host_groups section of the layout is used to set variables for each of the hosts in a given host group. A sketch of the resulting inventory appears below.
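For illustration, an inventory generated from the layout above might look roughly like this sketch. The exact formatting depends on your LinchPin version and configuration, and the address 192.0.2.10 is only a placeholder for the IP substituted into __IP__.

[server]
192.0.2.10 hostname=192.0.2.10

[frontend]
192.0.2.10 hostname=192.0.2.10

[all:vars]
ansible_user=root

[frontend:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'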

Up

Once the resources have been defined, LinchPin can be run as follows:

$ linchpin --workspace . -vv up simple

The --workspace flag references the relevant workspace. By default, the workspace is the current working directory. If the PinFile has a name (or path) other than {{workspace}}/PinFile, the --pinfile flag can override that. Finally, -vv sets a verbosity level of 2. As with Ansible, the verbosity can be set between 0 and 4.
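For example, if the PinFile were saved under a different (hypothetical) name such as PinFile.aws, the same run could be invoked like this:

$ linchpin --workspace . --pinfile PinFile.aws -vv up simple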

If the provisioning was successful, you should see some output at the bottom that looks something like this:

ID: 122
Action: up

Target                  Run ID  uHash   Exit Code
-------------------------------------------------
simple                     1    3a0c59          0

You can use that uHash value to locate the inventory generated according to the layout we discussed above. The file will be named inventories/${target}-${uhash}, but you can change this naming scheme by editing the inventory_file field in the inventory_layout section of the layout. When linchpin up is run, each target will generate its own inventory. The inventories folder and inventory_path can also be set in the evars section of linchpin.conf.
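Assuming the default naming scheme and the uHash from the run above, the generated inventory can be passed straight to Ansible. The exact file name below is a sketch; depending on your configuration it may carry an extension:

$ ansible -i inventories/simple-3a0c59 frontend -m ping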

Destroy

At some point you’ll no longer need the machines you provisioned. You can destroy the provisioned machines with linchpin destroy. However, you may not want to remove every single target from your last provision. For example, let’s say you ran the simple provision above, then ran a few others. You could use the transaction ID, labeled “ID” above, to destroy only the resources from that transaction.

$ linchpin -vv destroy -t 122

You may also have provisioned multiple targets at once. If you only want to destroy one of them, you can do so with the name of the target and the run ID.

$ linchpin -vv destroy -r 1 simple

Journal

Each time you provision or destroy resources with LinchPin, information about the run is stored in the Run Database, or RunDB. Data from the RunDB can be printed using linchpin journal. This allows you to keep track of which resources you have provisioned but haven’t destroyed and gather the transaction and run IDs for those resources. To list each resource by target, simply run:

$ linchpin journal

Target: simple
run_id      action           uhash              rc
--------------------------------------------------
2         destroy          bb8064               0
1              up          bb8064               0

Target: beaker-openstack
run_id      action           uhash              rc
--------------------------------------------------
2         destroy          b1e364               2
1              up          b1e364               2

Target: os-subnet
run_id      action           uhash              rc
--------------------------------------------------
3         destroy          c619ac               0
2              up          c619ac               0
1         destroy          ab9d81               0

As you can see, linchpin printed out the run data for the simple target that we provisioned and destroyed above, but it also printed information for a number of other targets which had been provisioned recently. You can provide a target as an argument to print only that target, or group by transaction ID with the --view tx flag, as shown below. See the linchpin journal documentation for more details.
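For example, both of the forms mentioned above can be run as follows (output omitted here):

$ linchpin journal simple
$ linchpin journal --view tx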