Provisioning Beaker Server with LinchPin

LinchPin can be used to provision compute instances on Beaker. If you need to familiarize yourself with Beaker Server, see the Beaker project documentation. Now let’s step through the process of creating a new workspace for provisioning Beaker.

Fetch

You may want to use a workspace that already exists. If that workspace is available online, linchpin fetch can be used to clone the repository. For example, the OpenShift on OpenStack example from release 1.7.2 in the linchpin repository can be cloned as follows:

$ linchpin fetch --root docs/source/examples/workspaces openshift-on-openstack --branch 1.7.2 --dest ./fetch-example https://github.com/CentOS-PaaS-SIG/linchpin

You can even choose to fetch only a certain component of the workspace. For example, if you only wish to fetch the topologies, you can add --type topologies, as shown below. If you were able to fetch a complete workspace, you can skip ahead to Up.
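
For instance, fetching just the topologies from the same repository might look like this (the only change is the added --type flag):

$ linchpin fetch --root docs/source/examples/workspaces openshift-on-openstack --branch 1.7.2 --type topologies --dest ./fetch-example https://github.com/CentOS-PaaS-SIG/linchpin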

Initialization

Assuming you are creating a workspace from scratch, you can run linchpin init to initialize a workspace. The following command will create a linchpin.conf, a dummy PinFile, and a README.rst in a directory called “simple”:

$ linchpin init simple
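
If the command succeeded, the new workspace should contain at least the three files mentioned above (shown here with the tree command; your LinchPin version may create additional files):

$ tree simple
simple
├── linchpin.conf
├── PinFile
└── README.rst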

The PinFile contains a single target, called simple, which contains a topology but no layout. A target is a group of related provisioning tasks. Each target has a topology, which can contain many resource groups, and an optional layout. We’ll explain each of those in further detail later on.

Creating a Topology

Now that we have a PinFile, it’s time to add the code for a Beaker server. Edit your PinFile so it looks like the one below.

---
simple:
  topology:
    topology_name: simple
    resource_groups:
      - resource_group_name: bkr_simple
        resource_group_type: beaker
        resource_definitions:
          - role: bkr_server
            recipesets:
              - distro: RHEL-7.5
                name: rhelsimple
                arch: x86_64
                variant: Server
                count: 1
                hostrequires:
                  - rawxml: '<key_value key="model" op="=" value="KVM"/>'

There are a number of other fields available for this role. Information about those fields, as well as the other Beaker roles, can be found on the Beaker provider page.

A resource group is a group of resources related to a single provider. In this example we have a Beaker resource group that defines a single Beaker resource. We could also define an AWS resource group below it that provisions a handful of EC2 nodes. A single resource group can contain many resource definitions. A resource definition details the requirements for a specific resource. Multiple resources can be provisioned from a single resource definition by editing the count field, but all non-unique properties of the resources will be identical. So the distro will be the same, but each node will have a unique name. The name will be {{ name }}_0, {{ name }}_1, etc., from 0 to count - 1, as sketched below.
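
For example, bumping count in the recipeset above would provision two identical hosts whose names differ only by suffix (only the changed recipeset is shown):

recipesets:
  - distro: RHEL-7.5
    name: rhelsimple
    arch: x86_64
    variant: Server
    count: 2        # provisions rhelsimple_0 and rhelsimple_1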

Credentials

Finally, we need to add credentials to the resource group.

Beaker provides several ways to authenticate. LinchPin supports the following methods:

  • Kerberos

  • OAuth2

Note

LinchPin doesn’t support the username/password authentication mechanism. It’s also not recommended by the Beaker Project, except for initial setup.
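
With Kerberos, authentication typically happens outside of LinchPin: you obtain a ticket before running linchpin up. A minimal sketch, assuming your Beaker hub is Kerberos-enabled and your principal is user@EXAMPLE.COM (a hypothetical principal):

$ kinit user@EXAMPLE.COM   # prompts for your Kerberos password and caches a ticket
$ klist                    # verify the ticket was granted before provisioning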

Creating a Layout

LinchPin can use layouts to describe what an Ansible inventory might look like after provisioning. Layouts can include information such as IP addresses, zones, and FQDNs. Under the simple key, put the following data:

---
layout:
  inventory_layout:
    vars:
      hostname: __IP__
    hosts:
      server:
        count: 1
        host_groups:
          - frontend
    host_groups:
      all:
        vars:
          ansible_user: root
        frontend:
          vars:
            ansible_ssh_common_args: -o StrictHostKeyChecking=no

After provisioning the hosts, LinchPin will loop through each host type in the inventory_layout, pop count hosts off of the list, and add them to the relevant host groups. The host_groups section of the layout is used to set inventory variables for the hosts in each host group.
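
Given the layout above and a single provisioned host, the generated inventory might look roughly like this (the IP address is hypothetical):

[server]
192.168.100.12 hostname=192.168.100.12

[frontend]
192.168.100.12 hostname=192.168.100.12

[all:vars]
ansible_user=root

[frontend:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'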

Up

Once the resources have been defined, LinchPin can be run as follows:

$ linchpin --workspace . -vv up simple

The --workspace flag references the relevant workspace. By default, the workspace is the current working directory. If the PinFile has a name (or path) other than {{workspace}}/PinFile, the --pinfile flag can override that, as shown below. Finally, -vv sets a verbosity level of 2. As with Ansible, the verbosity can be set between 0 and 4.
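
For example, pointing LinchPin at a differently named PinFile might look like this (MyPinFile is a hypothetical name):

$ linchpin --workspace . --pinfile MyPinFile -vv up simple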

If the provisioning was successful, you should see some output at the bottom that looks something like this:

ID: 122
Action: up

Target                  Run ID  uHash   Exit Code
-------------------------------------------------
simple                     1    3a0c59          0

You can use that uHash value to find the inventory generated according to the layout we discussed above. The file will be titled inventories/${target}-${uhash}, but you can change this naming scheme by editing the inventory_file field in the inventory_layout section of the layout. When linchpin up is run, each target generates its own inventory. The inventories folder and inventory_path can also be set in the evars section of linchpin.conf.
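
For example, given the run above (target simple, uHash 3a0c59), you could point Ansible at the generated inventory directly (the exact file name may vary with your configuration):

$ ansible -i inventories/simple-3a0c59 all -m ping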

Destroy

At some point you’ll no longer need the machines you provisioned. You can destroy the provisioned machines with linchpin destroy. However, you may not want to remove every single target from your last provision. For example, let’s say you ran the simple provision above, then ran a few others. You could use the transaction ID, labeled “ID” above, to destroy only that transaction.

$ linchpin -vv destroy -t 122

You may also have provisioned multiple targets at once. If you only want to destroy one of them, you can do so with the name of the target and the run ID.

$ linchpin -vv destroy -r 1 simple

Journal

Each time you provision or destroy resources with LinchPin, information about the run is stored in the Run Database, or RunDB. Data from the RunDB can be printed using linchpin journal. This allows you to keep track of which resources you have provisioned but haven’t destroyed and gather the transaction and run IDs for those resources. To list each resource by target, simply run:

$ linchpin journal

Target: simple
run_id      action           uhash              rc
--------------------------------------------------
2         destroy          bb8064               0
1              up          bb8064               0

Target: beaker-openstack
run_id      action           uhash              rc
--------------------------------------------------
2         destroy          b1e364               2
1              up          b1e364               2

Target: os-subnet
run_id      action           uhash              rc
--------------------------------------------------
3         destroy          c619ac               0
2              up          c619ac               0
1         destroy          ab9d81               0

As you can see, linchpin printed out the run data for the simple target that we provisioned and destroyed above, but it also printed out information for a number of other targets which had been provisioned recently. You can provide a target as an argument to print only that target. You can also group by transaction ID with the flag --view tx, as shown below. See the linchpin journal documentation for more details.
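
For example, both variations described above:

$ linchpin journal simple     # show run data for the simple target only
$ linchpin journal --view tx  # group journal entries by transaction ID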