Deploying Concourse as BOSH Deployment

May 7, 2018

Concourse is a highly versatile continuous-thing-doer. At mimacom, we use Concourse a lot to automate Cloud Foundry deployments, and naturally, where there is Cloud Foundry there is also BOSH as the underlying deployment automation. Concourse offers several deployment options, one of which is a BOSH deployment. In this blog post we will walk through how to prepare and set up a Concourse BOSH deployment.

By the end of this post you will have a working Concourse server deployed and managed by BOSH.

The deployment of Concourse boils down to these three steps:

  1. Deploy a new BOSH director
  2. Set the cloud config we want to use for the Concourse deployment
  3. Deploy Concourse

Basic Directory Structure

It is best practice to store the deployment scripts and all BOSH ops files in a git repository. We have identified a useful directory layout which we can leverage to store the deployment scripts for all of the three steps:

├── bosh/
│   ├── environment/
│   │   └── creds.yml
│   ├── ops-files/
│   │   └── 10-add-credhub-uaa-client.yml
│   ├── resources/
│   │   ├── bosh-deployment/
│   │   └── local/
│   ├── 001-env.sh
│   └── 010-bosh-create-env.sh
├── ...

The above example stores all files related to the BOSH deployment, i.e. the BOSH director. The directory contains three folders:

  1. environment/ holds environment-specific files such as the generated creds.yml and the deployment state
  2. ops-files/ holds our custom BOSH operations files
  3. resources/ holds external resources such as the bosh-deployment Git submodule, plus a local/ folder for locally maintained files

Next to the three folders, we store all deployment shell scripts, which we will have a closer look at in the following sections.

Deploying the BOSH Director

To deploy a BOSH director we need to use the bosh-deployment BOSH release, which we will integrate as a Git submodule:

git submodule add \
    https://github.com/cloudfoundry/bosh-deployment.git \
    bosh/resources/bosh-deployment
git submodule update --init

This has some great advantages: the exact bosh-deployment version is pinned in our repository, which keeps deployments reproducible, and upgrading to a newer version is as simple as updating the submodule.

Once we have added the submodule we can write the deployment script for the BOSH director:

### bosh/010-bosh-create-env.sh ###

#!/bin/bash

if [ -z "${BOSH_ENV_FILE_LOADED:-}" ]; then
    echo "Please load 001-env.sh:"
    echo "source ./001-env.sh"
    exit 1
fi
set -eu

bosh create-env resources/bosh-deployment/bosh.yml \
    --state=environment/state.json \
    --vars-store=environment/creds.yml \
    -v director_name="$BOSH_DIRECTOR_NAME" \
    -v internal_cidr="$INTERNAL_CIDR" \
    -v internal_gw="$INTERNAL_GW" \
    -v internal_ip="$INTERNAL_IP" \
    -v network_name="$VCENTER_NETWORK_NAME" \
    -o resources/bosh-deployment/vsphere/cpi.yml \
        -v vcenter_dc="$VCENTER_DATACENTER" \
        -v vcenter_ds="$VCENTER_DATASTORE" \
        -v vcenter_ip="$VCENTER_IP" \
        -v vcenter_user="$VCENTER_USER" \
        -v vcenter_password="$VCENTER_PASSWORD" \
        -v vcenter_templates="$VCENTER_FOLDER/templates" \
        -v vcenter_vms="$VCENTER_FOLDER/vms" \
        -v vcenter_disks="$VCENTER_FOLDER/disks" \
        -v vcenter_cluster="$VCENTER_CLUSTER" \
    -o resources/bosh-deployment/uaa.yml \
    -o ops-files/10-add-credhub-uaa-client.yml \
    -o resources/bosh-deployment/credhub.yml \
        -v credhub_encryption_password="$CREDHUB_ENCRYPTION_PASSWORD"

You may use the above deployment script as a template and add ops files and variables as you like. The environment variables used by the script are defined in a separate file called 001-env.sh in order to keep the script reusable (to deploy the same configuration to a different environment, for example).
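
The 001-env.sh file itself is not shown above. A minimal sketch could look as follows; all values are placeholders you need to adapt to your environment, and the variable names simply mirror those used in the deployment script:

```shell
#!/bin/bash
# 001-env.sh -- source this file before running the deployment scripts.
# All values below are placeholders; adapt them to your environment.

export BOSH_DIRECTOR_NAME="concourse-director"
export INTERNAL_CIDR="10.0.0.0/24"
export INTERNAL_GW="10.0.0.1"
export INTERNAL_IP="10.0.0.6"

export VCENTER_NETWORK_NAME="VM Network"
export VCENTER_DATACENTER="dc01"
export VCENTER_DATASTORE="ds01"
export VCENTER_IP="10.0.0.2"
export VCENTER_USER="administrator@vsphere.local"
export VCENTER_PASSWORD="change-me"
export VCENTER_FOLDER="concourse"
export VCENTER_CLUSTER="cluster01"

export CREDHUB_ENCRYPTION_PASSWORD="change-me-too"

# marker checked by the deployment scripts to ensure this file was sourced
export BOSH_ENV_FILE_LOADED=1
```

Load it with source ./001-env.sh so the exported variables are available to the scripts that follow.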

Note the custom operations file 10-add-credhub-uaa-client.yml which adds a UAA user for Concourse to access the BOSH CredHub instance:

# add UAA client for Concourse -> CredHub communication
- type: replace
  path: /instance_groups/name=bosh/jobs/name=uaa/properties/uaa/clients/concourse_to_credhub?
  value:
    override: true
    authorized-grant-types: client_credentials
    scope: ""
    authorities: credhub.read,credhub.write
    access-token-validity: 3600
    secret: ((uaa_clients_concourse_to_credhub))
- type: replace
  path: /variables/-
  value:
    name: uaa_clients_concourse_to_credhub
    type: password

The ops file adds a new UAA client with the authorities credhub.read and credhub.write. Using the BOSH variables feature, a random password is generated for the client.
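
Since the generated password ends up in the --vars-store file, you can extract it later with bosh int; the path matches the variable name declared in the ops file:

```shell
bosh int environment/creds.yml --path /uaa_clients_concourse_to_credhub
```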

Setting the BOSH Cloud Config

Setting the BOSH cloud configuration is a very important part in preparing the Concourse deployment. The cloud config defines the IP address ranges that we can use, the VM and disk sizes, availability zones, and lots more.

### cloud-config/110-update-cloud-config.sh ###

#!/bin/bash
set -eu
bosh update-cloud-config ../bosh/resources/bosh-deployment/vsphere/cloud-config.yml \
    -o ops-files/10-vm-types.yml \
    -v internal_cidr="$INTERNAL_CIDR" \
    -v internal_gw="$INTERNAL_GW" \
    -v internal_ip="$INTERNAL_IP" \
    -v network_name="$VCENTER_NETWORK_NAME" \
    -v vcenter_cluster="$VCENTER_CLUSTER"

The custom operations file is used to customize the VM types and disk sizes available for BOSH deployments. Use the cloud config to vertically scale your Concourse deployment to your liking.

### cloud-config/ops-files/10-vm-types.yml ###

# add more vm types
- type: replace
  path: /vm_types/name=large?
  value:
    name: large
    cloud_properties:
      cpu: 2
      ram: 8192
      disk: 50000

The names and sizings you choose in the cloud config are used in the next section, the actual Concourse deployment.
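
To double-check what the director actually received, you can print the currently active cloud config (the update command itself also shows a diff before applying):

```shell
bosh cloud-config
```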

Deploying Concourse as BOSH Deployment

Concourse ships as a BOSH release, which serves as the foundation for this deployment. As we are going to deploy from the concourse-bosh-deployment repository, we add it as a submodule as well:

git submodule add \
    https://github.com/concourse/concourse-bosh-deployment.git \
    concourse/resources/concourse-bosh-deployment
git submodule update --init

Before we can bosh deploy Concourse, a compatible stemcell first needs to be uploaded to the fresh BOSH director:

### concourse/303-upload-concourse-stemcell.sh ###

#!/bin/bash
bosh upload-stemcell https://s3.amazonaws.com/bosh-core-stemcells/vsphere/bosh-stemcell-3541.12-vsphere-esxi-ubuntu-trusty-go_agent.tgz
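
Once the upload finishes, the stemcell should show up on the director. A quick sanity check (names and versions will differ in your environment):

```shell
# list stemcells known to the director
bosh stemcells
# and show the targeted environment itself
bosh env
```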

Now that we have the stemcell, we can configure the Concourse BOSH deployment in a separate script:

### concourse/310-bosh-deploy-concourse.sh ###

#!/bin/bash
set -eu
bosh deploy -d concourse resources/concourse-bosh-deployment/cluster/concourse.yml \
  -v deployment_name=concourse \
  -v network_name=default \
  -v web_vm_type=default \
  -v db_vm_type=default \
  -v db_persistent_disk_type=default \
  -v worker_vm_type=large \
  -l resources/concourse-bosh-deployment/versions.yml \
  -o ops-files/10-add-credhub-integration.yml \
    -v credhub_url="$CREDHUB_SERVER" \
    --var-file uaa_ca_cert=<(bosh int "$BOSH_CREDS_YML_FILENAME" --path /credhub_ca/ca) \
    --var-file credhub_ca_cert=<(bosh int "$BOSH_CREDS_YML_FILENAME" --path /credhub_tls/ca) \
    -v credhub_client_id="concourse_to_credhub" \
    -v credhub_client_secret="$(bosh int "$BOSH_CREDS_YML_FILENAME" --path /uaa_clients_concourse_to_credhub)" \
  -o ops-files/20-basic-authentication.yml \
    -v concadmin_username="$CONCOURSE_ADMIN_USERNAME" \
  -o ops-files/30-certificates.yml \
    -v concourse_hostname="$CONCOURSE_IP"

The script defines the variables required for BOSH to deploy the manifest. It also adds operations files along with some specific variables.

Note that the bosh int <FILE> command is used to extract values at a YAML path, e.g. /credhub_tls/ca, from a YAML file. Here it extracts the required variables from the creds.yml file obtained while creating the BOSH environment. The <(...) syntax uses Bash process substitution to provide the standard output of the inner command as a file to the outer command. The process substitution ensures that newlines in certificate files are properly retained when passed to BOSH.
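
The newline behaviour is easy to demonstrate without BOSH. This is a generic Bash sketch, not part of the deployment scripts:

```shell
# a fake multi-line "certificate" for demonstration purposes
pem=$'-----BEGIN CERT-----\nABC\n-----END CERT-----'

# <(...) exposes the inner command's output as a readable file,
# so the consumer sees all three lines intact
wc -l < <(printf '%s\n' "$pem")
```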

Adding CredHub integration for Concourse

The following operations file is used to configure Concourse for the CredHub integration. You may refer to the BOSH documentation of the Concourse release for all available properties.

### concourse/ops-files/10-add-credhub-integration.yml ###

- type: replace
  path: /instance_groups/name=web/jobs/name=atc/properties/credhub?
  value:
    url: ((credhub_url))
    tls:
      insecure_skip_verify: true
      ca_cert:
        certificate: ((credhub_ca_cert)) ((uaa_ca_cert))
    client_id: ((credhub_client_id))
    client_secret: ((credhub_client_secret))
    path_prefix: /concourse

The ops file sets the URL to CredHub and supplies the generated credentials for the integration. Note that with the path_prefix property you may customize the path Concourse uses to look up pipeline variables in CredHub.

Authentication for Concourse

To authenticate with Concourse, we generate a secure admin password with the BOSH variables feature, using another ops file:

### concourse/ops-files/20-basic-authentication.yml ###

# Set username and password for basic authentication
- type: replace
  path: /instance_groups/name=web/jobs/name=atc/properties/basic_auth_username?
  value: ((concadmin_username))

- type: replace
  path: /instance_groups/name=web/jobs/name=atc/properties/basic_auth_password?
  value: ((concadmin_password))

#  Generate admin password
- type: replace
  path: /variables?/-
  value:
    name: concadmin_password
    type: password

To retrieve the generated password you may use the CredHub CLI as follows:

credhub get -n /<BOSH_DIRECTOR_NAME>/concourse/concadmin_password

Self-signed Certificate for Concourse

As a last operations file we have added a self-signed certificate to Concourse. Of course, we're using BOSH variables to generate a certificate authority and a certificate from that:

### concourse/ops-files/30-certificates.yml ###

# This ops file generates a CA and SSL certs for the concourse deployment

- type: replace
  path: /instance_groups/name=web/jobs/name=atc/properties/tls_cert?
  value: ((concourse_tls.certificate))

- type: replace
  path: /instance_groups/name=web/jobs/name=atc/properties/tls_key?
  value: ((concourse_tls.private_key))

- type: replace
  path: /variables/-
  value:
    name: concourse_ca
    type: certificate
    options:
      is_ca: true
      common_name: "concourse CA"

- type: replace
  path: /variables/-
  value:
name: concourse_tls
    type: certificate
    options:
      ca: concourse_ca
      common_name: ((concourse_hostname))
      alternative_names: [((concourse_hostname))]

The certificate is valid for the provided concourse_hostname. The BOSH variables feature is very powerful for these kinds of operations and makes generating certificates from an authority a breeze.
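
To verify the certificate Concourse actually serves, you can inspect it with openssl. A sketch, assuming $CONCOURSE_IP from 001-env.sh; the TLS port depends on the tls_bind_port property of your Concourse release version, so adjust accordingly:

```shell
# fetch the served certificate and print its subject, issuer and expiry
openssl s_client -connect "$CONCOURSE_IP:443" -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -enddate
```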

Testing the Concourse Server

Once the Concourse deployment has succeeded, we need to make sure that the server works and that the CredHub integration behaves as expected. In order to do that, we'll create a simple hello-world pipeline that echoes a variable stored in CredHub. Here is the pipeline definition:

### pipelines/hello-world.yml ###

jobs:
- name: hello-world
  plan:
  - task: say-hello
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: ubuntu}
      run:
        path: bash
        args:
        - -c
        - | 
          echo "A secret message from CredHub: ${CREDHUB_VAR}!"
      params:
        CREDHUB_VAR: ((hello_credhub))

We can push the pipeline to Concourse using:

fly -t <TARGET> set-pipeline -p hello-world -c ./hello-world.yml
fly -t <TARGET> unpause-pipeline -p hello-world
fly -t <TARGET> trigger-job -j hello-world/say-hello

Of course, the variable hello_credhub is not set yet, so the build will fail. Now let's set the variable and restart the build:

credhub set -t value -n /concourse/hello-world/hello_credhub -v "Hi, this is CredHub!"
fly -t <TARGET> trigger-job -j hello-world/say-hello

Have a look at the build logs: we should now see the secret message from CredHub! :)

Conclusion

Concourse is a very versatile and powerful continuous-thing-doer, which we use heavily to automate a lot of administration tasks for Cloud Foundry. Being able to easily deploy a Concourse server is very important, as these servers are usually installed close to the BOSH directors of Cloud Foundry installations in a protected environment. With the help of the above scripts you are now able to tailor and deploy a Concourse installation to suit your needs.

About the author: Fabian Keller

Loves cloud technologies, high code quality and craftsmanship. Passionate about woodworking when not facing a screen.