Saturday, July 1, 2017

Scaling Atlassian Bamboo Builds & Branches

Designing a build strategy that is both practical and simple when dealing with hundreds of components is not a trivial task. Maintaining your sanity across a few hundred builds and many branches can be quite challenging. Below is one simple design that allows compilation, testing and high parallelism of jobs, and is extremely simple to set up.


The concept relies on N + 1 builds: N component builds, each capable of running compilation and unit testing independently and producing artifacts. The artifacts should be exposed to Bamboo using the standard "Shared artifact" functionality.

I feel comfortable using develop as the default branch in Bamboo; this plays nicely with feature branches created out of the develop branch.

The extra build is responsible for aggregation: it downloads the artifacts from the N builds and produces a "release". Bamboo respects branch names when downloading artifacts between build plans:

Aggregator master   = sum(component 1..N master)
Aggregator develop  = sum(component 1..N develop)
Aggregator featureX = sum(component 1..N develop) - component 3 develop + component 3 featureX **

** For the example diagram above
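
To make the branch rule concrete, here is a small sketch in plain Python (not the Bamboo API; the component and artifact names are made up): the aggregator takes each component from the matching branch when one exists, otherwise from the default develop branch.

# Toy illustration of the branch-matching rule described above.
def pick_artifacts(components, branch, default_branch="develop"):
    release = {}
    for name, branches in components.items():
        # use the artifact from the matching branch, fall back to develop
        release[name] = branches.get(branch, branches[default_branch])
    return release

# Made-up component/artifact names for the example.
components = {
    "component1": {"develop": "c1-develop.zip", "master": "c1-master.zip"},
    "component2": {"develop": "c2-develop.zip", "master": "c2-master.zip"},
    "component3": {"develop": "c3-develop.zip", "featureX": "c3-featureX.zip"},
}

# Aggregator featureX = component1 develop + component2 develop + component3 featureX
print(pick_artifacts(components, "featureX"))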

The aggregation build has all artifacts at its disposal; doing your performance, integration or governance tests at this stage becomes trivial, and a green build here can mean much more than just "I managed to find installers for the 5 applications downloaded".

A "feature" release can prove quite useful, especially when we talk about complex features. Of course features should be small and easy to test, but sometimes complexity cannot be avoided and it is much nicer to run integration tests, or even deploy the feature fully to some environment.

The setup also works very nicely with Bamboo's automatic branch creation and cleanup: there is no need to create builds or branches on demand, just let the system take care of itself.

Using the setup above you can easily produce releases, tie them to deployment projects and deploy specific branches to environments.

This setup works nicely with independent building blocks that can be mixed and matched to create a customer-centric solution or tailored products targeting a specific need! If you have a platform to maintain, you can tie your aggregation master to production and have the freedom to test develop and feature branches in other environments.


Saturday, April 22, 2017

Adding some automated testing via TMS

A simple automation scenario is to execute test cases in sequence via some external tool like Maven, QTP, etc.

As the main trigger in this sample scenario I am using test execution plans assigned to a special user, "agent1". Once plans are assigned to it, a Python script can search for the issues, execute the tests and transition each execution to Passed or Failed. This also allows you to distribute execution across multiple servers.

Let's get started with the agent preparation: we will need a Python installation and the jira pip package (pip install jira).

The following script is responsible for searching for and executing the tests (it can also be scheduled via cron or systemd, for example):

from jira import JIRA
import json
import logging
import re
import subprocess

FORMAT = '%(asctime)-15s %(message)s'
logging.basicConfig(format=FORMAT)
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Connect as the agent user that the test execution plans are assigned to
jira = JIRA(basic_auth=('agent1', 'agent1'), server='http://192.168.0.1:8080')

# Map custom field names to their ids so we can read "Execution Steps"
allfields = jira.fields()
nameMap = {field['name']: field['id'] for field in allfields}

# Find open test execution plans assigned to the current (agent) user
my_test_plans = jira.search_issues('assignee=currentUser() and issueType="Test Execution Plan" and status=Open')

for issue in my_test_plans:
    logger.info("Test execution plan: " + str(issue))
    inprogid = jira.find_transitionid_by_name(issue, 'In Progress')
    jira.transition_issue(issue, str(inprogid))

    # Each subtask is a test case scheduled for execution
    for subtask in issue.fields.subtasks:
        passid = jira.find_transitionid_by_name(subtask, 'Passed')
        failid = jira.find_transitionid_by_name(subtask, 'Failed')
        subtask_issue = jira.issue(subtask.key)
        steps = getattr(subtask_issue.fields, nameMap['Execution Steps'])
        count = 0
        stop_execution = False
        for step in steps:
            count = count + 1
            logger.info(str(count) + ": " + str(step))
            jsonsteps = json.loads(step)
            # Only steps starting with "execute" are automated
            if str(jsonsteps['step']).startswith("execute"):
                try:
                    ret = subprocess.check_output(
                        str(jsonsteps['step']).replace("execute", ""),
                        stderr=subprocess.STDOUT,
                        shell=True,
                        universal_newlines=True)  # return text, not bytes
                    logger.info("Execution finished ok ... checking output: " + ret)
                    # The expected result field holds a regex for the output
                    pattern = re.compile(str(jsonsteps['expected']))
                    if pattern.match(ret):
                        logger.info("Output looks ok!")
                    else:
                        logger.info("Output looks BAD!")
                        stop_execution = True
                        jira.transition_issue(subtask_issue, str(failid))
                        break
                except Exception:
                    # Non-zero exit codes raise CalledProcessError and fail the test
                    logger.info("Exception occurred")
                    stop_execution = True
                    jira.transition_issue(subtask_issue, str(failid))
                    break
                # Step executed and output matched: mark the test case as passed
                jira.transition_issue(subtask_issue, str(passid))

        if stop_execution:
            break

So how can something like this be used? The whole script relies on two conventions: when a step is automated, its text starts with the word "execute", and the expected results field holds a regular expression that must match the output of the command.


Also, most applications on the market return a non-zero exit code when they encounter an error; the test is failed in that case as well.
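
For illustration, a single entry in the "Execution Steps" field would look roughly like this (the exact field content is an assumption, based on the script above which parses each step as JSON):

# Hypothetical step following the two conventions: the step text starts with
# "execute" and the expected result is a regular expression matched against
# the command output.
example_step = '{"step": "execute echo BUILD OK", "expected": "BUILD OK.*"}'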

The sample script above has quite limited functionality and error handling; there is room for improvement, and it is not intended for production use in its current form, only to demonstrate the capabilities.

Monday, April 17, 2017

Adding new features to TMS addon for Jira, improved reporting

Adding and running test cases are not the only activities a test team needs; easy exploration of stories, test plans and executions is often required.

The issue explorer uses the issue linking feature of Jira to indicate the relationship between stories/requirements and test plans, or from a plan to its executions.

Most links are created automatically when executions are scheduled; the only missing link is story to test plan or epic to test plan. As soon as a Jira issue is linked to a test plan using the "Is verified by" link, the report provides a comprehensive view of the linked tests and their progress.


Quick reference of the links used:

Story "is verified by" a test plan, which has "subtasks" (the test cases), which are "related" to executions.
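
For reference, the same python jira package used in the automation post can also create such a link programmatically. A minimal sketch with made-up issue keys follows; the exact link type name and direction depend on how the link is configured in your Jira instance:

from jira import JIRA

jira = JIRA(basic_auth=('agent1', 'agent1'), server='http://192.168.0.1:8080')

# STORY-1 is the story/requirement, PLAN-7 the test plan (hypothetical keys).
jira.create_issue_link(type='Is verified by',
                       inwardIssue='STORY-1',
                       outwardIssue='PLAN-7')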


Monday, April 10, 2017

A new JIRA addon for the test teams out there

Check out the new test management addon inside JIRA:

https://marketplace.atlassian.com/plugins/com.valens.testmanagement.tmsframework/server/overview

Tuesday, November 22, 2016

Using Custom Deployments with Ansible

For those who are using Ansible and other configuration management tools from Bamboo, the Custom Deployments plugin might come in handy:

https://marketplace.atlassian.com/plugins/com.valens.deployments.bamboo-custom-deployments/server/overview

Sometimes you would like to deploy only to certain machines, sometimes only a particular piece of software.

Bamboo has had variables and custom build plan executions since the beginning, but what about deployments? I often wanted to pass new parameters to a deployment, either to limit the number of hosts affected in a large cluster, to deploy single components, or to change the target of a deployment.

While a regular deployment updates the entire software release, I wanted to use at least two variables in my Ansible tasks:

- HOSTS: all
- TAGS:

and most importantly customize them without having to edit the environment all the time.

The Custom Deployments for Bamboo plugin allows this scenario: assuming you use the regular flows to create a release, you can select a version, fill in your variables and deploy.
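
As an illustration, a deployment task could consume the two variables roughly like this. This is a minimal sketch only: it assumes Bamboo exposes the variables to the task environment as bamboo_HOSTS and bamboo_TAGS, and that the playbook is called site.yml.

import os
import subprocess

# Defaults mirror the variables above: all hosts, no tag filtering.
hosts = os.environ.get("bamboo_HOSTS", "all")
tags = os.environ.get("bamboo_TAGS", "")

cmd = ["ansible-playbook", "site.yml", "--limit", hosts]
if tags:
    cmd += ["--tags", tags]

subprocess.check_call(cmd)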

In terms of security, you can also filter which variables you would like to expose to the teams. Under Bamboo security you can set a regular expression, and only matching variables will be shown.


Thursday, September 15, 2016

Deploy using Bamboo and Ansible, getting started

In order to get started with Ansible and Bamboo (talking about recent versions, which include the deployment management section) there is very little to do; at least, this is what I did:

  • get your Bamboo up and running of course
  • prepare one agent with Ansible (pip install ansible or other installation guides)
  • prepare a build and locate your build artifacts
  • share your artifacts so they are picked up by the release management module of Bamboo
  • create a deployment project
Now that you have your artifacts and can prepare software releases, it is time to move to Ansible:

  • prepare a repository to store your Ansible code, create usual folders for Ansible
  • commit your playbook, roles, vaults, inventories
As a next step we put things together; the release and the Ansible scripts meet at the environment level:

  • checkout the ansible code
  • add a task to download the artifacts in a subfolder called "files"
  • invoke Ansible - any task that copies files to a remote destination will search under "files" and do the shipping for you.

To make life easier you can create a special Ansible role that strips versions from the release files so that roles can reference them more easily; for example, my.software-1.3.rpm could become my.software.rpm.
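
A minimal sketch of that renaming idea (the filename pattern and the "files" folder location are assumptions):

import os
import re

# Strip a trailing "-<version>" just before the extension,
# e.g. my.software-1.3.rpm -> my.software.rpm
def strip_version(filename):
    return re.sub(r'-\d+(\.\d+)*(?=\.[A-Za-z]+$)', '', filename)

for name in os.listdir("files"):
    target = strip_version(name)
    if target != name:
        os.rename(os.path.join("files", name), os.path.join("files", target))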

And you are done; now the Ansible fun starts. Personally I create one Bamboo environment for each inventory file and name them so I can easily trace which Bamboo environment is tied to which inventory.

In case you have sensitive information, use vaults to encrypt files in your Ansible repository; you can keep the vault password in a file directly on the agent.

Sunday, August 7, 2016

Deployments - Ansible and multiple clusters of servers

I have been digging on the internet for a solution to have Ansible variables per cluster of servers (an environment in Bamboo terms); by cluster variables I mean some sort of inventory vars, but a bit easier to maintain.

You just put the code (inventory_vars.py) in the plugins folder - e.g. plugins/vars_plugins (where the plugins folder is at the same level as roles).

The plugin will allow you to have the following structure
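
(Reconstructed from the paths the plugin below builds; the "inventories" folder name and the group/host names are only examples.)

roles/
plugins/
    vars_plugins/
        inventory_vars.py
inventories/
    qa                      <- inventory file
cluster_vars/
    qa/
        group_vars/
            webservers.yml
        host_vars/
            host1.yml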


While the plugin is basically very close to the inventory variables concept, I found the original concept very complicated to maintain due to the inventory file format. The plugin lets you use regular YAML dictionaries instead. The fact that you can have group vars or host vars per cluster is more of a nice-to-have for me.

In order to use the plugin you will need a folder under cluster_vars with exactly the same name as the original inventory file.

E.g. if "qa" is my inventory file name, I place a folder with the same name under cluster_vars. There are a few print statements to help you with debugging in case you run into trouble; if they annoy you, just comment them out.


# (c) 2016, Iulius Hutuleac

import os

from ansible import errors
from ansible.parsing.dataloader import DataLoader
from ansible.utils.vars import merge_hash

import ansible.constants as C

def vars_file_matches(f, name):
    # A vars file matches if either:
    # - the basename of the file equals the value of 'name'
    # - the basename of the file, stripped its extension, equals 'name'
    if os.path.basename(f) == name:
        return True
    elif os.path.basename(f) == '.'.join([name, 'yml']):
        return True
    elif os.path.basename(f) == '.'.join([name, 'yaml']):
        return True
    else:
        return False

def vars_files(vars_dir, name):
    files = []
    try:
        candidates = [os.path.join(vars_dir, f) for f in os.listdir(vars_dir)]
    except OSError:
        return files
    for f in candidates:
        if os.path.isfile(f) and vars_file_matches(f, name):
            files.append(f)
        elif os.path.isdir(f):
            files.extend(vars_files(f, name))

    return sorted(files)

class VarsModule(object):

    def __init__(self, inventory):
        self.inventory = inventory
        self.group_cache = {}

    def get_group_vars(self, group, vault_password=None):
        """ Get group specific variables. """

        inventory = self.inventory
        inventory_name = os.path.basename(inventory.src())

        results = {}

        #basedir = os.getcwd()

        # cluster_vars lives one level above the folder holding the inventory file
        basedir = os.path.join(inventory.basedir(), "..")

        if basedir is None:
            # could happen when inventory is passed in via the API
            return
        inventory_vars_dir = os.path.join(basedir, "cluster_vars", inventory_name, "group_vars")

        inventory_vars_files = vars_files(inventory_vars_dir, group.name)
        print("Files for group ", group.name, ":",  inventory_vars_files)

        if len(inventory_vars_files) > 1:
            raise errors.AnsibleError("Found more than one file for group '%s': %s"
                                      % (group.name, inventory_vars_files))

        dl = DataLoader()

        for path in inventory_vars_files:
            data = dict()
            data.update( dl.load_from_file(path) )
            if type(data) != dict:
                raise errors.AnsibleError("%s must be stored as a dictionary/hash" % path)
            if C.DEFAULT_HASH_BEHAVIOUR == "merge":
                # let data content override results if needed
                results = merge_hash(results, data)
            else:
                results.update(data)

        return results

    def run(self, host, vault_password=None):
        print("Requested files for host ", host)
        return {}


    def get_host_vars(self, host, vault_password=None):
        """ Get host specific variables. """

        inventory = self.inventory
        inventory_name = os.path.basename(inventory.src())

        results = {}

        #basedir = os.getcwd()

        basedir = os.path.join(inventory.basedir(), "..")

        if basedir is None:
            # could happen when inventory is passed in via the API
            return
        inventory_vars_dir = os.path.join(basedir, "cluster_vars", inventory_name, "host_vars")

        inventory_vars_files = vars_files(inventory_vars_dir, host)
        print("Files for host ", host, ":",  inventory_vars_files)

        if len(inventory_vars_files) > 1:
            raise errors.AnsibleError("Found more than one file for host '%s': %s"
                                      % (host, inventory_vars_files))

        dl = DataLoader()

        for path in inventory_vars_files:
            data = dict()
            data.update( dl.load_from_file(path) )
            if type(data) != dict:
                raise errors.AnsibleError("%s must be stored as a dictionary/hash" % path)
            if C.DEFAULT_HASH_BEHAVIOUR == "merge":
                # let data content override results if needed
                results = merge_hash(results, data)
            else:
                results.update(data)

        return results