Tuesday, August 21, 2018

Grafana Buttons and Actions

Viewing data with Grafana is quite simple, but taking some action is not. So here is my way of doing it :)

A simple solution is to use a Text panel and inject some HTML with a button and some JavaScript code.

Below is a sample that uses the $Datasource template variable and performs a GET request on a specific URL ("/"):

<button class="btn navbar-button gf-timepicker-nav-btn" style="width: 100%" onClick="runGame()">Start</button>

<script>
function runGame() {
  // Resolve the datasource selected in the $Datasource template variable
  // and read its URL through Grafana's internal services.
  var t = angular.element('grafana-app').injector().get('templateSrv');
  t.updateTemplateData();
  var url = t.index.Datasource.datasourceSrv.datasources["$Datasource"].url;

  // Perform the GET request and show the response in the panel.
  $.get(url + "/", function(data) {
    $("#output").html(data);
  });
}
</script>

<p id="output"></p>

From this point things should be much easier.

Sunday, March 11, 2018

Template Build and Deployment for Bamboo plugin - new release / breaking changes !!!

Version 3.6.121 brings a few fixes but also a breaking change - please read carefully:

Feature improvements:

- Artifact templating introduces a new concept, artifacts per job: when copying the template artifacts, the job keys are concatenated to the artifact names. This prevents duplicate artifacts in jobs templated in multiple stages. To use the new functionality you must enable it on the "System templates" administration page. Attention! This functionality will initially break artifact downloader tasks; you will need to select the artifact names again! (See the sketch after this list.)
- The build template report will now list artifact definitions as well, making it easier to review the plans
- A new cache has been introduced to maintain a list of templates in the system (some customers with a very large number of jobs had trouble opening the Miscellaneous page of jobs)
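
A rough sketch of the per-job naming idea from the first feature above, purely for illustration; the separator and exact format the plugin uses may differ:

def per_job_artifact_name(job_key, artifact_name):
    # Concatenate the job key to the artifact name so the same template
    # artifact stays unique when the job is templated in several stages.
    return job_key + "-" + artifact_name

print(per_job_artifact_name("BUILD-JOB1", "installer"))  # BUILD-JOB1-installer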

Fixes:

- Bulk template setting was not enabling all templating features

Saturday, July 1, 2017

Scaling Atlassian Bamboo Builds & Branches

Designing a build strategy that is both practical and simple when dealing with hundreds of components is not a trivial task. Maintaining your sanity when dealing with a few hundred builds and many branches can be quite challenging. Here is one simple design that allows compilation, testing, and high parallelism of jobs, and is extremely simple to set up.


The concept relies on N + 1 builds: N component builds, each of them capable of running compilation and unit testing independently and producing artifacts. The artifacts should be exposed to Bamboo using the standard "Shared artifact" functionality.

I feel comfortable using develop as the default branch in Bamboo; this plays nicely with feature branches created from the develop branch.

The extra build is responsible for aggregation: it downloads the artifacts from the N builds and produces a "release". Bamboo respects branch names when downloading artifacts between build plans:

Aggregator master = sum(component 1..N master)
Aggregator develop = sum(component 1..N develop)
Aggregator featureX = sum(component 1..N develop) - component 3 develop + component 3 featureX **

** For the example above, where only component 3 has a featureX branch.
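
The branch selection can be sketched in a few lines (a minimal illustration with hypothetical component and artifact names; Bamboo applies this fallback itself when shared artifacts are downloaded between plans):

def resolve_release(components, branch):
    # For each component, prefer the artifact built from the requested
    # branch and fall back to the default branch (develop) otherwise.
    return {name: builds.get(branch, builds["develop"])
            for name, builds in components.items()}

components = {
    "component1": {"develop": "c1-develop.zip"},
    "component2": {"develop": "c2-develop.zip"},
    "component3": {"develop": "c3-develop.zip", "featureX": "c3-featureX.zip"},
}

print(resolve_release(components, "featureX"))
# component3 comes from featureX, everything else falls back to develop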

The aggregation build has all artifacts at its disposal, so doing your performance, integration, or governance tests at this stage becomes trivial, and a green build here can mean much more than just "I managed to find installers for the 5 applications downloaded".

A "feature" release can prove quite useful, especially when we talk about complex features. Of course features should be small, easy to test but sometimes complexity cannot be avoided and it is much nicer to run integration tests or even deploy it fully to some environment.

The setup also works very nicely with Bamboo's automatic branch creation and cleanup: no need to create builds or branches on demand, just let the system take care of itself.

Using the setup above you can easily produce releases, tie them to deployment projects, and deploy specific branches to specific environments.

This setup works nicely with independent building blocks that can be mixed and matched to create a customer-centric solution or tailored products targeting a specific need! If you have a platform to maintain, you can tie your aggregation master to production and keep the freedom of testing develop and feature branches in other environments.


Saturday, April 22, 2017

Adding some automated testing via TMS

A simple automation scenario is to execute test cases in sequence via some external tool like Maven, QTP, etc.

As the main trigger in this sample scenario I am using test execution plans assigned to a special user, "agent1". Once some plans are assigned to it, a Python script can search for the issues, execute them, and transition each execution to passed or failed. This allows you to distribute execution across multiple servers.

Let's get started with the agent preparation: we will need a Python installation and the jira pip package (pip install jira).

The following script is responsible for the search and execution of tests (it can also be scheduled via cron or systemd, for example):

from jira import JIRA
import json
import logging
import subprocess
import re

FORMAT = '%(asctime)-15s %(message)s'
logging.basicConfig(format=FORMAT)
logger = logging.getLogger()
logger.setLevel(logging.INFO)

jira = JIRA(basic_auth=('agent1', 'agent1'), server='http://192.168.0.1:8080')

# Map custom field names to their internal ids so the
# "Execution Steps" field can be read from the issues.
allfields = jira.fields()
nameMap = {field['name']: field['id'] for field in allfields}

# Pick up the open test execution plans assigned to this agent.
my_test_plans = jira.search_issues('assignee=currentUser() and issueType="Test Execution Plan" and status=Open')

for issue in my_test_plans:
    logger.info("Test execution plan: " + str(issue))
    inprogid = jira.find_transitionid_by_name(issue, 'In Progress')
    jira.transition_issue(issue, str(inprogid))

    subtask_issues = issue.fields.subtasks
    for subtask in subtask_issues:
        passid = jira.find_transitionid_by_name(subtask, 'Passed')
        failid = jira.find_transitionid_by_name(subtask, 'Failed')
        subtask_issue = jira.issue(subtask.key)
        steps = getattr(subtask_issue.fields, nameMap['Execution Steps'])
        count = 0
        stop_execution = False
        for step in steps:
            count = count + 1
            logger.info(str(count) + ": " + str(step))
            jsonsteps = json.loads(step)
            # Convention: automated steps start with the word "execute".
            if str(jsonsteps['step']).startswith("execute"):
                try:
                    ret = subprocess.check_output(
                        str(jsonsteps['step']).replace("execute", ""),
                        stderr=subprocess.STDOUT,
                        shell=True,
                        universal_newlines=True)  # text output, not bytes
                    logger.info("Execution finished ok ... checking output: " + ret)
                    # Convention: the expected result holds a regex that
                    # must match the command output.
                    pattern = re.compile(str(jsonsteps['expected']))
                    if pattern.match(ret):
                        logger.info("Output looks ok !")
                    else:
                        logger.info("Output looks BAD !")
                        stop_execution = True
                        jira.transition_issue(subtask_issue, str(failid))
                        break
                except Exception:
                    # A non-zero exit code raises CalledProcessError and
                    # lands here, failing the execution.
                    logger.info("Exception occurred")
                    stop_execution = True
                    jira.transition_issue(subtask_issue, str(failid))
                    break
                jira.transition_issue(subtask_issue, str(passid))

        if stop_execution:
            break

So how can something like that be used? The whole script actually relies on two conventions: when you have some automation, you start the step with the word "execute", while in the expected-results field you place a regular expression that should match the output of the command.
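
To make the conventions concrete, here is what a single "Execution Steps" entry could look like (a hypothetical example matching the json.loads call in the script above):

import json, re

# Step text starts with "execute"; "expected" holds a regex for the output.
step = '{"step": "execute echo hello", "expected": "hello.*"}'
parsed = json.loads(step)

command = parsed['step'].replace("execute", "")  # -> " echo hello"
print(re.match(parsed['expected'], "hello\n") is not None)  # -> True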


Also, most applications on the market will return a non-zero exit code if they encounter an error; the tests are failed in this case as well.
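
That behaviour comes from subprocess.check_output, which raises CalledProcessError on a non-zero exit code, so the script's except branch marks the execution as failed. A minimal demonstration:

import subprocess

try:
    subprocess.check_output("exit 3", shell=True)
except subprocess.CalledProcessError as err:
    # A failing command ends up here instead of returning its output.
    print("command failed with exit code", err.returncode)  # -> 3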

The sample script above has quite limited functionality, error handling, etc. There is room for improvement, and it is not intended for production use in its current form; it is only meant to demonstrate the capabilities.

Monday, April 17, 2017

Adding new features to TMS addon for Jira, improved reporting

Adding and running test cases are not the only activities a test team needs; easy exploration of stories, test plans, and executions is often required.

The issue explorer uses the issue-linking feature of Jira to indicate the relationship between stories/requirements and test plans, or from a plan to its executions.

Most links are created automatically during the scheduling of executions; the only missing link is story to test plan (or epic to test plan). As soon as a Jira issue is linked to a test plan using the "is verified by" link, the report should provide a comprehensive view of the linked tests and their progress.


Quick reference of the links used:

Story "is verified by" test plan which has "subtasks" - test cases which are "related" to executions.


Monday, April 10, 2017

A new JIRA addon for the test teams out there

Check out the new test management addon inside JIRA:

https://marketplace.atlassian.com/plugins/com.valens.testmanagement.tmsframework/server/overview