Saturday, April 22, 2017

Adding some automated testing via TMS

A simple automation scenario is to execute test cases in sequence via an external tool such as Maven, QTP, etc.

As the main trigger in this sample scenario I am using test execution plans assigned to a special user, "agent1". Once some plans are assigned to it, a Python script can search for the issues, execute them, and transition each execution to Passed or Failed. This also allows you to distribute execution across multiple servers.

Let's get started with the agent preparation: we will need a Python installation and the jira pip package (pip install jira).

The following script is responsible for finding and executing the tests (it can also be scheduled via cron or a systemd timer, for example):

from jira import JIRA
import json
import logging
import re
import subprocess

FORMAT = '%(asctime)-15s %(message)s'
logging.basicConfig(format=FORMAT)
logger = logging.getLogger()
logger.setLevel(logging.INFO)

jira = JIRA(basic_auth=('agent1', 'agent1'), server='http://192.168.0.1:8080')

# Map human-readable field names to their (custom) field ids
allfields = jira.fields()
nameMap = {field['name']: field['id'] for field in allfields}

my_test_plans = jira.search_issues('assignee=currentUser() and issueType="Test Execution Plan" and status=Open')

for issue in my_test_plans:
    logger.info("Test execution plan: " + str(issue))
    inprogid = jira.find_transitionid_by_name(issue, 'In Progress')
    jira.transition_issue(issue, str(inprogid))

    for subtask in issue.fields.subtasks:
        passid = jira.find_transitionid_by_name(subtask, 'Passed')
        failid = jira.find_transitionid_by_name(subtask, 'Failed')
        subtask_issue = jira.issue(subtask.key)
        steps = getattr(subtask_issue.fields, nameMap['Execution Steps'])

        count = 0
        stop_execution = False
        for step in steps:
            count = count + 1
            logger.info(str(count) + ": " + str(step))
            jsonsteps = json.loads(step)
            if str(jsonsteps['step']).startswith("execute"):
                try:
                    # Strip the "execute" keyword and run the rest as a shell command
                    ret = subprocess.check_output(
                        str(jsonsteps['step']).replace("execute", "", 1),
                        stderr=subprocess.STDOUT,
                        shell=True).decode()
                    logger.info("Execution finished ok ... checking output: " + ret)
                    pattern = re.compile(str(jsonsteps['expected']))
                    if pattern.match(ret):
                        logger.info("Output looks ok!")
                    else:
                        logger.info("Output looks BAD!")
                        stop_execution = True
                        jira.transition_issue(subtask_issue, str(failid))
                        break
                except Exception:
                    # A non-zero exit code raises CalledProcessError and lands here
                    logger.info("Exception occurred")
                    stop_execution = True
                    jira.transition_issue(subtask_issue, str(failid))
                    break

        # All steps completed without a mismatch or error: mark the test Passed
        if not stop_execution:
            jira.transition_issue(subtask_issue, str(passid))

        if stop_execution:
            break

So how can something like this be used? The script relies on two conventions: when a step is automated, you start its text with the word "execute" followed by the command, and in the expected results field you place a regular expression that should match the output of the command.
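To illustrate the two conventions in isolation, here is a minimal sketch with a hypothetical step payload (the "execute echo hello" command and "hel.*" pattern are made-up examples, not taken from a real plan) showing how the script extracts the command and matches the output:

```python
import json
import re

# Hypothetical execution step, in the JSON shape the script consumes:
# "step" starts with "execute" followed by the shell command,
# "expected" holds a regex matched against the command's output.
step_json = '{"step": "execute echo hello", "expected": "hel.*"}'

step = json.loads(step_json)
command = step['step'].replace("execute", "", 1).strip()
pattern = re.compile(step['expected'])

print(command)                              # echo hello
print(bool(pattern.match("hello world")))   # True
```

Any step that does not start with "execute" is simply logged and skipped, so manual and automated steps can be mixed in one test.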


Most applications also return a non-zero exit code when they encounter an error; the test is failed in that case as well.
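This works because subprocess.check_output raises CalledProcessError on any non-zero exit status, which the script's except branch turns into a Failed transition. A quick demonstration (using a throwaway "exit 3" shell command as the failing example):

```python
import subprocess

# A non-zero exit status makes check_output raise CalledProcessError
try:
    subprocess.check_output("exit 3", shell=True)
except subprocess.CalledProcessError as e:
    print("command failed with return code", e.returncode)
    # prints: command failed with return code 3
```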

The sample script above has quite limited functionality and error handling; there is plenty of room for improvement, and it is not intended for production use in its current form. It is only meant to demonstrate the capabilities.
