Associate RF tests and their results in Azure DevOps, I finally figured it out!

So, to give you some background: we have test case work items in Azure DevOps (ADO) containing test descriptions (steps) for manual testing. However, we now have a HIL rig (hardware in the loop) where we've started running Robot Framework tests from our pipeline. The HIL tests we've created are based on the test descriptions in the ADO test cases, and even when we create new tests we also create a test case in ADO, since that's where we keep track of our tests. Also, we still have tests that need to be performed manually, so it's nice to have everything in one place.

The last few days I've been trying to figure out a way to link our HIL tests with their corresponding test cases in ADO. That part was fairly easy, but linking the test results/outcomes so that they show up in ADO as passed/failed turned out to be quite a bit trickier. At one point I was ready to give up, concluding that this was only possible with .NET applications developed in Visual Studio, but I managed to figure it out! So I thought I'd describe the steps here in case someone else is interested.

To summarize what I set out to do:

  1. Associate an RF test with a test case (work item) in ADO such that the RF test shows up under the "Associated Automation" tab on the test case
  2. Associate the outcome of an RF test run with the test case in ADO (i.e. set the ADO test case to passed/failed)

Important note: This is how I did it; that certainly doesn't mean it's the only way! I did some extensive googling to come up with this solution, so the credit goes to all the people posting on various forums. This is more of a summary of those posts than something I developed on my own.

ADO setup

This whole thing is based on having an ADO test case work item for each RF test. They don't need to contain any description/steps, but to make things easier down the road I've tagged them with "HIL test".

  1. Create a test plan
  2. Create one or more test configurations
  3. Create a "query-based test suite" that automatically includes test cases with the tag "HIL test"
  4. Assign at least one test configuration to the test suite

You need all three (test plan, configuration and suite with an assigned configuration); how you create them is of course up to you. However, when you assign a configuration to the test suite (which automatically assigns that configuration to the included test cases), ADO generates test points, and these are what allow us to associate the test results later.
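
If you want to sanity-check that ADO actually generated the test points, you can query the Test Points REST endpoint (the same one the script further down uses). Here's a minimal sketch; the organization, project, IDs and token below are placeholders you need to fill in yourself:

import requests

# Placeholders - fill in your own values (the token can be a PAT or $(System.AccessToken))
ORGANIZATION = "<organization>"
PROJECT = "<project>"
TEST_PLAN_ID = 123
TEST_SUITE_ID = 456
TOKEN = "<token>"

url = (
    f"https://dev.azure.com/{ORGANIZATION}/{PROJECT}/_apis/test/"
    f"Plans/{TEST_PLAN_ID}/suites/{TEST_SUITE_ID}/points?api-version=5.0"
)
# Same auth style as the full script below
response = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
response.raise_for_status()

# Each test point references a test case and a configuration
for point in response.json().get("value", []):
    print(point["id"], point["testCase"]["id"], point["configuration"]["name"])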

Associate RF test with ADO test case

We run our RF tests in a pipeline that is triggered when we create a pull request (running just a few "smoke" tests) and that is also scheduled to run every night if there are new commits (running all RF tests). I only wanted to associate the tests and results during the nightly builds, not on every PR.

All of our RF tests follow this format:

<req. no.> Test name
  [Documentation]  Brief description
  ...  <URL to ADO test case>
  [Tags]  <tags> 

  <keywords>

The URL to the ADO test case makes it possible to parse out the ID of the test case, which is crucial for this to work. You don't have to use the whole URL, though.
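
For illustration, here's a small sketch of extracting the ID from the documentation, either from the full URL or from a bare reference such as "ADO: 1234" (the bare-reference convention is just a hypothetical example, not what we use):

import re

def extract_test_case_id(doc: str) -> str | None:
    # Match either a full work item URL or a hypothetical "ADO: <id>" reference
    match = re.search(r"_workitems/edit/(\d+)|ADO[:\s]+(\d+)", doc)
    if not match:
        return None
    return match.group(1) or match.group(2)

print(extract_test_case_id("https://dev.azure.com/org/proj/_workitems/edit/4711"))  # 4711
print(extract_test_case_id("Verified on the HIL rig, ADO: 4711"))  # 4711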

When running the tests, use the -x outcome.xml argument to get the XUnit output file. Now we need to get the ADO test case ID into that XML file, and I did that by creating a "Listener" (AddTestSpecId.py):

"""
This is a listener that adds the test case ID (in Azure) as a property to all test cases in the XUnit output file.
"""
import re
from lxml import etree

class AddTestSpecId:
	ROBOT_LIBRARY_SCOPE = "GLOBAL"
	ROBOT_LISTENER_API_VERSION = 3

	def __init__(self):
		self.spec_ids = {}

	def end_test(self, test, _):
		"""
		Extracts the test case ID from the link in the documentation.
		"""
		spec_id = re.search(
			r"https://dev.azure.com/<organisation>/<project>/_workitems/edit/(\d+)",
			test.doc,
			re.MULTILINE,
		)
		if spec_id:
			self.spec_ids[test.name] = spec_id.group(1)
		else:
			print(f"No spec ID found in documentation for test {test.name}")

	def xunit_file(self, path):
		"""
		Adds a property to each test case in the XUnit file with name 'test_spec_id' and value
		set to the test case ID.
		"""
		content = etree.parse(path)
		test_cases = content.findall(".//testcase")
		for test_case in test_cases:
			test_name = test_case.get("name")
			if test_name in self.spec_ids:
				spec_id = self.spec_ids[test_name]
				properties = test_case.find("properties")
				if properties is None:
					properties = etree.SubElement(test_case, "properties")
				etree.SubElement(
					properties, "property", name="test_spec_id", value=spec_id
				)
		content.write(path, xml_declaration=True, encoding="utf-8")

Add --listener <path to script> to the list of arguments.
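
If you happen to start Robot Framework from Python instead of the command line, the same two options can be passed to robot.run. A minimal sketch (the paths are the ones from our pipeline further down, adapt them to your setup):

from robot import run

# Programmatic equivalent of: robot -x outcome.xml --listener AddTestSpecId.py <tests>
run(
    "HIL-rig/tests",
    xunit="outcome.xml",
    listener="HIL-rig/python/AddTestSpecId.py",
)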

Normally one would use the PublishTestResults@2 task to publish the results, and that's still what we do for our CI builds (triggered by pull requests). For the scheduled nightly builds we instead run the script below, which does a number of things and takes the following arguments:

  • --xunit-file The path to the XUnit file
  • --build-id The build ID so that we can publish the results for the correct build
  • --pat We use the $(System.AccessToken) in our pipeline
  • --organization The ADO organisation (as in https://dev.azure.com/<organization>)
  • --project The ADO project (as in https://dev.azure.com/<organization>/<project>)
  • --configuration The name of the test configuration to associate the test results to
  • --test-plan-id The ID of the ADO test plan
  • --test-suite-id The ID of the ADO test suite

The script does the following:

  1. Parse the XUnit file to get the ADO test case ID, outcome, comment (if failed) and duration, and to build a "fully qualified name" and the "test storage" (for ADO)
  2. Associate each RF test with its corresponding ADO test case (if not already associated). This step includes creating a unique UUID/GUID that is used as an identifier for the RF test in ADO.
  3. Get the test configuration ID
  4. Get the ADO test points for the given test plan, test suite and configuration
  5. Create an ADO test run linked to the build and test plan. By also providing a list of all the test point IDs, the run is "pre-populated" with test result "stubs" for the test cases that we have in the test suite (and that are associated with the configuration)
  6. Get those test result "stubs"
  7. Update them with outcome, duration, comment, priority and state
  8. Set the test run state to "Completed"

And here’s the script (associate_tests.py):

"""
This script associates HIL tests with their corresponding Azure DevOps test cases as well
as publishes the test results such that they are registered in the Test Suite.

The process involves the following steps:
1. Parse the XUnit file that was generated during the HIL test run
2. Associate each HIL test with its Azure test case work item, if it is not already associated
3. Get the Azure test points associated with the given test plan, test suite and test configuration
4. Create a new Azure test run, referencing the build ID, test plan and test points
5. Update the test run with the results. The test results will be linked to the Azure test case through
   the test points.
6. Complete the test run

This requires that a Test Suite has been created in Azure DevOps with the test cases that are to be
associated with the HIL tests. It also requires that the test cases have been assigned one or more
configurations.
"""

import argparse
import sys
from lxml import etree
import requests
import json
import uuid
import os
from dataclasses import dataclass
from typing import List

BASE_URL = "https://dev.azure.com"


@dataclass
class XUnitData:
    test_case_id: int
    fully_qualified_name: str
    test_storage: str
    duration_ms: int
    outcome: str
    comment: str


@dataclass
class TestCaseData:
    title: str
    url: str
    automatedTestId: str
    priority: int


@dataclass
class TestResultData:
    test_name: str
    fully_qualified_name: str
    test_storage: str
    test_case_data: TestCaseData


@dataclass
class ShallowReference:
    id: str
    name: str
    url: str


@dataclass
class TestPoint:
    id: str
    test_case_id: int


def parse_xunit_file(xunit_file_path: str) -> dict[str, XUnitData]:
    """
    Parse the XUnit file and extract test names with their properties,
    fully qualified test names and test storage path.
    """
    test_associations: dict[str, XUnitData] = {}

    try:
        tree = etree.parse(xunit_file_path)
        test_cases = tree.findall(".//testcase")

        for test_case in test_cases:
            test_name = str(test_case.get("name"))
            class_name = test_case.get("classname", "")
            test_storage = class_name

            # Create fully qualified test name (classname.name)
            fully_qualified_name = str(
                f"{class_name}.{test_name}" if class_name else test_name
            )

            # Parse outcome
            outcome = "Passed"
            comment = ""
            failure = test_case.find("failure")
            if failure is not None:
                outcome = "Failed"
                comment = failure.attrib.get("message", "")
                comment = (
                    comment.replace("\r", "").replace("\n", " ").replace("&#10;", " ")
                )

            # Parse duration in milliseconds
            duration_sec = float(test_case.get("time", "0"))
            duration_ms = int(duration_sec * 1000)

            properties = test_case.find("properties")

            if properties is not None:
                test_case_id = 0

                for prop in properties.findall("property"):
                    prop_name = prop.get("name")
                    # Property added by the AddTestSpecId listener
                    if prop_name == "test_spec_id":
                        test_case_id = prop.get("value")

                if test_case_id:
                    test_associations[test_name] = XUnitData(
                        int(test_case_id),
                        fully_qualified_name,
                        test_storage,
                        duration_ms,
                        outcome,
                        comment,
                    )
                    print(f"Found: {test_name} -> Test Case {test_case_id}")
                    print(f"  Fully qualified: {fully_qualified_name}")
                    print(f"  Test storage: {test_storage}")
                    print(f"  Duration (ms): {duration_ms}")
                    print(f"  Outcome: {outcome}")

        return test_associations
    except Exception as e:
        print(f"Error parsing XUnit file: {e}")
        return {}


def get_test_points(
    organization: str,
    project: str,
    pat: str,
    xunit_data: dict[str, XUnitData],
    test_plan_id: int,
    test_suite_id: int,
    configuration_id: str,
) -> List[TestPoint]:
    """
    Get the test points for the HIL tests.
    """
    api_url = f"{BASE_URL}/{organization}/{project}/_apis/test/Plans/{test_plan_id}/suites/{test_suite_id}/points?api-version=5.0"

    headers = {"Content-Type": "application/json", "Authorization": f"Bearer {pat}"}

    relevant_test_case_ids = [x.test_case_id for x in xunit_data.values()]

    try:
        response = requests.get(api_url, headers=headers)
        response.raise_for_status()
        test_points_raw = response.json().get("value", [])
        test_points = []
        for tp in test_points_raw:
            if (
                tp.get("configuration", {}).get("id") == configuration_id
                and int(tp.get("testCase", {}).get("id")) in relevant_test_case_ids
            ):
                test_points.append(
                    TestPoint(
                        id=str(tp.get("id")),
                        test_case_id=int(tp.get("testCase", {}).get("id")),
                    )
                )
        print(
            f"✓ Retrieved {len(test_points)} test points for configuration ID {configuration_id}"
        )
        return test_points
    except requests.exceptions.HTTPError as e:
        print(f"Error getting test points: {e}")
        print(
            f"Response: {json.dumps(e.response.text, indent=2) if e.response else 'No response'}"
        )
        return []
    except Exception as e:
        print(f"Error getting test points: {e}")
        return []


def create_test_run(
    organization: str,
    project: str,
    pat: str,
    test_plan_id: int,
    test_points: List[TestPoint],
    build_id: int,
    configuration_id: int,
) -> int | None:
    """
    Create a new test run in Azure DevOps.

    Returns the test run ID if successful, None otherwise.
    """
    api_url = f"{BASE_URL}/{organization}/{project}/_apis/test/runs?api-version=7.1"

    payload = {
        "name": f"UAC HIL Test Run - Build {build_id}",
        "plan": {"id": str(test_plan_id)},
        "automated": True,
        "build": {"id": str(build_id)},
        "state": "InProgress",
        "pointIds": [tp.id for tp in test_points],
        "configurationIds": [configuration_id],
    }

    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {pat}",
    }

    try:
        response = requests.post(api_url, json=payload, headers=headers)
        response.raise_for_status()
        test_run = response.json()
        print(f"✓ Created test run with ID: {test_run['id']}")
        return int(test_run["id"])
    except Exception as e:
        print(f"✗ Error creating test run: {e}")
        return None


def get_azure_test_case_data(
    organization: str,
    project: str,
    pat: str,
    test_case_id: int,
) -> dict | None:
    """
    Get the test case data for a given test case ID.
    Returns a dictionary with test case details if found, None otherwise.

    test_case_id: ID of the Azure DevOps test case work item
    """
    api_url = f"{BASE_URL}/{organization}/{project}/_apis/wit/workitems/{test_case_id}?api-version=7.1"

    headers = {"Content-Type": "application/json", "Authorization": f"Bearer {pat}"}

    try:
        response = requests.get(api_url, headers=headers)
        response.raise_for_status()
        return response.json()
    except Exception as e:
        print(f"  ⚠ Error getting test case {test_case_id}: {e}")
        return None

def associate_test_with_case(
    organization: str,
    project: str,
    pat: str,
    hil_test_name: str,
    xunit_data: XUnitData,
) -> TestCaseData | None:
    """
    Associate a HIL test with a test case not already associated.
    """
    case_data = get_azure_test_case_data(
        organization, project, pat, xunit_data.test_case_id
    )

    if case_data is None:
        print(f"✗ Azure test case {xunit_data.test_case_id} not found")
        return None

    fields = case_data.get("fields", {})

    test_case_data = TestCaseData(
        title=fields.get("System.Title", ""),
        url=case_data.get("url", ""),
        automatedTestId=fields.get("Microsoft.VSTS.TCM.AutomatedTestId", ""),
        priority=int(fields.get("Microsoft.VSTS.Common.Priority", "0")),
    )

    # Check if already associated
    if (
        fields.get("Microsoft.VSTS.TCM.AutomatedTestName", "")
        == xunit_data.fully_qualified_name
        and len(test_case_data.automatedTestId) > 0
    ):
        print(
            f"✓ HIL test '{hil_test_name}' is already associated with Test Case {xunit_data.test_case_id}"
        )
        return test_case_data

    api_url = f"{BASE_URL}/{organization}/{project}/_apis/wit/workitems/{xunit_data.test_case_id}?api-version=7.1"

    test_id = str(uuid.uuid4())
    patch_document = [
        {
            "op": "add",
            "path": "/fields/Microsoft.VSTS.TCM.AutomatedTestName",
            "value": f"{xunit_data.fully_qualified_name}",
        },
        {
            "op": "add",
            "path": "/fields/Microsoft.VSTS.TCM.AutomatedTestStorage",
            "value": f"{xunit_data.test_storage}",
        },
        {
            "op": "add",
            "path": "/fields/Microsoft.VSTS.TCM.AutomatedTestType",
            "value": "Unit Test",
        },
        {
            "op": "add",
            "path": "/fields/Microsoft.VSTS.TCM.AutomatedTestId",
            "value": test_id,
        },
        {
            "op": "add",
            "path": "/fields/Microsoft.VSTS.TCM.AutomationStatus",
            "value": "Automated",
        },
    ]

    test_case_data.automatedTestId = test_id

    headers = {
        "Content-Type": "application/json-patch+json",
        "Authorization": f"Bearer {pat}",
    }

    try:
        response = requests.patch(api_url, json=patch_document, headers=headers)
        response.raise_for_status()
        print(
            f"✓ Successfully associated '{hil_test_name}' with Test Case {xunit_data.test_case_id}"
        )
        return test_case_data
    except requests.exceptions.HTTPError as e:
        print(
            f"✗ Failed to associate '{hil_test_name}' with Test Case {xunit_data.test_case_id}: {e}"
        )
        print(
            f"  Response: {json.dumps(e.response.text, indent=2) if e.response else 'No response'}"
        )
        return None
    except Exception as e:
        print(
            f"✗ Error associating '{hil_test_name}' with Test Case {xunit_data.test_case_id}: {e}"
        )
        return None


def get_test_configuration(
    organization: str,
    project: str,
    pat: str,
    config_name: str,
) -> ShallowReference | None:
    """
    Get the test configuration for a given configuration name.
    Returns a "ShallowReference" dictionary if found, None otherwise.

    config_name: Name of the test configuration to search for
    """
    api_url = f"{BASE_URL}/{organization}/{project}/_apis/test/configurations?api-version=5.0-preview.2"

    headers = {"Content-Type": "application/json", "Authorization": f"Bearer {pat}"}

    try:
        response = requests.get(api_url, headers=headers)
        response.raise_for_status()

        configurations = response.json().get("value", [])

        # Search for Configuration by name (case-insensitive)
        for config in configurations:
            if config.get("name", "").lower() == config_name.lower():
                config_id = config.get("id")
                name = config.get("name")
                url = config.get("url")
                print(f"  Found configuration '{config_name}' with ID: {config_id}")
                return ShallowReference(id=str(config_id), name=name, url=url)
        return None
    except requests.exceptions.HTTPError as e:
        print(f"  ⚠ Failed to get test configurations: {e}")
        print(
            f"  Response: {json.dumps(e.response.text, indent=2) if e.response else 'No response'}"
        )
        return None
    except Exception as e:
        print(f"  ⚠ Error getting test configurations: {e}")
        return None


def set_test_run_state(
    organization: str, project: str, pat: str, test_run_id: int, state: str
) -> bool:
    """
    Set the state of a test run.

    test_run_id: ID of the test run to update
    state: New state for the test run (e.g., "Completed", "InProgress")

    Returns True if successful, False otherwise.
    """
    api_url = f"{BASE_URL}/{organization}/{project}/_apis/test/runs/{test_run_id}?api-version=7.1"

    patch_document = {
        "state": state,
    }

    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {pat}",
    }

    try:
        response = requests.patch(api_url, json=patch_document, headers=headers)
        response.raise_for_status()
        print(f"✓ Successfully set test run {test_run_id} state to '{state}'")
        return True
    except requests.exceptions.HTTPError as e:
        print(f"✗ Failed to set test run {test_run_id} state: {e}")
        print(
            f"  Response: {json.dumps(e.response.text, indent=2) if e.response else 'No response'}"
        )
        return False
    except Exception as e:
        print(f"✗ Error setting test run {test_run_id} state: {e}")
        return False


def get_test_results(
    organization: str, project: str, pat: str, test_run_id: int
) -> List[dict]:
    """
    Get the test results for a given test run.

    Returns a list of test result entries for the test run.
    """
    api_url = f"{BASE_URL}/{organization}/{project}/_apis/test/runs/{test_run_id}/results?api-version=7.1"

    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {pat}",
    }

    try:
        response = requests.get(api_url, headers=headers)
        response.raise_for_status()
        return response.json().get("value", [])
    except requests.exceptions.HTTPError as e:
        print(f"  ⚠ Failed to get test results: {e}")
        print(
            f"    Response: {json.dumps(e.response.text, indent=2) if e.response else 'No response'}"
        )
        return []
    except Exception as e:
        print(f"  ⚠ Error getting test results: {e}")
        return []


def update_test_results(
    organization: str,
    project: str,
    pat: str,
    current_test_results: List[dict],
    test_run_id: int,
    hil_test_results: dict[str, XUnitData],
    azure_test_case_data: dict[str, TestCaseData],
) -> int:
    """
    Update the test results.

    Returns the number of results successfully updated.
    """
    api_url = f"{BASE_URL}/{organization}/{project}/_apis/test/runs/{test_run_id}/results?api-version=7.1"

    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {pat}",
    }

    update_document = []
    for hil_test_name, result in hil_test_results.items():
        test_id = None
        for existing_result in current_test_results:
            if existing_result.get("testCase", {}).get("id") == str(
                result.test_case_id
            ):
                test_id = existing_result.get("id")
                break
        if test_id is None:
            print(f"  ⚠ No matching test result found for HIL test '{hil_test_name}'")
            continue
        update_document.append(
            {
                "id": test_id,
                "durationInMs": result.duration_ms,
                "outcome": result.outcome,
                "state": "Completed",
                "priority": azure_test_case_data[hil_test_name].priority,
                "comment": result.comment,
            }
        )
    try:
        response = requests.patch(api_url, json=update_document, headers=headers)
        response.raise_for_status()
    except requests.exceptions.HTTPError as e:
        print(f"  ⚠ Failed to update test results: {e}")
        print(
            f"    Response: {json.dumps(e.response.text, indent=2) if e.response else 'No response'}"
        )
        print(json.dumps(update_document, indent=2))
        return 0
    except Exception as e:
        print(f"  ⚠ Error updating test results: {e}")
        print(json.dumps(update_document, indent=2))
        return 0
    return len(update_document)


def main():
    parser = argparse.ArgumentParser(
        description="Associate HIL tests and results with Azure DevOps test cases"
    )
    parser.add_argument(
        "--xunit-file", required=True, help="Path to the XUnit output file"
    )
    parser.add_argument("--build-id", required=True, help="Azure DevOps Build ID")
    parser.add_argument(
        "--pat", required=True, help="Personal Access Token (or System.AccessToken)"
    )
    parser.add_argument(
        "--organization", required=True, help="Azure DevOps organization"
    )
    parser.add_argument(
        "--project",
        required=True,
        help="Azure DevOps project name",
    )
    parser.add_argument(
        "--configuration",
        required=True,
        help="Test configuration name corresponding to Test Plan Configurations in Azure DevOps (e.g., 'RAE', 'SAI125CB', 'OAE')",
    )
    parser.add_argument(
        "--test-plan-id",
        required=True,
        help="Test plan ID to associate test results with",
    )
    parser.add_argument(
        "--test-suite-id",
        required=True,
        help="Test suite ID to associate test results with",
    )

    args = parser.parse_args()

    print("=" * 70)
    print("Azure DevOps Test Case Association")
    print("=" * 70)

    # Parse XUnit file to get test associations
    print(f"\n📄 Parsing XUnit file: {args.xunit_file}")
    xunit_data = parse_xunit_file(args.xunit_file)

    if not xunit_data:
        print("⚠ No test associations found in XUnit file")
        return 0

    print(f"\n✓ Found {len(xunit_data)} test(s) with test case IDs")

    # Associate each test with its test case
    print("\n Creating test case associations...")
    azure_test_case_data: dict[str, TestCaseData] = {}
    success_count = 0
    fail_count = 0

    for hil_test_name, x_data in xunit_data.items():
        test_case_data = associate_test_with_case(
            args.organization,
            args.project,
            args.pat,
            hil_test_name,
            x_data,
        )
        if test_case_data:
            azure_test_case_data[hil_test_name] = test_case_data
            success_count += 1
        else:
            fail_count += 1

    case_result_count = 0
    if success_count > 0:
        print(f"  Looking up test configuration: {args.configuration}")
        test_config = get_test_configuration(
            args.organization,
            args.project,
            args.pat,
            args.configuration,
        )

        if not test_config:
            print(
                "  ⚠ Missing test configuration, cannot proceed with test run creation"
            )
            return 0

        print(f"  Retrieving test points for configuration {test_config.name}...")
        test_points = get_test_points(
            args.organization,
            args.project,
            args.pat,
            xunit_data,
            args.test_plan_id,
            args.test_suite_id,
            test_config.id,
        )

        if not test_points:
            print("  ⚠ No test points found, cannot proceed with test run creation")
            return 0

        print("  Creating new test run...")
        test_run_id = create_test_run(
            args.organization,
            args.project,
            args.pat,
            args.test_plan_id,
            test_points,
            int(args.build_id),
            int(test_config.id),
        )

        if not test_run_id:
            print(
                "  ⚠ Failed to create test run, cannot proceed with updating test results"
            )
            return 0

        print("  Retrieving current test results...")
        current_test_results = get_test_results(
            args.organization,
            args.project,
            args.pat,
            test_run_id,
        )

        if not current_test_results:
            print("  ⚠ No current test results found, cannot proceed with updating")
            set_test_run_state(
                args.organization,
                args.project,
                args.pat,
                test_run_id,
                "Aborted",
            )
            return 0

        print("  Updating test results...")
        case_result_count = update_test_results(
            args.organization,
            args.project,
            args.pat,
            current_test_results,
            test_run_id,
            xunit_data,
            azure_test_case_data,
        )

        if case_result_count == 0:
            print("  ⚠ No test results were updated")
            set_test_run_state(
                args.organization,
                args.project,
                args.pat,
                test_run_id,
                "Aborted",
            )
            return 0

        print("  Setting test run state to 'Completed'...")
        set_test_run_state(
            args.organization,
            args.project,
            args.pat,
            test_run_id,
            "Completed",
        )

    print("\n" + "=" * 70)
    print(f"✓ Associations created/skipped: {success_count}")
    print(f"✗ Association failures: {fail_count}")
    print(f"✓ Test results updated: {case_result_count}")
    print("=" * 70)

    return 0


if __name__ == "__main__":
    sys.exit(main())

Here's the part of our pipeline that runs the tests, publishes the logs and then either runs the script above or publishes the results, depending on what triggered the build:

- script: |
    set -eu
    ls -lhtr
    source .venv/bin/activate
    {
      robot \
        -d $(Build.ArtifactStagingDirectory)/log/ -b $(Build.ArtifactStagingDirectory)/log/debug \
        -v hil:"${{ parameters.hil }}" \
        -v semver:"${{ parameters.semver }}" \
        -v filepath:"${{ parameters.file_path }}" \
        -v bb_dir:"$(Build.ArtifactStagingDirectory)/log/" \
        -i "${{ parameters.test_level }}" \
        -x outputxunit.xml \
        --loglevel DEBUG:INFO \
        --listener RetryFailed \
        --listener "HIL-rig/python/AddTestSpecId.py" \
        HIL-rig/tests/"${{ parameters.test_suite }}"
    } || {
      echo "##vso[task.setvariable variable=TestFailure;isOutput=true]true"
      exit 1
    }
  displayName: Run tagged test-suite
  name: TestTask
  continueOnError: ${{ parameters.continueOnError }}

- script: |
    echo "##vso[task.setvariable variable=logFileName]Test_logs_$(date '+%y%m%d_%H%M%S')"
  condition: succeededOrFailed()
  displayName: Set log file name

- task: PublishPipelineArtifact@1
  inputs:
    targetPath: $(Build.ArtifactStagingDirectory)/log
    artifactName: $(logFileName)
  condition: succeededOrFailed()
  displayName: Build log artifact

- ${{ if or(startsWith(variables['Build.SourceBranch'], 'refs/heads/release/'), eq(variables['Build.Reason'], 'Schedule')) }}:
    - script: |
        set -eu
        ls -lhtr
        source .venv/bin/activate
        python tools/python/associate_tests.py \
          --xunit-file "$(Build.ArtifactStagingDirectory)/log/outputxunit.xml" \
          --build-id "$(Build.BuildId)" \
          --pat "$(System.AccessToken)" \
          --organization "<organization>" \
          --project "<project>" \
          --configuration "${{ parameters.test_configuration }}" \
          --test-plan-id "${{ parameters.test_plan_id }}" \
          --test-suite-id "${{ parameters.test_suite_id }}"
      displayName: "Associate HIL tests with Azure test cases and publish results"
      condition: succeededOrFailed()
      env:
        SYSTEM_ACCESSTOKEN: $(System.AccessToken)
- ${{ else }}:
    - task: PublishTestResults@2
      condition: succeededOrFailed()
      displayName: "Publish results without association"
      inputs:
        testResultsFiles: outputxunit.xml
        searchFolder: $(Build.ArtifactStagingDirectory)/log/
        testRunTitle: "UAC HIL Test Results - $(Build.BuildId)"

Now we have both the tests and the outcome of each test run linked in ADO.

Hope someone finds this useful, I sure would have found it useful a few days ago :slight_smile:
