So, I’m a test automation lead, and we are starting to build infrastructure for our future main test automation tool (RF, obviously). (If you know me from my one talk about the remote library interface in .NET: yes, I’m still working on that, but it’s been dragging, to say the least.)
I am currently researching test management platforms and ways to have overviews and dashboards: places where we can store all test results, but also what those results contain (what the log provides).
I need such a platform because it makes the whole aspect of test results more transparent and approachable.
While doing this, I notice that most of those platforms don’t really “integrate” or work well with Robot Framework: they all just use it as a marketing badge, claiming they support it while not actually “supporting” it.
Most of those platforms really only use the `xunit` transformation of the Robot Framework results, which trims away much of what those logs and results could actually contain.
It also relies on a non-Robot-native way of importing those results.
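To make the “trimming down” concrete, here is a small illustration. The snippet below parses a hand-written miniature xunit file (the sample content is mine, only approximating what `robot --xunit xunit.xml` emits): it carries little more than per-test status and a failure message, while the keyword tree, log messages, tags, and per-keyword timings of `output.xml` are simply not there.

```python
# Illustration: a minimal xunit-style result file.  Note the sample XML is
# hand-written for this example and only approximates robot's --xunit output.
# All it can tell a platform is which test failed and with what message --
# no keyword hierarchy, no log messages, no tags.
import xml.etree.ElementTree as ET

XUNIT_SAMPLE = """\
<testsuite name="Login Tests" tests="2" failures="1">
  <testcase classname="Login Tests" name="Valid Login" time="0.5"/>
  <testcase classname="Login Tests" name="Invalid Login" time="0.3">
    <failure message="Element 'id:submit' not found." type="AssertionError"/>
  </testcase>
</testsuite>
"""

root = ET.fromstring(XUNIT_SAMPLE)
for case in root.iter("testcase"):
    failure = case.find("failure")
    status = "FAIL" if failure is not None else "PASS"
    message = "" if failure is None else failure.get("message")
    print(case.get("name"), status, message)
```

A platform that imports only this file has already lost everything that makes a Robot Framework log useful for root-cause analysis.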
Does anyone know of solutions with native support? (Open-source, commercial, all suggestions are welcome.)
I think the tricky part is viewing the reports/logs, as this would mean either storing them as .html or being able to regenerate them on demand in the tool.
Anyway, I saw this that I planned to look further into:
And I remember that there is also rflogs:
Totally agree. We’re using Xray, and apart from importing the results from .xml and having a link to team tickets, there is no real added value when consulting results.
Yes, I agree that viewing the results on some test management platform does not make sense.
(that is what CI tools can provide anyways)
I meant more that it is not very useful to me to only have things like which test suites I have in automation or which test cases failed uploaded to the test management tool.
That’s not very helpful to me.
Thank you very much for the link to the dashboard tool.
The keyword statistics feature seems very promising and looks like something I need.
I used Xray in the past, and only now, no longer being in a Jira environment, do I realize how far most other platforms fall short in comparison.
I should have been a bit more clear in my previous post:
Something like robotframework-dashboard is very good.
Because I want my test results and reporting to be more approachable: less about how many test cases failed and more focused on why they failed.
But to discuss the why, the other people involved need to understand what is actually happening in the automation, without having to look through the test repository and figure out the Robot Framework syntax.
And if I just upload my results to some test management platform that only displays fancy charts of test runs, analysing nothing but PASS vs. FAIL, then it is really hard to get the point across.
It is a bit easier for me to get the point across if I can explain: “These 500-something test cases are failing because that one keyword failed in all of them, and that one keyword does this …”
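That kind of “which keyword is really behind these failures” analysis can be sketched from `output.xml` directly. The snippet below uses only the standard library and a hand-written miniature result (in a real setup, `robot.api.ExecutionResult` with a result visitor would be the more robust route); it follows the `output.xml` schema, where each `<kw>` element carries a `<status status="...">` child.

```python
# Sketch: count which keywords fail most often across a run by walking
# output.xml.  The SAMPLE below is a hand-written miniature result for
# illustration; point ET.parse() at a real output.xml in practice.
from collections import Counter
import xml.etree.ElementTree as ET

def failing_keywords(output_xml_text):
    """Return a Counter mapping keyword name -> number of FAIL occurrences."""
    root = ET.fromstring(output_xml_text)
    counts = Counter()
    for kw in root.iter("kw"):
        status = kw.find("status")  # direct child only, not nested statuses
        if status is not None and status.get("status") == "FAIL":
            counts[kw.get("name")] += 1
    return counts

SAMPLE = """\
<robot>
  <suite name="Regression">
    <test name="T1"><kw name="Open App"><status status="FAIL"/></kw>
      <status status="FAIL"/></test>
    <test name="T2"><kw name="Open App"><status status="FAIL"/></kw>
      <status status="FAIL"/></test>
  </suite>
</robot>
"""

for name, n in failing_keywords(SAMPLE).most_common():
    print(f"{name}: failed in {n} places")
```

Fed a real run, a report like this turns “500 red test cases” into “one broken keyword”, which is exactly the story that is hard to tell with PASS/FAIL charts alone.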
The Robot Framework logs already help a lot with that, but they can be a bit clumsy with deep hierarchies, and a log often only shows a subsection rather than the greater picture when it comes to more widespread issues.
Anyway, thanks for the link; I had not found the dashboard tool, probably because I was a bit too focused on the management aspect of it.
As the “test automation guy”, you need other teams to easily understand why it failed and to have an overview of the keywords without digging into your low-level code (and without commenting on all your choices).
I think that RF is so versatile and adaptable that it’s difficult to build or find a hand-in-hand solution for this, mainly because each project’s architecture of suites/tests/keywords can be different.
But a combination of:
Correct hiding/displaying of inner keyword logic (through tags and `--removekeywords`, for example)
Dashboard (displaying the most failed keywords)
Updated/dynamic documentation of your keywords generated with Libdoc (or LibToc)
could be part of the solution. This means you have to structure your keywords with proper documentation and tags.
This part of the job is a bit time-consuming in the beginning, but it pays off later when you maintain the project or on-board newcomers who need to dive into the project structure.
The test management tool TestBench by imbus AG could also be interesting for you, as it provides a good integration with Robot Framework via its own Visual Studio Code extension.
You can specify test cases within TestBench but automate them in VS Code using Robot Framework. Robot Framework keywords can also be created/specified in TestBench; they get synchronized as a template to VS Code in a *.resource file, which can then be used for the implementation on the technical layer.
TestBench also provides the ability to start the automated test case via Robot Framework, with the results uploaded back into TestBench. You will see the test case status with a potential error message, but you can also attach the original log files/reports generated by Robot Framework.
In the following screenshot you can see the uploaded test results in the TestBench Web iTorx:
It is designed to be easy to understand what happens in a test case and why it failed. I built it using output.json, Jinja and Bootstrap.
If you want to build a dashboard, I would recommend using Elasticsearch and Kibana. Using output.json you can easily extract, transform, and store the data in a format suitable for your queries.
Since you are coming from Jira, you might find Allure Report interesting.