Tool validation for Robot Framework

Hi all,

Has anyone ever done some kind of “tool validation” for Robot Framework, to answer the question of whether “Robot Framework does the right things right”?

Let me give you my example.

I have a test suite consisting of 400 test cases, written to be executed manually by the verification team each time we want to release a software version. Luckily, we were able to automate 100 of those test cases using Robot Framework. So now, for a release, we simply run the test automation, and if all tests pass we declare them as “PASS”. We still have to do manual tests, but “only” 300.

But the problem is that our QA says: “How can you be sure that your tool (RF) is doing the right things right? Have you validated the tool you use for formal verification?”
The question is not about reviewing whether the automated tests are implemented correctly, or whether the “transformation” from manual to automated tests was done correctly; the question is whether the tool that executes the RF “scripts” is doing it correctly.

Any help is welcome.
Thx
Br,
Camil

So if I’m getting what you’re saying right: your QA team is not questioning whether the automated tests themselves are correct (quality of the test automation project, flakiness, false positives, false negatives), but whether RF as a tool can be trusted to execute them accurately and reliably, etc. (that’s what I got from reading the sentence “How do you know RF is doing the right things right?”). Is that right?

Assuming this, here is my answer:

I don’t see much sense in this question, actually, and I don’t think there is a tool to do this (unless I’m missing something or didn’t get the question right).

RF is a reliable framework with a proven history, and the test automation is code that will do what you have told it to do…
I would say that the real subject is flaky tests, false positives, and false negatives, and that concern is addressed by:

  1. Continuous feedback loop: the key to improving test quality is frequent iteration. When a test fails or produces inconsistent results, analyze the root cause, refine the test, and run it again.
  2. Precision in assertions: ensure that assertions are tightly coupled to what’s expected, and avoid generic checks that could mistakenly pass or fail (see the sketch below).
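To illustrate point 2, here is a minimal sketch (assuming SeleniumLibrary; the URL, locator, and value are placeholders I made up, not from your project):

```robotframework
*** Settings ***
Library           SeleniumLibrary
Suite Setup       Open Browser    https://shop.example.com/cart    headlesschrome
Suite Teardown    Close All Browsers

*** Test Cases ***
Order Total Is Correct
    # Generic check: passes if "42.50" appears anywhere on the page
    # (a banner, another row…), so it can hide a real defect.
    Page Should Contain    42.50
    # Precise check: tied to the exact element the manual step verifies.
    Element Text Should Be    id:order-total    42.50
```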

Does this get to the heart of their concerns, or am I missing something deeper?

2 Likes

Hi Camil,

The starting point is the logs from Robot Framework: give them an easy way to view log.html, and then they can inspect the test steps themselves.

This then leads into your test keywords. I would suggest that your top-level keywords (the ones called directly from the test) have names that closely match the step names from the manual test, so that when they open the log they see step names they recognise.

For each of your top-level keywords, it might be worth, at least initially, having a “take screenshot” type step that allows them to visually verify what the automation did in that keyword. As time goes on and the manual testers trust the automation more, you can remove these screenshots.
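Something like this, as a sketch (assuming SeleniumLibrary; the test name, URL, locators, and credentials are all placeholders, not from a real project):

```robotframework
*** Settings ***
Library    SeleniumLibrary

*** Test Cases ***
Customer Can Log In
    # Each top-level keyword mirrors a step name from the manual test case,
    # so a manual tester reading log.html recognises the steps.
    Open The Login Page
    Enter Valid Credentials
    Verify The Dashboard Is Shown
    [Teardown]    Close All Browsers

*** Keywords ***
Open The Login Page
    Open Browser    https://app.example.com/login    headlesschrome
    Capture Page Screenshot

Enter Valid Credentials
    Input Text        id:username    tester01
    Input Password    id:password    s3cret
    Click Button      id:login-button
    Capture Page Screenshot

Verify The Dashboard Is Shown
    Wait Until Page Contains    Dashboard    timeout=10s
    Capture Page Screenshot
```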

Dave.

@hassineabd actually had a different take on your question :+1: This is the great thing about forums: you get different views and answers.

Here’s some background behind my answer:

I view this question more as an auditability/verification question, as I’ve seen it asked of many test automation tools, and I’ve asked it myself of a dev team that was pushing their test automation.

The problem with the devs I mentioned was that their tests were NUnit tests written in .NET. They had a list of test case names with a PASS / FAIL status, but their test report had no way to drill down into a test case and verify what the test did. The devs, being typical devs, said you could look at the code of the test case. I pointed out that this was OK for me (a performance tester), but most of the test team can’t read code; many of them came from the business and had no IT background. How do they verify that a test does what the business requires?

It’s completely reasonable that a manual tester should be able to drill down into the results of a test automation run and verify that what the automation did matches the manual test and applies the same verifications. This is especially true when a high-severity defect (or worse, a production incident) occurs in functionality that the test automation passed.

In larger organisations it’s also completely normal for an external auditor to come in and audit your test cases, both manual and automated; the auditor also needs to be able to view the test results and understand what the tests actually did.

This is one of the reasons I’m such a big fan of Robot Framework: you can make the test keywords at the upper levels match the manual test steps, and even when you get down to the library keywords inside the high-level keywords, their names are clear enough to be easily understood. So the auditability of Robot Framework is much higher than that of any other test tool I’ve used.

Dave.

3 Likes

Hi,

Adding to the discussion: our auditor also asked for this tool validation process and a report.
I did of course mention, as @hassineabd said, the wide use and reliability of RF, but that didn’t seem to be enough…

So I extracted, for each library, the keywords actually used in our tests, and built a pass/fail test case for each, with the expected results recorded in the log.

Example: Wait Until Page Contains, with a pass case and a fail case.
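Roughly like this, as a sketch (assuming SeleniumLibrary; the page and texts are placeholders):

```robotframework
*** Settings ***
Library           SeleniumLibrary
Suite Setup       Open Browser    https://example.com    headlesschrome
Suite Teardown    Close All Browsers

*** Test Cases ***
Wait Until Page Contains - Pass Case
    [Documentation]    Must succeed when the text is on the page.
    Wait Until Page Contains    Example Domain    timeout=5s

Wait Until Page Contains - Fail Case
    [Documentation]    Must fail within the timeout when the text is absent,
    ...                and the failure must show up clearly in log.html.
    Run Keyword And Expect Error    *did not appear*
    ...    Wait Until Page Contains    text that is not on this page    timeout=2s
```

I run such a suite with a plain `robot` command and archive the generated log.html as the validation evidence.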

I usually run this suite step by step when updating RF, specific libraries, or Python, and confirm the behaviour. This takes an hour at most, about twice a year.
I know some of these checks already exist at the library level as unit tests, but I tried to match more closely the way we actually use these keywords.

And of course, for me this doesn’t validate the logic behind the custom keywords built specially for the application (done once, when building them); that would be a huge amount of work (but the most-used ones could be covered).

As always, it’s a matter of where you want to put the cursor on validation and verification, and of the time/resources you have to do it.
But there’s a point where we’re testing the tests, and as usual, while doing that, we’re not testing the app or maintaining the tests…

Regards.
Charlie

6 Likes

I think for the most part it’s been answered above, but I’d also add this, and hopefully it isn’t the case (it could lead to such questions being asked just to stall this new way of testing): hopefully your manual testers aren’t threatened by such a tool, and they see it as a huge gain, letting them do more exploratory testing and focus on other areas, as well as removing a huge mundane task that they have probably been doing like zombies for ages.

I know from past experience that people will say anything once threatened by automation, but at the same time the door of opportunity is wide open.

3 Likes

Thank you everyone for the replies and the suggestions.

I think we will also go the way @CharlieScene suggested: we will list each keyword we use from the RF libraries and write a PASS/FAIL test case to prove that the keyword works correctly and does the job it should.

Br,
Camil

1 Like

Hi,

If it might help, here are the two files I use to extract keyword info from the test folder (e.g. C:\Users\RobotFiles):

Regards
Charlie

3 Likes