Management of multiple Robotframework executions

Hi all,

I have a question about the management of multiple Robotframework executions.
It often happens that I have to use e.g. a variable file to load test-specific variables.
Then I need to run Robot Framework with different variable files to cover different scenarios.
Sometimes there are even tasks that need to run before or after calling Robot Framework, e.g. power cycling a device under test.

To glue everything together, I mostly end up with e.g. a small bash script that executes all of this as a sequence, as a kind of meta test runner.

I also see other options, like having a Robot Framework task that executes everything, or the testsuitesmanagement project, which is maybe a bit too complex
( https://pypi.org/project/robotframework-testsuitesmanagement/ ) (some years ago I even wrote something similar for a project).

Do you maybe have better ideas or recommendations for organizing multiple test runs with differing settings?

Regards
Michael

I can't say “do this”, but in the same situation I wrote a custom runner in Python.

Pabot and robot can be called directly from Python code with the same arguments; the rest is just adding a bit of logic for what to add and/or transform.
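To illustrate the idea, here is a minimal sketch of such a Python meta runner. The scenario names, variable-file paths, and output directories are made-up placeholders — adapt them to your project layout:

```python
import subprocess

# Hypothetical scenarios: one variable file per test run.
SCENARIOS = {
    "scenario_a": "vars/scenario_a.py",
    "scenario_b": "vars/scenario_b.py",
}

def robot_command(variablefile, outputdir, suite="tests"):
    """Build one 'robot' command line for a scenario."""
    return ["robot",
            "--variablefile", variablefile,
            "--outputdir", outputdir,
            suite]

def run_all(dry_run=True):
    """Run every scenario in sequence; dry_run only prints the commands.

    Instead of subprocess you can also call Robot Framework in-process:
    robot.run_cli(cmd[1:], exit=False) accepts the same arguments as the
    'robot' command and returns the return code.
    """
    for name, varfile in SCENARIOS.items():
        cmd = robot_command(varfile, f"results/{name}")
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=False)

if __name__ == "__main__":
    run_all()
```

The same pattern works for pabot by swapping the command name and adding its extra options before the Robot Framework arguments.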

As Jani said, I also think a custom .sh, .py or .ps1 script is a good solution. At least you can adapt it to your own architecture and are not dependent on, or limited by, someone else's tooling.

Personally I use ps1, read variables from input files or ask the user (version, simple or multiple run, or rerun failed) to construct the robot command, and run suites in parallel (several containers).
The good thing here is that it works locally in a Docker container, but also in a cloud-based app where environment variables and secrets can be stored and provided to the Docker environment, for example.

Also, if your tests only use different variables, did you look at the DataDriver library?

In our case we use GitLab’s CI/CD features to define jobs for each configuration in our automated test pipeline. That sounds like overkill if your use case is covered by “a small bash script”, though.

Hi @Michael-1 ,

I’m the maintainer of robotframework-testsuitesmanagement. Actually, using our library is not difficult, and it opens up a lot of opportunities for you which might later have very high value.
Usually projects and requirements/variants grow, so you need to adapt the tests continuously. The goal of our library is that you never have a problem adapting to new requirements. We have put 20 years of test automation knowledge into this library.

If you decide to use it, then please don’t hesitate to contact us. We will support you.

Please also check out RobotFramework AIO. The testsuites management is part of our OSS project and also runs stand-alone, but this might be interesting for you, too.

Thank you,
Thomas

Hi @ThPoll,
thanks for the response and the link. I had a look at the docs again and I’m still unsure about one feature. Does the TestsuitesManagement also support global/environment setup and teardown? I.e. something that needs to be executed before starting the tests for a new variant.
I know this is possible for suites via suite_setup, but I need it for the whole test run.

To be a bit more specific: we have some embedded hardware where we need to flash a configuration file, then run a set of tests for one variant, then flash another configuration and run the tests for another variant.
Regards, Michael

Hi, thanks for the responses.
At least I did not miss something totally obvious.
So a script, a CI/pipeline definition, or even a (still to be invented) meta test runner needs to take care of the environment and maybe do the side tasks.
I even found a section in the user guide that covers parts of this: “Creating start-up scripts”
https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#toc-entry-457
Regards, Michael

Hi @Michael-1 ,

robotframework-testsuitesmanagement is about how to handle configuration data for your tests, in order to have the best overview and in a way that scales without limits.

It does this by providing a highly scalable infrastructure for configuring your tests by means of configuration files which can contain imports and variables.
It allows you, for example, to define common parts for all tests, specific parts for variants, and also local config files which can overwrite the common default values according to the actual test setup (e.g. serial port, IP addresses, …).
This allows you, for example, to maintain the default part of your test configuration in a version control system. Test-machine/setup-specific parts you overwrite in a machine/setup-specific local configuration (which you can of course also put under version control per machine/setup).
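The layering idea can be sketched in plain Python — note this only illustrates the concept of later layers overriding earlier ones, not testsuitesmanagement's actual file format or API; the keys and values below are invented:

```python
def merge(base, override):
    """Recursively merge 'override' into a copy of 'base' (later layer wins)."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

# Three hypothetical layers: shared defaults, variant-specific, machine-local.
common  = {"timeout": 30, "device": {"port": "COM1", "baud": 115200}}
variant = {"device": {"baud": 921600}}           # variant override
local   = {"device": {"port": "/dev/ttyUSB0"}}   # machine/setup override

config = merge(merge(common, variant), local)
# config["device"] -> {"port": "/dev/ttyUSB0", "baud": 921600}
```

Keeping `common` and `variant` in version control while leaving `local` per machine gives each test setup its own serial port and addresses without touching the shared defaults.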

If you have to flash different configurations to your device under test, then for me this would be part of the CT pipeline(s): first flash, then execute the tests for the corresponding variant.
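Such a flash-then-test step per variant could be sketched like this — `my-flash-tool` and the variant/config-file mapping are placeholders for your actual flasher and file layout:

```python
import subprocess

# Hypothetical variants and the configuration image flashed before each run.
VARIANTS = [
    ("variant_a", "configs/variant_a.bin"),
    ("variant_b", "configs/variant_b.bin"),
]

def commands_for(name, config):
    """Return the (flash command, robot command) pair for one variant."""
    flash = ["my-flash-tool", "--image", config]   # placeholder flasher CLI
    test = ["robot",
            "--variablefile", f"vars/{name}.py",
            "--outputdir", f"results/{name}",
            "tests"]
    return flash, test

def run_pipeline():
    """Flash, then test, for every variant in sequence."""
    for name, config in VARIANTS:
        flash, test = commands_for(name, config)
        subprocess.run(flash, check=True)   # abort the variant if flashing fails
        subprocess.run(test)                # robot's return code counts failures
```

Calling `run_pipeline()` executes the whole sequence; in a CI system each loop iteration would typically become its own pipeline job instead.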

Regards,
Thomas