I have been using RF for a number of years to test an embedded Linux system that uses a set of proprietary libraries for controlling both the software under test and the test environment. This has worked wonderfully and fits well with my CI/CD system, which is integrated with the hardware test facility.
Now I want to extend the environment to have multiple instances of the system under test, each with a slightly different arrangement of the test suites. I am having difficulty figuring out the best way to arrange my test case repos so that I do not end up with copies of the same .robot files in different sub-directories.
I don’t mind having an extra layer of automation to manipulate the file structure, but there are many ways of skinning this cat, and I wanted to know if there were any patterns already established in the community for this.
As a concrete example, imagine I have two DUTs: DEVA and DEVB. They both have the same Linux kernel, and an early ‘proof-of-life’ test suite interacts with the CPU stats to make sure communication with the devices is working and everything makes sense. Let’s call this suite 1_CPUHealth. Then each device has a device-specific application to check: 2_DEVA_App and 2_DEVB_App.
My directory structure would then be something like:
DEVA/1_CPUHealth
DEVA/2_DEVA_App
DEVB/1_CPUHealth
DEVB/2_DEVB_App
Within DEVA and DEVB I would have a dev.resource file that defines a variable used to connect to the DUT. Then within 1_CPUHealth/__init__.robot I have a resource reference to ../dev.resource, which allows the source to be shared.
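For illustration, the sharing would look roughly like this (the variable name and value here are only placeholders):

DEVA/dev.resource

*** Variables ***
${DUT_ADDRESS}    192.168.1.10    # connection details for this particular DUT

DEVA/1_CPUHealth/__init__.robot

*** Settings ***
Resource    ../dev.resource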
Apart from making a heap of symbolic links, are there any other tricks for reuse of test suites?
You can use variables for the resource files (or a prefix/suffix), and then define which one you want to use in the launch options.
Have the common keyword definitions in a common resource file.
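As a minimal sketch, assuming the shared suite lives only once in the repo and the device is chosen at launch time (the ${DEVICE} variable name and the paths are only examples):

*** Settings ***
# Which dev.resource gets imported is decided by a variable set on the command line
Resource    ../${DEVICE}/dev.resource

Then select the device in the launch options:

robot --variable DEVICE:DEVA 1_CPUHealth
robot --variable DEVICE:DEVB 1_CPUHealth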
Indeed, I think that is what I am doing presently. I’m looking for advice about organising my directory structure to enable re-use of the .robot files. Could you explain a little more about how using variables in the resource files can enable re-use?
Or are you saying I should make ‘thin’ copies of the .robot files that reference shared common keywords? That still leaves a good amount of duplicate code in the repo, as I would essentially have 100 (or so) test case definitions that are identical, even if each one just makes a single call to a shared keyword:
*** Test Cases ***
My Example Test
    Launch APP
    # more stuff
Note: if you get an error that variable ${SUT} is not defined, define it in a resource file with a default value such as ${None}. The default will be overridden by the command-line option.
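For example, the default could sit in the shared resource file like this (the address below is only a placeholder):

*** Variables ***
${SUT}    ${None}    # default; overridden from the command line

robot --variable SUT:192.168.1.20 path/to/suite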
The subtleties of language are tricky. The devices operate simultaneously, but not independently, so I don’t want to run tests on them in parallel. They are similar devices that share a lot of characteristics, but you could say that one is the master and one is the slave. I can’t run tests on the slave without first configuring the master to enable the slave.
Once enabled, I want to run through the tests that are common between DEVA and DEVB, without managing the source files independently (i.e. I don’t want separate .robot files in git).
As a suggestion, create your test cases in a single robot file, tag the tests that only run on the server device with server, likewise tag the tests that only run on the client device with client, and if any tests run on both, apply both tags. Then simply run robot with the tag pattern --include server first, and then run robot again with the tag pattern --include client.
Once the two robot runs have finished, you can use rebot to combine the outputs into a single test result.
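A sketch of what that could look like (paths and names are placeholders):

robot --include server --output server.xml tests/
robot --include client --output client.xml tests/
rebot --name Combined --output combined.xml server.xml client.xml

rebot then produces a single log and report covering both runs.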
For a trivial situation where I have a single .robot file this would be fine, but in reality this sits within a much larger hierarchy of test suites. The ‘master’ and ‘slave’ do have a considerable number of test cases that are independent, presently organised according to a taxonomy related to the product breakdown structure. If I try to make a separate tree of shared test cases it will look pretty odd, and it will also make it much more difficult for a developer to know which test suites are going to be applied to which device without looking inside the .robot files.
Perhaps I have a real mental block, but if I was able to include test cases from one .robot file into another, I think it would be trivial for me to implement what I’m after.
Perhaps you missed the power of this feature… consider this made up example that might be similar to your situation:
Start with a base directory that contains a robot file and 2 directories:
common.robot
    contains tests tagged with server tag and all tag
    contains tests tagged with client tag and all tag
arm directory (for arm based devices)
    H6 directory (for arm devices based on the H6 SOC)
        Orange_Pi_Lite.robot
            contains tests tagged with server tag and orange_pi_lite tag
            contains tests tagged with client tag and orange_pi_lite tag
        FriendlyARM_NanoPi_M1.robot
            contains tests tagged with server tag and friendlyarm_nanopi_m1 tag
            contains tests tagged with client tag and friendlyarm_nanopi_m1 tag
    RK3588 directory (for arm devices based on the RK3588 SOC)
        Pine64_Quartz64.robot
            contains tests tagged with server tag and pine64_quartz64 tag
            contains tests tagged with client tag and pine64_quartz64 tag
riscv directory (for riscv based devices)
    riscv.robot
        contains tests tagged with server tag and riscv tag
        contains tests tagged with client tag and riscv tag
    BL808 directory (for riscv devices based on the BL808 SOC)
        Pine64_Ox64.robot
            contains tests tagged with server tag and pine64_ox64 tag
            contains tests tagged with client tag and pine64_ox64 tag
If you then run robot --include serverANDall --include serverANDriscv --include serverANDpine64_ox64 /path/to/base_directory/ then all the common server tests, the riscv-wide server tests, and the Pine64 Ox64 specific server tests will run, while all other tests will be ignored.
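Written out as a command line (with a placeholder base directory), that run would be something like:

robot --include serverANDall \
    --include serverANDriscv \
    --include serverANDpine64_ox64 \
    /path/to/base_directory/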