N-stage data-driven tests

I have N data-driven test cases of the type: Request, Process, Validate.
The processing part, which is the thing to be tested, takes a long time, but can be done in parallel.
Doing sequential RPV, one test case at a time, will take N × P.
If instead all requests were sent together and then validated afterwards, the run time would be roughly 1 × P.

Example with 3 test cases:
Sequential: R1 P1 V1, R2 P2 V2, R3 P3 V3.
Parallel: R1 R2 R3, [P1 P2 P3], V1 V2 V3.

One solution could be to mark specific test cases as failed during Suite Teardown: the actual test cases would execute only the Request part (and fail if the request fails), register which requests were made together with the test case ID, and then the Suite Teardown would perform the validations and retrospectively fail the test cases whose validations failed.
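For illustration, a rough sketch of that registration pattern with today's syntax (keyword names like Send Request and Validate Result are placeholders I made up). The catch I see: a failing Suite Teardown marks every test in the suite as failed, so failing only the affected test cases retrospectively would still need something extra, e.g. a listener:

*** Settings ***
Library           Collections
Suite Teardown    Validate All Registered Requests

*** Variables ***
@{REGISTERED}    # data sets whose requests were sent successfully

*** Test Cases ***
Transaction 1
    Request And Register    dataset1

Transaction 2
    Request And Register    dataset2

*** Keywords ***
Request And Register
    [Arguments]    ${dataset}
    Send Request    ${dataset}    # the R part; fails the test if the request fails
    Append To List    ${REGISTERED}    ${dataset}

Validate All Registered Requests
    FOR    ${dataset}    IN    @{REGISTERED}
        # a failing validation fails the teardown, and thereby (today) the whole suite
        Run Keyword And Continue On Failure    Validate Result    ${dataset}
    END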

Is this possible today? I have been searching the docs without much luck so far.

Another, cleaner solution would be to allow 2 (or more) keywords to be run for each data-driven test case/suite. The semantics would be that all data sets are run through the first keyword, then all through the second keyword, etc. Each of the keywords can then fail its respective test case.
(We could call it test pipelining?!) It does not require multi-threaded keyword execution, so I presume it should be pretty easy to achieve?

Have I missed something obvious?

~Per

Not sure if I understand the question.

You want to run R once, N P's in parallel, and V once?

You can make a data driven test suite with data driver: GitHub - Snooz82/robotframework-datadriver: Library to provide Data-Driven testing with CSV tables to Robot Framework

The test suite should have P as the only test case. Set R as the suite setup and V as the suite teardown. Now you have P executed sequentially.
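Roughly like this, if it helps (the CSV file name and the keyword bodies are only placeholders):

*** Settings ***
Library           DataDriver    transactions.csv    # assumed data file
Suite Setup       Request                           # R, once per suite
Suite Teardown    Validate                          # V, once per suite
Test Template     Process

*** Test Cases ***
Process ${data}    Default

*** Keywords ***
Request
    Log    Send the request here
Process
    [Arguments]    ${data}
    Log    Process ${data} here    # the part under test
Validate
    Log    Validate the result here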

If you run the test suite with Pabot instead of robot, using --testlevelsplit, you get P executed in parallel: GitHub - mkorpela/pabot: Parallel executor for Robot Framework test cases.
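(The invocation would then be something like pabot --testlevelsplit transactions.robot instead of robot transactions.robot.)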

Regards, Markus

No. R, P, and V all need to be run N times; the P is the service under test.
In my situation I have a service that periodically scans for new files in a directory. The scan period is typically a few seconds.
With N transactions to test, the most efficient way would be to generate and save all N transactions to the directory first, let the service process them, and then validate the results.
I could do it with Pabot, but won’t that cause issues in other suites and test cases that cannot be run in parallel?
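In the meantime, the closest I can get with plain sequential robot seems to be turning it around: send all N transactions in Suite Setup and let every test case only validate its own result. A sketch, using the keyword names from my example below (the data set values are placeholders):

*** Settings ***
Suite Setup       Send All Transactions
Test Template     Validate Transaction Result

*** Variables ***
@{DATASETS}       dataset1    dataset2    dataset3    # ... up to data set 100

*** Test Cases ***
Validate dataset1    dataset1
Validate dataset2    dataset2
Validate dataset3    dataset3

*** Keywords ***
Send All Transactions
    FOR    ${data}    IN    @{DATASETS}
        Generate And Send Transaction    ${data}    # all R parts, back to back
    END

That batches the requests with a single waiting period, but a failed request then aborts the whole suite setup instead of failing one test case, which brings me back to my actual suggestion.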

An example of what I have in mind:

| Test Transaction That Must Succeed
| | [Template] | Generate And Send Transaction | Validate Transaction Result
| | Data set 1
| | Data set 2
| | …
| | Data set 100

This should call “Generate And Send Transaction” for Data Set 1 to 100, then “Validate Transaction Result” for Data Set 1 to 100. (A test that has failed during the first keyword should not be called with the second keyword.)
This would allow all 100 transactions to be generated and sent to the SUT, and subsequently validated, with only 1 or 2 waiting periods instead of 100.

Pseudo code for this could be:

Test Set-up
for Keyword in Template:                 # outer loop: one pass per pipeline stage
    for Data-set in Data-series:         # inner loop: every data set per stage
        if State[Data-set] != Failed:    # skip data sets that already failed an earlier stage
            State[Data-set] = Keyword(Data-set)
Test Tear-down
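For completeness: with the FOR/IF syntax of Robot Framework 4+, this pipelining can already be scripted inside a single test case, but then all data sets collapse into one test in the report, which is exactly what I want native support to avoid. A sketch with assumed data set values:

*** Settings ***
Library    Collections

*** Test Cases ***
Pipelined Transactions
    ${stages}=      Create List    Generate And Send Transaction    Validate Transaction Result
    ${datasets}=    Create List    dataset1    dataset2    dataset3
    ${failed}=      Create Dictionary
    FOR    ${stage}    IN    @{stages}
        FOR    ${data}    IN    @{datasets}
            IF    $data not in $failed
                ${status}    ${msg}=    Run Keyword And Ignore Error    ${stage}    ${data}
                IF    $status == 'FAIL'
                    Set To Dictionary    ${failed}    ${data}=${msg}    # remember the first failing stage
                END
            END
        END
    END
    Should Be Empty    ${failed}    Failed data sets: ${failed}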

I fear the complexity of Pabot would make it difficult to maintain a large hierarchy of test suites. The --testlevelsplit option seems just right for this isolated situation, but if I understand it correctly, the test cases of each and every suite are then run in parallel (with suites in sequence). Not all our test cases (within a suite) can be run in parallel. I will look a bit deeper into Pabot to consider possibilities and consequences…

Still, my suggestion would provide sequential testing, but allow for parallelism (or batching) in the system under test.

~Per